Artificially Intelligent Bullies: Dealing with Deepfakes in K–12 Schools

Brian A. Jackson, Melissa Kay Diliberti, Pauline Moore

Research | Published September 24, 2025


With the spread of artificial intelligence (AI) tools (from large language models and chatbots to image generators), K–12 schools face the challenge of dealing with both the good and the bad of such technology.⁠[1] Although the effects of AI on schools are still emerging, one area in which AI has already changed the game is cyberbullying. Specifically, students’ ability to use these tools to create fake, often inappropriate images and video of students or school staff—called deepfakes—has altered the school safety landscape in a significant way.⁠[2]

Sharing online images with the intent to cyberbully has been a concern for K–12 schools for some time. Sexting, for instance, in which youth consensually share explicit pictures of themselves with others, has been a growing concern nationwide as images have later been shared widely and caused serious harm to the students involved.⁠[3] Although a core response to sexting has been to teach students not to send any such pictures, AI’s capability to create deepfakes means that even students who follow that advice could still experience bullying when others create explicit imagery that is not real.

These incidents, often described as a new type of bullying, are a growing concern for schools. And the images can be very convincing: A survey conducted by the EdWeek Research Center in September 2024 showed that 67 percent of school staff “believed that their students had been misled by a deepfake,” and 50 percent said that teachers or administrators had been similarly deceived.⁠[4]

K–12 School Leader Experience with Deepfake Cyberbullying

How prevalent are deepfakes in cyberbullying incidents? In a RAND American School Leader Panel survey of a nationally representative sample of K–12 school principals, fielded in October 2024,⁠[5] we found that 13 percent of principals reported incidents of bullying that involved AI-generated deepfakes during the 2023–2024 and 2024–2025 school years (see Figure 1).

Such incidents were significantly more common in middle and high schools: 22 percent of high school principals and 20 percent of middle school principals reported such cases, compared with 8 percent of elementary school principals. For these older student populations, essentially one in five schools has had to deal with recent bullying and victimization using AI-generated images or video. Other surveys of students and teachers have found even higher rates of deepfake incidents in K–12 schools.⁠[6]

Figure 1. Percentage of Schools Reporting Bullying via AI-Generated Deepfakes, 2023–2024 and 2024–2025 School Years

  • All schools: 13 percent
  • Elementary school (reference group): 8 percent
  • Middle school: 20 percent*
  • High school: 22 percent*

NOTE: This figure depicts response data from the following survey question administered to school principals using the RAND American School Leader Panel in October 2024: “At any point over the last school year (2023–2024) and/or in this school year (2024–2025), has your school experienced incidents of bullying using artificial intelligence image creators, deep fakes, or other tools used to create false images/photos or false online identities/social media accounts for students or staff?” (n = 957). An asterisk (*) indicates that the percentage of secondary (middle or high school) principals who indicated that their schools experienced bullying via AI-generated deepfakes is statistically significantly different from the percentage of elementary principals who responded similarly.

How Schools Are Responding

Even though deepfake-related incidents are still not an everyday part of bullying, principals’ responses to our survey suggest that schools are taking such incidents seriously. Among the schools we surveyed that had experienced such incidents (see Figure 2),

  • 79 percent took disciplinary actions against those involved
  • 66 percent referred the incidents to law enforcement—involving law enforcement was more likely if the principal reported that their school had school resource officers or other law enforcement personnel present at their school
  • 47 percent provided education and training to staff and students on recognizing deepfakes and responsibly using AI tools.

Interestingly, only 23 percent of schools represented in our survey reported updating their policies to include specific clauses about AI misuse. This could indicate that existing policies already address these issues or that schools are still in the process of developing AI-specific guidelines.

Figure 2. Schools’ Responses to Deepfake-Related Incidents, as of Fall 2024

  • 79% of schools took disciplinary actions against the involved individual(s).
  • 66% engaged law enforcement to investigate.
  • 47% educated or trained the school community, including students and staff, about these types of incidents, how to recognize them, and appropriate use of generative AI tools.
  • 23% updated school or districtwide policy about technology misuse with specific clauses about AI.

NOTE: This figure depicts response data from the following survey question administered to school principals using the RAND American School Leader Panel in October 2024: “How did your school or district respond to these types of incidents?” (n = 120). Respondents were instructed to select all that applied. Only principals in schools that reported that they had experienced deepfake-related incidents in the 2023–2024 and/or 2024–2025 school years saw this question.

Other work suggests that there is still a significant gap when it comes to training on this emerging safety challenge. In the same EdWeek survey cited earlier, more than two-thirds of school staff reported receiving no training on deepfakes or rated the training they received as poor or mediocre.⁠[7] This lack of preparedness highlights the need for schools to invest in comprehensive training programs and resources to address the challenges posed by AI.

Preparing Schools to Respond to Artificially Intelligent Bullying

Although only about one in ten principals who took our survey indicated that they had already responded to cyberbullying involving deepfakes, rapid improvements in AI technology and its growing use mean that number could increase in the future. This technology directly undermines a key strategy that educators and others have promoted to address online victimization—specifically, not taking or sharing explicit images—and increases the need for schools to take a proactive approach to further strengthen protections.

The large percentage of our respondents who took disciplinary action in response to deepfake incidents or involved police shows that schools are taking this behavior seriously. Developing approaches to support victims of deepfakes—both students and staff—is needed to minimize harm after incidents occur.⁠[8] Identifying effective approaches to deter students from this behavior—whether through policy changes, communication with students about the consequences of making and sharing deepfakes, or other strategies—could pay significant dividends by addressing the issue before the damage is done.⁠[9]

Acknowledgments

We are extremely grateful to the educators who have agreed to participate in the panels. Their time and willingness to share their experiences are invaluable for this effort and for helping us understand how to better support their hard work in schools. We thank Lisa Wagner and Brian Kim for assisting with survey management; Gerald P. Hunter and Ruolin Lu for data management; and Tim R. Colvin, Roberto Guevara, and Julie Newell for programming the survey. Thanks also go to Claude Messan Setodji and Dorothy Seaman for producing the sampling and weighting for these analyses. We greatly appreciate the administrative support provided by Tina Petrossian and AEP management provided by David Grant. We also thank Meagan E. Cahill and Aaron C. Davenport for their review and feedback, which helped improve our work. Finally, we are grateful for Cindy Lyons’ efforts in overseeing the publication of this report.

Notes

  1. Pauline Moore, Brian A. Jackson, and Melissa Kay Diliberti, A New Agenda for School Safety Research: Insights from a RAND Roundtable Discussion, RAND Corporation, PE-A3811-1, April 2025, https://www.rand.org/pubs/perspectives/PEA3811-1.html.
  2. Kalie Walker, “AI ‘Deepfakes’: A Disturbing Trend in School Cyberbullying,” National Education Association, April 10, 2025, https://www.nea.org/nea-today/all-news-articles/ai-deepfakes-disturbing-trend-school-cyberbullying.
  3. Jeff R. Temple, Victor C. Strasburger, Harry Zimmerman, and Sheri Madigan, “Sexting in Youth: Cause for Concern?” Lancet Child and Adolescent Health, Vol. 3, No. 8, August 2019, https://www.thelancet.com/journals/lanchi/article/PIIS2352-4642(19)30199-3/abstract.
  4. Olina Banerji, “Why Schools Need to Wake Up to the Threat of AI ‘Deepfakes’ and Bullying,” Education Week, December 9, 2024, https://www.edweek.org/technology/why-schools-need-to-wake-up-to-the-threat-of-ai-deepfakes-and-bullying/2024/12.
  5. The RAND School Leader Panel is part of the American Educator Panels (AEP), which are nationally representative samples of teachers, school leaders, and district leaders across the country. The AEP are a proud member of the American Association for Public Opinion Research’s Transparency Initiative.
  6. Elizabeth Laird, Maddy Dwyer, and Kristin Woelfel, In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K–12 Schools, Center for Democracy and Technology, September 2024, https://cdt.org/insights/report-in-deep-trouble-surfacing-tech-powered-sexual-harassment-in-k-12-schools/.
  7. Banerji, “Why Schools Need to Wake Up to the Threat of AI ‘Deepfakes’ and Bullying.”
  8. Kara Arundel, “Schools Lack Supports for Victims of Sexually Explicit Deepfake and Real Images,” K–12 Dive, September 26, 2024, https://www.k12dive.com/news/schools-deepfake-images-student-supports/728107/.
  9. Laird, Dwyer, and Woelfel, In Deep Trouble.

Citation

Jackson, Brian A., Melissa Kay Diliberti, and Pauline Moore, Artificially Intelligent Bullies: Dealing with Deepfakes in K–12 Schools. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/research_reports/RRA3930-5.html.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.