A Growing Concern Across U.S. and U.K. Schools

A newly released study reveals that 41% of schools in the United States and the United Kingdom have experienced AI‑related cyber incidents, highlighting a rising challenge for educational institutions adapting to artificial intelligence in the classroom. These incidents range from phishing attacks to harmful student‑generated AI content such as deepfakes.

The survey, conducted by TrendCandy for Keeper Security, compiled responses from 1,460 education administrators. Of the schools that experienced incidents, 11% reported significant disruption, while another 30% said the incidents were contained quickly. Even so, the data suggests that many schools lack the visibility needed to fully understand the scope of AI‑driven risks.

“The same tools that help a student brainstorm an essay can also be misused to create a convincing phishing message or even a deepfake of a classmate.”

— Anne Cutler, Keeper Security Cybersecurity Evangelist

Why AI Threats Are Difficult to Manage

According to the study, 82% of schools believe they are at least “somewhat prepared” for AI‑related threats, but only 32% feel “very prepared.” Experts warn that this gap signals uncertainty around the effectiveness of current safeguards. Many education leaders worry that they cannot reliably distinguish between legitimate AI use and activity that introduces risk.

David Bader, director of the Institute for Data Science at NJIT, notes that schools have historically been under‑resourced in cybersecurity. With AI tools spreading rapidly and often without institutional oversight, the attack surface has expanded dramatically. Bader adds that the real number of AI‑related cyber incidents is likely higher than reported, because many events go undetected entirely.

Examples of AI‑Related Cyber Threats in Schools

  • Phishing Messages: AI models generate convincing emails that lure staff or students into sharing sensitive information.
  • Deepfake Content: Students use AI to create harmful or misleading videos of peers.
  • Automated Attack Support: AI tools scan school networks for weaknesses at high speed.

Schools Must Strengthen Their AI Safeguards

Experts agree that the rapid expansion of AI tools in education requires a parallel investment in cybersecurity. As Bader explains, schools are often hit with attacks before they have the opportunity to build protective frameworks. Without more robust monitoring and clearer policy guidance, institutions may struggle to keep pace with evolving threats.

With AI now deeply embedded in educational practices, leaders must rethink digital safety as a core component of school operations. While awareness of AI risks is high, genuine preparedness remains uneven—and the stakes continue to rise.