The conversation about AI for schools is starting to split into two tracks that are both true.
One track is optimism. Governments and big tech are launching programs to "bring AI into classrooms" and make their countries more competitive. The other track is caution. People are asking whether we're moving faster than our safeguards, especially for kids.
This week's headlines captured that tension pretty well.
The US is pushing hard on AI in schools
The White House launched a new initiative aimed at accelerating AI in US schools, with high-profile backing and promises from major tech companies.
If you've worked in education for a while, you know the hard part isn't the announcement. It's the rollout.
AI only becomes "real" when it lands inside teacher workload, assessment, procurement, policy, parent trust, and student behaviour. That's where things either stick or quietly fade out.
Read more: Melania Trump and artificial intelligence in schools
Meanwhile, the security people are waving a big red flag
Trail of Bits published research showing a new prompt injection pathway for multimodal AI systems: instructions can be hidden in an image so that they only become visible after the system scales it down. The original looks harmless, but preprocessing can reveal the payload.
This matters for education because schools are moving toward image-heavy workflows (photos of worksheets, screenshots, scans, whiteboards). If AI becomes a core part of school workflows, the whole pipeline matters, not just whether the model itself is safe. The sketch below shows one concrete implication.
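To make the pipeline point concrete, here is a minimal Python sketch of one defensive idea that follows from the research: run safety checks (OCR, content filters, human review) on the image after it has been through the same downscaling the model's preprocessing applies, not on the original upload. Everything specific here is an assumption for illustration: the Pillow library, the 512x512 target, the bicubic filter, and the filenames are placeholders, not a description of any real product's pipeline.

```python
from PIL import Image  # Pillow: pip install Pillow

# Hypothetical values: in practice, pull the target size and resampling
# filter from the actual serving pipeline's config so they match exactly.
TARGET_SIZE = (512, 512)

def as_the_model_sees_it(path: str) -> Image.Image:
    """Reproduce the pipeline's downscaling step, so checks run on the
    same pixels the model receives, not the full-resolution upload."""
    img = Image.open(path).convert("RGB")
    return img.resize(TARGET_SIZE, resample=Image.BICUBIC)

# "worksheet_scan.png" is a placeholder filename.
scaled = as_the_model_sees_it("worksheet_scan.png")
scaled.save("worksheet_scan_as_seen.png")  # inspect or OCR this version
```

The design point is simple: if preprocessing can change what an image "says", moderation has to happen after preprocessing. Otherwise there is a gap between what a reviewer sees and what the model sees, and that gap is exactly where the hidden payload lives.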
Read more: Weaponizing image scaling against production AI systems
"AI-enabled schooling" is a real vision, but it's not a neutral one
There's a growing genre of content that says the traditional classroom is outdated and AI can rebuild learning around personalised pathways. That vision might end up partly right, but it also changes the role of teachers, the shape of peer learning, and the commercial incentives inside schooling.
If you're watching this space, I think the main question is not "does it work in a demo?" but:
What happens when it hits real schools, with real constraints, and real kids?
Watch: AI-enabled schooling vision
Policy and literacy are the boring parts that decide everything
The Tony Blair Institute paper is one example of a system-level response: treat AI literacy as a foundation, not an optional extra, and back it with training, devices, and consistency across schools.
I'm increasingly convinced AI for schools will be won or lost in the "boring layer":
- teacher capability and confidence
- clear expectations for students
- consistent policy and procurement
- transparent evidence of impact
- privacy and governance that parents can understand
Read: Generation Ready - building the foundations for AI proficient education in England's schools
Big tech in the classroom is not automatically good or bad
There's also a fair critique emerging that a lot of edtech is shaped by commercial incentives more than evidence, and that schools can end up buying "engagement mechanics" rather than learning gains. That critique is getting sharper as AI tools become more embedded.
Read: Big tech in the classroom
My take
AI in schools will keep accelerating. But announcements and pilots don't matter much on their own.
What matters is whether systems can do four things at once:
- Protect learning (so AI supports thinking instead of replacing it)
- Protect kids (safety, privacy, age-appropriate design)
- Protect teachers (practical workload relief, not extra admin)
- Prove impact (evidence that stands up beyond marketing)
If we get those right, AI for schools becomes one of the biggest equity and capability levers education has had in decades.
If we get them wrong, we'll get very polished progress theatre and a lot of regret.

