AI for schools has mostly been framed as a productivity story. Faster lesson planning, quicker feedback, better differentiation, less admin. That's all real. But this week I found myself thinking about something else.
The way AI behaves, the way it speaks, and the way people relate to it are quickly becoming a safety issue. Not in a sci-fi sense. In a very normal, everyday sense. Kids forming attachments. Parents using bots to reduce conflict at home. Exams changing because cheating has levelled up. Even public pools using AI to spot danger faster than humans can.
This post is a collection of threads that all point to the same thing. We're moving from "should we use AI" to "how do we use it without breaking the human bits that matter".
When a chatbot sounds like a best friend, that changes the risk profile
Researchers tested two types of chatbots with 284 teenagers and their parents.
- One used friendly language like "I am here for you".
- The other was transparent and kept reminding users it was just a machine.
Most teenagers (67%) preferred the friendly AI. They rated it as more trustworthy and easier to talk to. Parents were less keen, but the friendly bot still won overall (54%). Teenagers also felt closer to the friendly bot, scoring it 3.6 for emotional closeness compared to 2.8 for the transparent one. Interestingly, they rated both bots as about equally helpful.
The part that really matters for schools is who chose what.
The kids who were struggling the most were more likely to pick the friendly "best friend" bot. Those kids had lower family relationship scores (47 vs 52) and higher anxiety levels (51 vs 45) than those who picked the transparent bot.
That's the big signal. Conversational style isn't just "UX". It shapes attachment and trust. And the kids most at risk may be the most drawn to bots that feel human.
A few important caveats: the score gaps are statistically significant but small, we don't know whether this preference makes things better or worse long-term, and the study used imaginary scenarios rather than long-term real-world use. Still, the pattern is hard to ignore.
For AI for schools, it suggests a simple but powerful safety design principle: make it unmistakably clear that the system is non-human, especially in contexts involving vulnerable students. That clarity may reduce emotional dependency without reducing usefulness.
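To make that principle concrete, here's a minimal sketch of what it could look like in practice. Everything in it is my own illustration, not something from the study or any real product: the prompt wording, the reminder cadence, and the function name are all assumptions.

```python
# Toy sketch only. Shows one way a school-facing chatbot could be configured
# to stay transparently non-human rather than drifting into "best friend"
# territory. Wording, cadence, and names are assumed, not from the study.

TRANSPARENT_SYSTEM_PROMPT = (
    "You are a study-support tool, not a person. Do not claim feelings, "
    "friendship, or availability ('I am here for you'). Encourage the "
    "student to talk to a trusted adult about anything personal."
)

REMINDER = "Reminder: I'm an automated tool, not a person."
REMINDER_EVERY_N_TURNS = 5  # assumed cadence; would need testing with real students


def add_transparency(reply: str, turn_number: int) -> str:
    """Append a periodic non-human reminder to a model reply."""
    if turn_number % REMINDER_EVERY_N_TURNS == 0:
        return f"{reply}\n\n{REMINDER}"
    return reply


if __name__ == "__main__":
    for turn in range(1, 7):
        print(turn, add_transparency("Here's how to structure your revision...", turn))
```

The point isn't this exact mechanism. It's that transparency can be a deliberate, testable design choice rather than an afterthought.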
https://arxiv.org/pdf/2512.15117
AI is showing up as a safety layer in the physical world too
Around 120 public pools across Australia are now using AI to help keep swimmers safe. The system watches swimmers using cameras and alerts a lifeguard's smartwatch if it detects someone struggling or underwater for too long.
It's reportedly already helped save lives in Perth and Sydney.
This is the kind of AI story that is easy to like because it's concrete. It doesn't pretend to be human. It doesn't try to "bond" with anyone. It just does pattern detection at speed, and it helps a human do their job faster. That's a useful frame for AI for schools too. The best uses often look like "human + tool", not "tool replacing the human".
https://www.abc.net.au/news/2025-12-31/ai-system-to-help-lifeguards-in-pools-swim-safety/106172140
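For what it's worth, the core pattern is simple enough to sketch. This is a toy illustration only, not the vendor's actual system (which isn't public): the detection model, threshold, and names are all assumed.

```python
# Toy illustration of the "human + tool" pattern, not the real pool system.
# A vision model upstream is assumed to track each swimmer; this bit just
# flags anyone underwater too long so a human lifeguard can decide.

from dataclasses import dataclass

UNDERWATER_ALERT_SECONDS = 10.0  # assumed threshold, purely illustrative


@dataclass
class SwimmerTrack:
    swimmer_id: int
    seconds_underwater: float  # produced upstream by the (assumed) vision model


def swimmers_to_flag(tracks: list[SwimmerTrack]) -> list[int]:
    """Return the IDs the tool would flag; the lifeguard still makes the call."""
    return [t.swimmer_id for t in tracks if t.seconds_underwater >= UNDERWATER_ALERT_SECONDS]


if __name__ == "__main__":
    tracks = [SwimmerTrack(1, 2.5), SwimmerTrack(2, 12.0)]
    print(swimmers_to_flag(tracks))  # -> [2], pushed to the lifeguard's watch
```

The design choice that matters is the last step: the tool flags, the human decides.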
Homework supervision is being outsourced to bots, and the reason is emotional, not technical
Parents in China are using AI chatbots (including ByteDance's Dola) to supervise children's homework through phone cameras. These apps monitor posture, check focus, and provide automated tutoring, with monthly users reported at around 172 million.
The practical driver is obvious: private tutoring is expensive, and working parents are stretched. But the detail that stuck with me was this: many families use the technology to reduce conflict, because the AI gives instructions in a calm tone that avoids the tension of parent-led study.
That's a very real signal about what families are experiencing at home. And it links directly back to AI for schools. If school systems don't provide guidance, routines, and shared language around AI, families will fill the gap themselves with whatever tools are easiest and most emotionally frictionless.
Assessment is changing because AI cheating is getting too good
The ACCA is scrapping remote exams starting this March, largely because AI-powered cheating has become too sophisticated for their current safeguards.
This matters for schools even if you're not running high-stakes credentialing. It's the same underlying trend. When the cost of "producing plausible work" collapses, assessment has to shift toward authenticity, process, and judgement.
For AI for schools, that usually means a mix of:
- more in-class performance and oral explanation
- more drafting and evidence of process
- more tasks that require local context, personal reflection, or real-world data
- clearer rules about when AI is allowed and what must be disclosed
None of this is about banning tools. It's about designing learning experiences where thinking still matters.
https://www.theguardian.com/business/2025/dec/29/uk-accounting-remote-exams-ai-cheating-acca
Software development is being refactored and education is next
That Andrej Karpathy thread about a "magnitude 9 earthquake" in software development has been sitting in my head. His point about the developer's "bits" becoming sparse rings true if you've been trying to ship anything lately. The code is one part of the puzzle. The container around it is the real grind: the tooling, tickets, process, environments, approvals, documentation, and endless translation between humans.
A few reasons the SDLC feels like it's glitching right now:
- Agile was designed for the slow manual process of writing code. When AI can generate features in minutes, a two-week sprint can feel like a lifetime.
- If you follow the SDLC faithfully, "process work" can become most of the work. Managing the work starts to outweigh doing the work.
- Requirements gathering becomes slow and unrewarding when systems can fill in blanks and iterate instantly.
- Strict requirements can squash better ideas, including optimisations the AI surfaces that the team didn't consider.
- We now have AI writing documentation so other AI can read it later, with humans as observers in a recursive loop.
My prediction for 2026 is that we'll see new "agentic SDLC" patterns that prioritise streams over sprints and continuous sense-making over stage gates.
Why include this in a post about AI for schools? Because schools are also a system full of process, structure, gates, and rituals. AI will challenge the bottlenecks the same way. Not just "how do we teach with AI", but "which parts of schooling are actually the work, and which parts are the container around the work".
I still want the human parts to survive, even when AI can smooth everything out
I also caught myself thinking about travel, which seems unrelated until you zoom out.
I travel to disconnect from day-to-day life, but that doesn't mean shutting the world out. It's about connecting with people, cultures, and experiences in a way that feels real. Technology barely plays a part in my trips, and I hope it stays that way.
An ABC News piece talked about robots and AI "taking over" the travel industry, including hotels where you never have to talk to a human and itineraries that remove all friction. Skipping a long line sounds great, but the friction is often where the best stories happen. Getting a bit lost, a confusing interaction with a local, the tiny surprises. That's the point.
This is the same risk pattern schools face. If we use AI to smooth every wrinkle in learning, we might accidentally remove the struggle that creates growth. If AI for schools becomes a world where there is never any productive friction, we may get compliance without understanding.
AI can help with the boring logistical parts. I just don't want it to replace the human parts.
https://www.abc.net.au/news/2025-12-27/tourist-new-technology-ai-travel-smart-tourism/105956560
The point I keep coming back to
AI for schools is moving beyond "tools" and into "relationships".
- How AI talks changes how kids relate to it.
- Who bonds with it most may be the students who are already struggling.
- Safety isn't only about filtering content. It's also about design choices like tone, identity, and boundaries.
- Meanwhile, AI is proving itself in roles where it supports humans rather than replacing them, like pool safety and certain forms of monitoring.
- And the systems around learning (assessment, workflows, home supervision) are being forced to adapt fast.
The opportunity is huge, but the job now is more than adoption. It's shaping. The way we design, deploy, and explain these tools will decide whether AI for schools strengthens learning and wellbeing, or quietly pulls them apart.

