Two stories landed for me this week, and they sit in tension with each other.
One is South Australia's EdChat Insights Report. It's a rare, practical window into what happens when AI is used in real schools, by real students and teachers, over a meaningful period of time.
The other is South Korea's AI textbook experiment, which was rolled back after just four months. Same broad ambition. Very different outcome.
If you care about AI for schools, the lesson is pretty clear. The technology matters, but the rollout matters more. Trust, pace, design, and support are the difference between a tool that helps and a program that gets rejected.
South Australia's EdChat report is the kind of evidence we need more of
Somehow I missed this earlier, so thanks to Ray Fleming and Dan Bowen for flagging it on the AI in Education Podcast.
The SA EdChat Insights Report (July 2023 to June 2024) is packed with useful signals, not just vibes.
A few standout insights:
- 4.5k active school-based users and 450k prompts, with uptake growing more than sixfold in the first half of 2024
- 4k active students, with 94% of interactions linked to curriculum content, and more than half engaging across multiple subjects
- 500 active educators, with 95% of their use connected to administration and curriculum management
- System-wide adoption hitting 40% of eligible users, with balanced participation between students and educators
- Usage that mirrors real school rhythms, peaking at the end of terms and continuing into evenings
- Early evidence of deeper learning, with over a third of prompts reflecting higher-order thinking, and students in a focused study each demonstrating complex reasoning at least once
That combination is what I like. You see where the value actually shows up. Students using it for curriculum learning. Teachers using it heavily for the admin and planning load. And you start to see signals about thinking, not just "time saved".
Report link: South Australia EdChat Insights Report
South Korea shows what happens when speed beats trust
Then there's South Korea.
Their AI-powered textbook program was meant to personalise learning, reduce inequality, and ease teacher workload. Instead, it ran into the classic failure modes:
- rushed rollout with limited testing
- not enough educator input
- complaints about errors and privacy risks
- workload increasing rather than dropping
- adoption dropping from 37% to 19% in one semester
- the textbooks downgraded from "official" to "supplementary"
Politics didn't help either, with a change of government adding pressure to an already tense deployment.
This is the part people underestimate. Embedding AI in classrooms at scale is hard. Teachers need time, evidence, and confidence before it becomes normal practice. If any one of those is missing, the whole program becomes brittle.
Link: Read more: South Korea AI textbook rollback
A quick detour into enterprise agents because the patterns are the same
I listened to an AI Daily Brief episode that summarised findings from more than 1,000 executives on AI agent adoption.
Different sector, but the same story.
Success comes from choosing focused, high-impact use cases, having a plan for data quality, and investing in people change.
Key takeaways that map surprisingly well to school systems too:
- quick wins often come from internal support bots and knowledge search
- documentation gaps are common and slow everything down
- "sandbox with guardrails" governance shows up in mature programs
- executive sponsorship and clear champions matter
- employee bandwidth is often the real constraint, not budget
The episode suggested 2026 will be the year of context and ROI.
In schools, that translates to something like this: stop chasing shiny pilots. Pick a few use cases that teachers actually want. Build the foundations. Then scale what works.
Link: Watch: AI Daily Brief on enterprise agents
Why I'm uneasy about AI detectors being used as judgement tools
I also read a strong paper on AI detectors in education.
The central argument is blunt: detector outputs are probabilistic and can't be verified in real settings, which creates due process problems when they're used to judge student work.
A few points that feel especially important:
- results can't be independently verified in real conditions
- "AI hallmarks" and stacking detectors often creates confirmation bias
- even a low false positive rate tells you little without knowing the base rate (see the quick sketch after this list)
- the human-vs-AI binary doesn't reflect reality: work is often created with AI, not by AI
- submitting student work to third-party detectors can create privacy and security risks
- integrity processes require evidence, and detector scores don't meet that bar
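To make the base-rate point concrete, here's a minimal sketch of the arithmetic. The false positive rate, detection rate, and base rates below are hypothetical numbers I've picked for illustration, not figures from the paper; the calculation is just Bayes' rule.

```python
# Back-of-envelope Bayes' rule. All numbers here are hypothetical,
# chosen only to illustrate the base-rate effect; none come from the paper.

def p_ai_given_flag(tpr: float, fpr: float, base_rate: float) -> float:
    """Probability that a flagged submission is actually AI-generated."""
    p_flag = tpr * base_rate + fpr * (1 - base_rate)  # total probability of a flag
    return (tpr * base_rate) / p_flag                 # Bayes' rule

# A detector with a 1% false positive rate and a 90% detection rate:
print(f"5% of work AI-generated: {p_ai_given_flag(0.90, 0.01, 0.05):.1%}")  # ~82.6%
print(f"1% of work AI-generated: {p_ai_given_flag(0.90, 0.01, 0.01):.1%}")  # ~47.6%
```

In the second case, more than half of the flagged students would be innocent, even though the detector's headline accuracy sounds impressive. That's the due process problem in two lines of arithmetic.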
This is one of those areas where schools need calm policy and strong principles. If we get this wrong, we'll create a culture of mistrust that poisons good AI use.
Paper link: Read the paper on AI detectors in education
A small personal update on access
Today I expanded free access to all teachers in WA, SA, TAS and ACT.
Please share it with your teacher networks if you know people who'd find it useful. And if you try it, tell me what feels good and what feels annoying. The feedback loop matters.
The future keeps arriving in parallel
And then, because the world is the world, Figure released Figure 03, its third-generation humanoid robot.
It's aimed at practical operation in everyday environments: better sensing, tactile hands, improved safety, a domestic-friendly design, and a manufacturing path through BotQ.
It's not directly a schools story, but it's a reminder that AI isn't just a screen. The boundary between digital systems and the physical world is thinning.
Link: Watch: Figure 03 humanoid robot announcement
Where I'm sitting after all this
The SA EdChat report is encouraging because it shows real adoption and real patterns of value.
South Korea is a warning because it shows how quickly trust collapses when rollout is rushed and workload rises.
And the detectors paper is a reminder that governance isn't optional. Especially when decisions affect students.
For AI for schools, the path forward feels pretty clear:
- move at the speed trust can handle
- invest in teacher capability, not just tools
- design for curriculum alignment and higher-order thinking, not shortcuts
- avoid integrity theatre and focus on evidence-based approaches
- scale what works, based on data, not hype
If we do that, AI can genuinely lift learning and reduce gaps.
If we don't, we'll still get change, just not the kind we want.

