AI for schools in 2026 might be less about hype and more about friction, trust, and boundaries
25 November 2025

Dan Hart

CEO, Co-Founder, CurricuLLM

Peak hype? Maybe.

Macquarie Dictionary just announced "AI slop" as its word of the year. It's funny, but it also captures something real. A lot of the public conversation has shifted from wonder to fatigue. People are seeing more low-effort content, more synthetic noise, and more "AI everywhere" experiences that don't actually make life better.

For AI for schools, this matters. Because schools don't get to opt out of the wider culture. Students and teachers are swimming in the same information environment as everyone else. If the outside world is getting noisier, education needs to get clearer.

This week's links all orbit the same idea for me: the next phase won't be won by the fanciest model. It'll be won by the organisations (and schools) that design for trust, build useful boundaries, and keep enough friction in the right places.

"AI slop" is a sign the vibe has changed

When a dictionary picks "AI slop" as word of the year, it's not just a meme. It's a cultural signal.

It says people are noticing the downside of easy generation. The flood of content. The weird sameness. The low-cost output that feels like it was made to fill space, not to help anyone.

In education terms, it's a reminder that students need stronger media literacy than ever. Not just "spot misinformation", but "spot low-quality content". Spot persuasion. Spot emptiness disguised as confidence.

Read more: Macquarie Dictionary Word of the Year for 2025

Image models getting better at text is cool, and also a new risk surface

If you've been playing with Nano Banana Pro (like I have), you've probably noticed how quickly image models have improved. One of the biggest changes is readable text.

Early diffusion models produced blurred or garbled lettering. Newer approaches pair strong language models with redesigned diffusion processes, and the result is clear, accurate text inside generated images.

That's genuinely impressive.

It also changes the education problem. When images can contain perfect-looking text, a student can generate posters, "evidence", fake notices, fake screenshots, and polished artefacts faster than most teachers can respond. This is not a reason to panic. It's just a reason to update what we teach and how we assess authenticity.

The best enterprise AI wins look boring on the surface, and that's a compliment

Chemist Warehouse's expansion of an AI-driven shared-inbox tool is a great example of practical deployment.

It started in HR. The tool drafts responses to common questions from more than 20,000 staff. HR advisors review the drafts before sending them. That human review step is the point. It keeps accuracy and judgement in place, while still removing a lot of repetitive work.

A few details I liked:

  • HR acted as an incubator, shaping a pattern other functions can reuse
  • the tool draws on a growing knowledge bank created by documenting internal processes
  • the team invested real effort into defining what the AI should avoid responding to
  • staff report having more time for coaching, support, investigations, and compliance
  • the approach becomes a reusable shared-inbox automation framework

This is the pattern I trust most. Document the tribal knowledge. Design the boundaries. Keep humans responsible. Let AI do the drafting and triage, not the final call.

Schools can learn from this. AI for schools doesn't have to start in classrooms. Some of the best wins will be in the "shared inbox" parts of schooling: parent queries, enrolment questions, policy lookups, support tickets, and the everyday admin that steals attention from learning.
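
To make that pattern concrete, here's a minimal sketch in Python. None of this is Chemist Warehouse's actual system: the knowledge bank, the blocked-topic list, and the triage function are all hypothetical stand-ins for whatever your school already runs.

```python
# A toy sketch of the draft-then-review pattern. Every name here is
# hypothetical: swap in your own documented processes and boundaries.

BLOCKED_TOPICS = {"child protection", "medical advice", "legal dispute"}

KNOWLEDGE_BANK = {
    "enrolment": "Enrolment applications open in Term 3; see the enrolment policy on the portal.",
    "uniform": "The uniform policy is on the parent portal under Policies.",
}

def triage(query: str) -> dict:
    """Triage a shared-inbox query: escalate or draft, but never auto-send."""
    text = query.lower()

    # Boundaries first: topics the AI should not touch go straight to a person.
    if any(topic in text for topic in BLOCKED_TOPICS):
        return {"action": "escalate", "draft": None,
                "reason": "blocked topic; a human handles it from scratch"}

    # Draft only from documented internal knowledge, not model guesswork.
    for keyword, answer in KNOWLEDGE_BANK.items():
        if keyword in text:
            return {"action": "human_review", "draft": answer,
                    "reason": f"matched documented process: {keyword}"}

    # No documented answer: don't improvise, hand it over.
    return {"action": "escalate", "draft": None, "reason": "no documented answer"}

# Drafts land in a review queue, never an outbox; a person makes the final call.
print(triage("When do enrolment applications open for next year?"))
```

The shape matters more than the code: boundaries are checked before anything is drafted, drafting only happens from documented knowledge, and every path ends with a human.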

Read more: Chemist Warehouse's AI tool for HR becoming a standard pattern

The real issue in education isn't AI, it's care and trust

My colleague Danny Liu shared a thoughtful piece about recent headlines on AI in education. Reports from The Guardian, The New York Times, and New York Magazine show growing tension between students and educators over how generative AI is used.

Danny's framing is the one I keep returning to. The issue isn't AI itself. It's how AI use can signal a lack of care, effort, or trust. Both students and teachers want to feel that learning matters and that work matters, but current systems often make that hard.

Some key points he raises:

  • frustration often comes from workload, pressure, and unclear expectations
  • mistrust grows when AI use feels uneven or hidden
  • responsible use of AI can reduce policing and support agency

This is why policy-only approaches struggle. If a school responds to AI with surveillance and punishment, it usually creates more hiding, not better learning. If the school responds with clarity, shared expectations, and support, you get more honesty and better outcomes.

Regulation is wobbling between speed and safety

The European Commission has proposed changes to simplify its approach to AI regulation while keeping core protections in place, and it's triggered big debate.

The proposals include:

  • delaying parts of the rules covering high-risk AI systems (including education) to December 2027
  • clearer guidance on what counts as anonymous data
  • permission to use large datasets that may contain sensitive info, as long as reasonable steps are taken to remove it
  • reduced documentation requirements for small and medium businesses

Supporters see lower compliance cost. Critics see weakened rights and more citizen data becoming available to big tech. Whatever your view, it shows how hard it is to regulate fast-moving tech without either choking innovation or leaving people exposed.

And education sits right in the middle of this. Schools want tools. Parents want safety. Governments want economic growth. Everyone wants trust.

Read more: EU AI - big tech proposals

The Penn study reinforces something we all intuitively know about learning

A University of Pennsylvania study covered by The Conversation looked at how learning with AI shapes depth of understanding.

Across seven studies with more than 10,000 participants, researchers compared learning through large language models with learning through standard web search. The consistent finding was that relying on an AI summary often led to shallower knowledge, even when the available information was the same.

Some patterns:

  • participants felt they learned less when relying on an AI summary
  • they produced shorter and more generic explanations
  • independent readers found these explanations less useful
  • actively navigating links and sources supported deeper understanding

The researchers suggest the key difference is friction. Clicking, reading varied sources, and forming your own synthesis builds stronger mental models. Receiving a ready-made summary reduces the need for that active process.

This is one of the most useful ideas for AI for schools right now.

We don't need to ban AI summaries. We need to design learning so students still do the thinking. And we might need to build "helpful friction" into tools and tasks so engagement stays real.
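
What might "helpful friction" look like inside a tool? Here's a toy sketch: a summary feature that asks for the student's own synthesis before handing anything over. Everything in it is hypothetical (the word-count gate, the function names), and real friction design would be pedagogical judgement rather than a hard threshold.

```python
# A hypothetical illustration of "helpful friction", not a real product feature.

MIN_ATTEMPT_WORDS = 40  # arbitrary gate, purely for this sketch

def request_summary(topic: str, student_attempt: str, ai_summary: str) -> str:
    """Hold back a ready-made summary until the student has done some thinking."""
    words = len(student_attempt.split())

    if words < MIN_ATTEMPT_WORDS:
        # Friction in the right place: ask for the student's own model first.
        return (f"Before I summarise '{topic}', explain what you already "
                f"understand in your own words ({words}/{MIN_ATTEMPT_WORDS} words so far).")

    # Keep the attempt next to the summary so comparison, not replacement,
    # is the default move, and the teacher can see the student's thinking.
    return (f"Your take:\n{student_attempt}\n\n"
            f"Summary to compare it against:\n{ai_summary}")

print(request_summary("photosynthesis", "Plants turn light into sugar.", "..."))
```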

Read more: Learning with AI falls short compared to old-fashioned web search

The thread tying all this together

To me, "AI slop" is the warning label for the next phase.

The challenge isn't access to generation. Everyone has that now.

The challenge is quality, depth, and trust.

  • Where do we want friction, because friction creates learning?
  • Where do we want smoothness, because smoothness removes pointless admin?
  • What boundaries make AI safe and useful?
  • What expectations make AI use honest rather than hidden?
  • What does "care" look like in an AI-rich classroom?

If we get those design choices right, AI for schools can be genuinely empowering.

If we get them wrong, we'll get a lot of output, a lot of noise, and not much learning.
