AI for schools will create better learning or better theatre, and it depends what we choose to reward
3 October 2025

Dan Hart

CEO, Co-Founder, CurricuLLM

When I was a child, I loved watching Bagpuss.

One episode has stayed with me for years: the "chocolate biscuit machine".

The mice proudly show off a marvellous contraption that turns breadcrumbs and butter beans into chocolate biscuits.

Except it doesn't. Behind the scenes, they are recycling the same biscuit over and over again. The whole thing is theatre. A show to convince the others that real production is happening.

Funny as a kid. Uncomfortably familiar as an adult.

Because workplaces (including schools) can fall into the same trap. New programs, processes, or technologies can create the appearance of progress. Reports are written. Dashboards get filled. Meetings get held. But sometimes it's performance for others to see, rather than genuine improvement.

AI has the power to make this theatre more convincing than ever.

It can generate dashboards, reports, insights, even whole strategies at the press of a button. But if the system is built on breadcrumbs and butter beans, the output will still be recycled biscuits.

The challenge is to look past the performance and ask what is actually being created.

Is the work producing value, or just the illusion of progress?

Watch: The chocolate biscuit machine (Bagpuss)

Work slop is just a new flavour of an old problem

The AI Daily Brief podcast explored the rise of "AI work slop".

Content that looks polished but lacks real value. Slick slides. Long reports. Tidy summaries. Code without context.

The issue isn't really the AI. It's another form of work-theatre, something most of us have seen before. Performance measured by visible activity instead of meaningful outcomes.

The suggestions they shared are the ones I keep coming back to:

  • shift incentives from inputs to outputs
  • cut unnecessary busywork
  • model what good work looks like
  • give people time and support to learn AI tools
  • build a culture of editing and iteration

AI can amplify both noise and value. If we don't change incentives, we'll just manufacture more convincing biscuits.

Watch: AI Daily Brief on work slop

A quick detour into empathy tech and why I'm cautious

There's an interesting piece in The Conversation on how VR and AI could support empathy and social-emotional learning in children.

The research highlights:

  • VR can create immersive scenarios where children interact with emotionally expressive characters
  • AI can adapt the experience in real time, adjusting intensity based on a child's responses
  • together, they can provide "safe" opportunities to practise empathy and emotional regulation

One prototype asked children to comfort characters and see the world through their eyes rather than chase points or badges. Early findings suggested children responded in ways that mirrored real-world empathy patterns.

It's interesting.

But if I'm honest, I'd prefer that this weren't the main solution.

Unstructured play and real-world connection still feel like the better starting point. Tech might help in specific contexts, but we should be careful not to replace the messy, human practice of learning social skills with a simulation because it's easier to scale.

Read more: How VR and AI could help the next generation grow kinder

Visual literacy is now AI literacy too

Generative AI is reshaping how we think about images and visual truth.

Early photography was treated like a mirror of reality. Now we have synthetic images that can look photorealistic while being entirely fabricated.

The Journal of Visual Literacy published research that frames AI image-making as co-production between human input and machine generation. That shifts what "visual literacy" needs to include.

Some of the new literacies they call out:

  • understanding how prompts, data, and interfaces shape outputs
  • recognising default formats, styles, and system biases
  • developing critical awareness of stereotypes and clichés in AI-generated imagery
  • navigating editing tools and technical constraints to refine outputs
  • connecting visual skills with broader media and AI literacy

This matters for schools because we teach kids to read texts critically, but we haven't yet caught up to the fact that images now need the same treatment.

Read the paper: Visual literacy and AI image generation

Software that builds itself is exciting and also a governance problem

Anthropic is experimenting with "Imagine with Claude", where instead of executing pre-written code that describes an interface, Claude generates the interface itself as you interact.

The example shown is playful, but the method hints at something bigger:

  • software generated on the fly, not planned in advance
  • each interaction creates new UI instantly
  • the system adapts based on context rather than a fixed script

This is one of those moments where you can feel the future. Tools that appear when you need them, shaped exactly to your task.

But it also raises a governance question for organisations and schools.

If software becomes ephemeral and personalised, how do we ensure:

  • safety
  • privacy
  • auditability
  • consistent access
  • equity
  • quality control

The "machine that makes tools" can create enormous leverage. It can also create a new kind of invisible complexity that we'll struggle to manage if we don't build guardrails early.

Watch: Imagine with Claude demo

Tutoring is already an equity issue and AI tutoring will make it bigger

Tutoring is now a billion-dollar industry in Australia. More than one in seven students receive extra lessons outside school. Yet tutoring remains largely unregulated.

A paper in The Australian Educational Researcher looked at whether tutoring should be treated as a policy problem and found:

  • major equity concerns, because access depends heavily on family income
  • mixed evidence on academic benefits despite marketing claims
  • low oversight and qualification requirements, creating safety risks
  • international attempts to regulate tutoring often push it underground rather than reduce demand
  • Australia has no clear tutoring policy across federal, state, or non-government bodies

The authors argue tutoring is a complex, chronic policy issue that needs careful instruments, not one-size-fits-all rules. They also highlight the need for better data.

I agree. Safety matters. Efficacy matters too. But new rules might reduce supply without fixing the underlying equity gap.

And this is the question I can't shake.

If standards are applied to human tutors, shouldn't we extend the same approach to AI tutoring tools too?

Required safety checks and efficacy assessments for both would help align the sector with student wellbeing, learning impact, and fairness.

Read the paper: Tutoring as a policy problem

Singapore shows where AI tutoring is heading

Channel NewsAsia covered how AI tutors are gaining traction in Singapore as a cheaper, more accessible alternative to traditional tuition.

Households spent S$1.8 billion on tuition last year, so the economic pressure is obvious.

What sets the AI tutoring platforms apart is curriculum alignment:

  • Tutorly covers most subjects from primary to junior college
  • WizzTutor focuses on maths from Primary 5 to Secondary 5
  • Geniebook blends AI services with physical branches

Local tailoring is the differentiator: these platforms fit MOE requirements and exam standards instead of behaving like generic global chatbots.

Experts still caution that design matters:

  • AI that provides direct answers risks shortcut thinking
  • teachers and parents remain essential in guiding use
  • hybrid models blending AI with human support are seen as the most sustainable path

This is where "AI for schools" is going to get real.

If tutoring becomes partly automated, then access could widen. But the design choices will determine whether it strengthens thinking or replaces it.

Link: Read more: AI tutors in Singapore's tuition industry

Where I'm landing this week

Bagpuss is the perfect metaphor for the moment we're entering.

AI can help us create genuine value in education. Better feedback. More practice. More support. Better differentiation. More time for human teaching.

But it can also create more convincing theatre. More documents. More dashboards. More "outputs" that look good while the actual learning stays the same.

So for AI for schools, I think the question to keep asking is:

What are we actually producing?

  • deeper thinking, or faster answers
  • better relationships, or simulated substitutes
  • real learning gains, or better-looking reports
  • equity, or a new divide between those who can pay for extra support and those who can't

If we reward outcomes and build evidence, AI can lift the floor.

If we reward performance and paper trails, we'll just keep recycling the same biscuit.
