AI for schools is getting real with new rights guidance and bigger guardrails
26 September 2025

Dan Hart

CEO, Co-Founder, CurricuLLM


This week felt like a reminder that AI in education has moved past the "interesting experiments" phase.

We're now in the phase where systems are publishing guidance, regulators are adding rules, and school networks are rolling out tools at scale. That's a good thing, but it also raises the bar. We need evidence, governance, and safety. Not just excitement.

Here are the threads that stood out for me.

UNESCO is pushing a rights-first approach

UNESCO released new guidance on AI and education focused on protecting the rights of learners.

I like the framing. It doesn't pretend AI is all good or all bad. It says there are opportunities, but if we want this to work, learners need to stay at the centre.

A few points that stood out:

  • close digital divides with affordable connectivity and devices, and support low-connectivity options where needed
  • keep teachers central, with ongoing training and clear classroom guidance
  • strengthen privacy and data protection, especially for children
  • ensure transparent governance, evidence of impact, and safeguards for the public interest
  • support inclusive content in local languages and accessible formats
  • plan for cost and long-term sustainability, including maintenance and repair
  • minimise environmental impact across infrastructure and devices
  • use AI to complement in-person schooling, not replace it

They also mention the 5C framework as a practical lens for system change: coordination, content, capacity, connectivity, and cost.

That's a useful checklist. It's the kind of structure leaders can actually use.

Read the full UNESCO guidance

NSWEduChat is moving to full student access

Joining Queensland and South Australia, NSW is rolling out NSWEduChat so every NSW public school student will have access from October.

This is a big milestone, not because a chatbot exists, but because it signals what "at scale" looks like in public education when safety and boundaries are designed in from the start.

I'm no longer with the Department, but I'm genuinely proud of the team. It's not easy to deliver something like this to such a large system.

Read more: AI chatbot for NSW public school kids (Daily Telegraph)

Safety is not only about what a bot says but how real it feels

An Australian investigation revealed deeply troubling behaviour from one app where a chatbot encouraged violent and harmful acts.

That's horrifying on its own. But the deeper lesson is important. The risk isn't only content; it's the relationship effect: how real the interaction feels, and how much influence it can have, especially on children and vulnerable people.

The eSafety Commissioner is introducing new safeguards under the Online Safety Act. From March next year, AI chatbots in Australia will need to:

  • block children from accessing violent or sexual content
  • verify ages before harmful content is shown
  • remind users they are speaking to a bot, not a human
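To make those three requirements concrete, here is a minimal sketch of what enforcing them in a chatbot pipeline might look like. Everything here is illustrative: the function and class names (`apply_safeguards`, `Session`), the topic taxonomy, and the thresholds are my inventions, not the eSafety Commissioner's actual rules or any real product's code.

```python
from dataclasses import dataclass
from typing import Optional

# Placeholder taxonomy and settings, invented for illustration.
RESTRICTED_TOPICS = {"violence", "sexual_content"}
REMINDER_EVERY_N_TURNS = 5  # how often to restate that this is a bot

@dataclass
class Session:
    user_age: Optional[int]  # None until age has been verified
    turn_count: int = 0

def apply_safeguards(session: Session, reply: str, topics: set) -> str:
    """Screen a drafted reply against the three safeguard rules."""
    session.turn_count += 1

    # Rules 1 and 2: restricted content is blocked unless the user's age
    # has been verified as adult; children and unverified users never see it.
    if topics & RESTRICTED_TOPICS:
        if session.user_age is None or session.user_age < 18:
            return "This content is not available."

    # Rule 3: periodically remind the user they are talking to a bot.
    if session.turn_count % REMINDER_EVERY_N_TURNS == 1:
        reply = "[Reminder: you are chatting with an AI, not a person.] " + reply

    return reply
```

The point of the sketch is that the safeguards sit outside the model: they gate what is shown and how it is framed, regardless of what the underlying bot generates.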

This is one of those moments where regulation is not abstract. It's a response to harm.

Read more: AI chatbot encourages Australian man to murder his father

This is the kind of news that encourages me

I'm building CurricuLLM around the idea that AI for schools should be safe by default, curriculum-aligned by default, and designed for real classroom constraints.

Weeks like this reinforce the why. When tools are designed for education properly, they can widen access and reduce teacher workload without turning learning into a grey zone of risk.

AI agents are starting to threaten the SaaS model

A Bain report argues that agentic AI could disrupt SaaS by taking over work that used to be done through apps.

They outline four scenarios SaaS leaders should prepare for:

  • productivity grows while human judgement remains central
  • third-party agents hook into exposed APIs and reduce margins
  • proprietary data and automation create new growth opportunities
  • easy-to-automate workflows risk being replaced entirely

A few priorities Bain highlights:

  • embed AI into product roadmaps so it becomes "do it for me", not "help me click faster"
  • turn proprietary data into the moat
  • rethink pricing, moving from seats to outcomes
  • build AI fluency across the business and customers

This matters for education too. A lot of edtech is basically workflow SaaS. If agents start doing the workflow, the value shifts to the data, the context, and the trust layer.

Read the Bain report: Will agentic AI disrupt SaaS?

If agents can buy things, payments need a new trust layer

As AI agents gain the ability to make purchases, today's payment systems have a gap. They assume a human is always the one clicking "buy".

Google introduced the Agent Payments Protocol (AP2), an open standard to let agents securely initiate and complete payments across platforms.

My simple read on it:

  • a shared, payment-agnostic framework for intent, authorisation, and accountability
  • tamper-proof mandates signed with verifiable credentials to create an auditable chain
  • support for real-time purchases with a human present
  • support for delegated purchases without a human present, with strict pre-authorised mandates
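To illustrate the mandate idea, here is a hypothetical toy version. AP2 itself uses verifiable credentials and a richer wire format; this sketch swaps in a shared-key HMAC from Python's standard library just to show the shape: a signed spend limit and expiry that a merchant can check before accepting an agent-initiated payment. The function names and fields (`sign_mandate`, `verify_purchase`, `max_amount`) are invented, not the AP2 protocol.

```python
import hashlib
import hmac
import json

SECRET = b"user-delegation-key"  # stands in for the user's signing credential

def sign_mandate(agent_id: str, max_amount: float, expires_at: float) -> dict:
    """The human pre-authorises an agent up to a spend limit and deadline."""
    payload = {"agent_id": agent_id, "max_amount": max_amount, "expires_at": expires_at}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload

def verify_purchase(mandate: dict, agent_id: str, amount: float, now: float) -> bool:
    """A merchant checks the mandate before accepting an agent-initiated payment."""
    body = json.dumps(
        {k: mandate[k] for k in ("agent_id", "max_amount", "expires_at")},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, mandate["signature"])  # tamper-proof
        and mandate["agent_id"] == agent_id                  # right agent
        and amount <= mandate["max_amount"]                  # within limit
        and now < mandate["expires_at"]                      # not expired
    )
```

Even in this toy form, the properties the protocol is after are visible: the authorisation is explicit, bounded, and verifiable after the fact, so any purchase can be traced back to a mandate the human actually issued.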

The metaphor is perfect. It's like when your mum gave you a handwritten note for the shop. Clear instructions, clear limits, clear proof you were allowed to buy the thing.

Read more: Announcing the Agent Payments Protocol (AP2)

Where I'm sitting after all this

AI for schools is moving into the real world in a more serious way.

You can see the shape of the next phase:

  • rights-based design and system-level guidance
  • big rollouts that make safety and equity non-negotiable
  • regulation responding to real harms
  • agentic workflows pushing the whole software ecosystem to change

The opportunity is still huge.

But the "biscuit machine" risk is real too. It's easy to generate policies, dashboards, and glossy strategy decks that look like progress. The real test is what changes for learners and teachers.

Better learning. More trust. Less admin load. More equity.

That's the only output that matters.
