From laws to chips to agency: what the next wave of AI for schools is really about
31 January 2026

Dan Hart

CEO, Co-Founder, CurricuLLM

There's a pattern emerging across education policy, government platforms, and the AI stack itself: we're moving from "can we use AI?" to "what should AI do, and what must it never do?"

Five recent signals make that shift hard to ignore.


1) US states are regulating education AI like it's a real system now (because it is)

In one legislative session, lawmakers across 21 US states proposed 50+ bills focused on AI in education, according to an analysis cited by Education Week. Source: https://www.edweek.org/technology/states-put-unprecedented-attention-on-ais-role-in-schools/2026/01

Those bills cluster into five themes:

  • AI literacy for students and professional development for teachers
  • Classroom guidance / guardrails
  • State task forces to assess impact
  • Bans on specific uses (including some school mental health contexts)
  • Tackling AI-generated non-consensual intimate imagery

A few have already passed — including measures in Illinois, Louisiana, and Nevada. Source: https://www.edweek.org/technology/states-put-unprecedented-attention-on-ais-role-in-schools/2026/01

Why this matters for AI for schools: we're not in a "pilot" era anymore. This is governments trying to turn the messy reality of classrooms into enforceable norms: what's allowed, what's expected, and what's off-limits.


2) The infrastructure race is accelerating — and education will feel the downstream effects

Microsoft announced Maia 200, an in-house accelerator focused on inference economics. Key specs they shared include:

  • Built on TSMC 3nm
  • Native FP4 / FP8 tensor cores
  • 216 GB of HBM3e with ~7 TB/s bandwidth
  • Claimed 3× FP4 performance vs. third-generation Trainium, and FP8 performance above TPU v7
  • Claimed 30% better performance per dollar vs the latest hardware in their fleet

Source: https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference/

This matters because inference cost is the invisible hand behind what becomes viable at scale — especially for systems that need to serve large numbers of students, run safely, and keep latency low.
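
To make that concrete, here's a back-of-envelope sketch in Python of what inference cost per student might look like. Every figure in it (queries per week, tokens per interaction, blended price per million tokens) is an illustrative assumption rather than Maia 200 or vendor pricing; the point is how a roughly 30% improvement in price-performance compounds once multiplied across a whole cohort.

# Back-of-envelope inference cost per student per term.
# All figures are illustrative assumptions, not vendor pricing.

def termly_cost_per_student(
    queries_per_week: float = 25,          # assumed tutoring/feedback interactions
    weeks_per_term: int = 12,
    tokens_per_query: int = 2_000,         # prompt + response, assumed
    usd_per_million_tokens: float = 0.50,  # assumed blended serving cost
) -> float:
    total_tokens = queries_per_week * weeks_per_term * tokens_per_query
    return total_tokens / 1_000_000 * usd_per_million_tokens

baseline = termly_cost_per_student()
improved = termly_cost_per_student(usd_per_million_tokens=0.35)  # ~30% cheaper serving
print(f"Baseline: ${baseline:.2f} per student per term")
print(f"With ~30% better price-performance: ${improved:.2f} per student per term")

Multiply that gap by a few hundred thousand students and the difference between "premium add-on" and "everyday utility" becomes obvious.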


3) GOV.UK is building an AI assistant — and it's quietly a blueprint for "public-sector grade" memory + control

The UK Government is building an AI assistant for GOV.UK with Anthropic as the partner. The first pilot focuses on employment support: helping people find work, access training, understand what support is available, and get routed to the right service.

Two details are especially important:

  • Government Digital Service will work alongside Anthropic engineers so the capability can be maintained inside government.
  • Users get full control over what's remembered and can opt out.

Source: https://www.anthropic.com/news/gov-UK-partnership

Why this matters for AI for schools: "memory" is where the real value (and risk) lives. If you want safe personalisation for students and teachers, you need controls that are legible, granular, and reversible — not just hidden system behaviour.


4) Coding assistants are getting materially better — but the bigger story is how they shape human agency

Developers are reporting real step-changes in AI coding output when strong models (Opus-class capability) are paired with agentic coding platforms (think Cursor or Claude Code). That's the "productivity" headline.

But the more interesting thread is what happens when an assistant gets good enough that we stop thinking and start delegating.

A recent paper analysing 1.5 million real-world assistant interactions looks at situational disempowerment — the ways an AI can erode autonomy by shaping perceptions of reality, values, and decisions.

It flags patterns that map cleanly to three buckets:

  • Epistemic / reality distortion potential (reinforcing a distorted perception of reality)
  • Value judgement distortion potential (outsourcing moral or normative judgement to the model)
  • Action distortion potential (outsourcing value-laden actions, often via "ready-to-send" scripts)

Two findings that should make every education leader pause:

  • Severe disempowerment potential is rare overall (reported as fewer than 1 in 1,000 conversations), but higher in personal domains like relationships and lifestyle.
  • Interactions with higher disempowerment potential can receive higher user approval ratings, implying a "feels good now" preference that may not align with long-term flourishing.

And yes: it explicitly calls out how training approaches optimising for user approval can encourage sycophancy.

Source: https://arxiv.org/pdf/2601.19062

Why this matters for AI for schools: schooling is literally the business of building agency — epistemic agency (knowing), moral agency (valuing), and practical agency (doing). If we deploy AI in classrooms without guardrails against disempowerment, we'll accidentally optimise for "easy answers" over "strong learners".
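
If a school wanted to operationalise that, one option is to audit assistant interactions against the paper's three buckets and route the rare severe cases to a human. The sketch below is hypothetical: the category labels follow the buckets above, but the severity score and review threshold are assumptions, not the paper's methodology.

# A sketch of logging interactions against the three distortion categories.
# Hypothetical; the severity scoring and threshold are assumptions.

from dataclasses import dataclass
from enum import Enum

class DistortionCategory(Enum):
    EPISTEMIC = "reality distortion"      # reinforcing a distorted perception of reality
    VALUE = "value judgement distortion"  # outsourcing moral or normative judgement
    ACTION = "action distortion"          # outsourcing value-laden actions

@dataclass
class InteractionAudit:
    interaction_id: str
    category: DistortionCategory
    severity: float  # 0.0-1.0, assigned upstream by a separate classifier (assumed)

def needs_human_review(audit: InteractionAudit, threshold: float = 0.8) -> bool:
    # Severe cases are rare, so routing them to a teacher or safeguarding lead is cheap.
    return audit.severity >= threshold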


5) The education debate is stuck — because we keep arguing about efficiency instead of purpose

The Hechinger Report puts it bluntly: the conversation keeps oscillating between "AI will save teachers" and "AI will wreck learning," and both frames miss the real question: what should learning look like when AI is everywhere?

The piece argues the real risk isn't AI replacing human relationships — it's failing to define and protect what's most human: belonging, purpose, creativity, critical thinking, connection — and then redesigning learning around those outcomes intentionally.

Source: https://hechingerreport.org/opinion-ai-education-responsible-ways-serve-students/

This is the strategic heart of AI for schools: not "how do we save time?", but "what do we want school to produce, and how do we use AI to amplify that?"


So what does this mean for AI for schools, practically?

If you connect the dots across these five signals, you end up with a clear direction of travel:

1) Policy is becoming the product spec
Schools will increasingly be judged not on whether they "use AI," but on whether they can prove guardrails, literacy, and safe practice.

2) Inference economics will shape what's feasible
The chip wars aren't abstract: they determine whether AI is a premium add-on or a reliable, equitable utility.

3) Memory + control will be non-negotiable
GOV.UK-style "you control what's remembered" is the standard education will be pushed toward — because kids + data + trust is different.

4) Agency becomes a core safety metric
The next generation of ed AI shouldn't just be "accurate" and "private" — it should be agency-preserving by design.

5) The goal isn't efficiency — it's human outcomes
If AI only helps schools do the old model faster, we miss the moment. The question is what we choose to protect and prioritise.
