It feels like AI leaders are starting to speak more plainly about the scale of decisions ahead.
Not "AI will change everything" in the usual hype way, but more like "we are approaching a period where we may hand real autonomy to systems we don't fully understand". That's a different tone, and it matters.
This week I read a mix of pieces that all connect to the same question.
How do we keep moving fast while staying grounded, safe, and useful for everyday people?
And if you work in education, that question lands even harder, because schools are where society stress-tests new technology at scale.
2027 to 2030 is being framed as a turning point for autonomy
The Guardian profiled Anthropic's chief scientist Jared Kaplan, who described the period between 2027 and 2030 as a turning point for how much autonomy AI systems may be given.
He noted alignment progress so far has been strong, but allowing systems to train the next generation of models introduces major uncertainty.
The interview touched on the upside we all want:
- faster scientific research
- improved health outcomes
- stronger cybersecurity
- higher productivity
And it also stayed on the risks:
- control, safety, and security
- misuse and concentration of power
- the need for governments and the public to have time and information to understand what's coming
- the competitive environment and investment pressure frontier companies operate under
Anthropic's stance on regulation, safety, and transparency featured prominently, alongside the broader debate about how progress can continue while society stays informed and protected.
Read more: Jared Kaplan on letting artificial intelligence train itself
Australia's National AI Plan is a bet on existing laws, not a new AI Act
Australia has released its National AI Plan, outlining how the country will manage and advance AI using existing laws and regulatory frameworks.
It signals a shift away from earlier proposals for mandatory guardrails and a standalone AI Act. Instead, the focus is technology-neutral laws plus stronger oversight as the tech evolves.
Some key points:
- using existing laws to guide AI development and deployment
- establishing a $30m AI Safety Institute to monitor risks and advise government and industry
- increasing investment in data centres, supported by renewable energy planning
- responding to demand for AI-skilled workers
- reviewing how current rules apply to consumer protection, copyright, healthcare, and workplace relations
- identifying gaps where stronger action may be needed later
It reads as an attempt to support innovation while maintaining safety, accountability, and public trust.
Which is basically the tightrope every country is walking right now.
Read more: National artificial intelligence plan - growth with existing laws
Calm technology might be the next big design fight
Jony Ive and Sam Altman have shared the clearest view so far of IO, their new company aiming to combine hardware, software, and AI into a single integrated experience.
What stood out is that they didn't start with a product. They started with months of exploring how people relate to each other, nature, tools, and intelligence, then later asked what a new product category could look like.
Some themes they raised about the upcoming devices:
- AI-native systems designed to understand, remember, and act over long periods
- "calm technology", closer to a quiet cabin than a wall of notifications
- proactive and context-aware, deciding when to surface information or stay out of the way
- extreme simplicity in physical form, warmth, and playfulness
- joy and delight treated as core design requirements
Altman suggested people may see the first IO devices within the next couple of years.
If that happens, it's another shift schools will feel quickly, because the boundary between "device" and "assistant" gets much thinner.
Watch: Jony Ive and Sam Altman on IO
Hardware is moving, and that changes the economics underneath everything
A New Scientist piece suggests Google's TPU developments are creating real movement in the AI hardware space, with companies like Meta and Anthropic reportedly exploring large investments in Google's specialised chips.
A few points that stood out:
- GPUs have powered most of the industry so far because they handle parallel computation well
- TPUs focus heavily on matrix multiplication, which is central to training and running large models (there's a small sketch of that after this list)
- Google's seventh-generation TPU (Ironwood) is powering many internal systems
- TPUs can be extremely efficient for certain workloads, although specialisation can reduce flexibility when designs shift
- improved software support is making TPUs easier to use
- hyperscalers are building custom silicon partly to manage cost and supply pressure
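To make the matrix-multiplication point concrete, here's a minimal toy sketch of my own (not from the New Scientist piece) in JAX, the framework Google pairs with its TPUs. The idea is simply that a transformer-style projection is one big matrix multiply, and `jax.jit` hands it to XLA, which compiles it for whatever accelerator is available, whether that's a CPU, GPU, or TPU. The shapes and names are arbitrary illustrations.

```python
import jax
import jax.numpy as jnp

# Toy data: a small batch of 8 "token embeddings" and one projection layer's weights.
# The sizes here are made up purely for illustration.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 512))
w = jax.random.normal(key, (512, 2048))

@jax.jit  # XLA compiles this for the local backend: CPU, GPU, or TPU
def project(x, w):
    # The core workload TPUs are specialised for: a dense matrix multiply.
    return jnp.matmul(x, w)

print(project(x, w).shape)  # (8, 2048)
```

Training and serving a large model is, loosely, this operation repeated billions of times, which is why hardware built around it changes the cost picture so much.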
This matters because the cost and availability of compute shape what becomes normal, what becomes affordable, and what ends up embedded into everyday tools.
Read more: Why Google's custom AI chips are shaking up the tech industry
Australian classrooms are already using AI, but support and clarity still lag
Teacher Magazine published a useful piece drawing on TALIS 2024 findings about AI becoming a regular part of Australian classrooms.
It notes many teachers are using AI to:
- summarise content
- plan lessons
- adjust material to student needs
And fewer are relying on it for marking or analysing learning data.
Most teachers see benefits in personalising learning, while concerns remain around accuracy, bias, and student misuse.
For teachers not using AI, the blockers are familiar:
- lack of skills or knowledge
- uncertainty about the role AI should play
- limits set by policy and infrastructure
The article also points to practical moves forward:
- stronger focus on critical thinking
- clearer expectations for students
- more opportunities for human connection in the classroom
- using AI to help surface misconceptions and improve understanding
This is the reality layer I keep coming back to. Not grand predictions, but what's actually happening in staffrooms, classrooms, and policy documents.
Read more: AI in the classroom - evidence, teacher insights and action
Less sensationalism would be a genuine upgrade to the whole conversation
It was also refreshing to see coverage of Toby Walsh pushing back on the growing wave of AI doomer stories.
The piece frames his upcoming book as an attempt to bring the discussion back to evidence and practical concerns, especially against a media environment where extreme ideas can dominate attention.
Some themes worth keeping front and centre:
- moving from dramatic predictions to grounded analysis
- paying closer attention to current AI issues rather than distant speculation
- distinguishing real risks from amplified fear
- supporting clear, consistent communication based on evidence
This feels important because public trust is fragile. And trust is the foundation for any long-term progress, especially in education.
Read more: The dire AI warning behind the book everyone in the media is reading
The thread I'm pulling on from all of this
The common thread is that the next phase of AI won't be decided by one breakthrough.
It'll be decided by a bunch of choices happening in parallel:
- how much autonomy systems are given
- how governments regulate without freezing innovation
- what infrastructure shapes cost and access
- whether product design lands on "calm" or "chaos"
- how teachers are supported in the everyday reality of classrooms
- how the public conversation stays grounded enough to be useful
If we get those choices right, the upside is real.
If we get them wrong, we'll still get change, just not the kind we wanted.

