Ones to watch

Where designed governance is unlocking the next wave of AI capability

Designed autonomy

When governance becomes how capability is built

As AI systems begin to act with greater independence, the question is no longer whether autonomy is desirable, but whether it has been designed with intent. Emerging agent frameworks — such as Helm, which replaces ad‑hoc tool use with typed, permissioned actions — are showing what happens when governance moves upstream, embedded directly into how systems operate rather than applied after the fact.

Explicit permissions, constrained execution and auditability do not reduce ambition. They make it possible. Autonomy still scales, but within boundaries that leaders can explain, defend and trust.

This is governance not as constraint, but as the condition that allows autonomy to exist responsibly.
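To make the idea of typed, permissioned actions concrete, here is a minimal illustrative sketch in Python. All names are hypothetical; this is not Helm's actual API, only a sketch of the pattern the article describes: every action an agent can take is declared up front, checked against explicit permissions, and recorded in an audit trail.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A typed, named action an agent may request (hypothetical example)."""
    name: str
    handler: Callable[..., str]
    required_permission: str

@dataclass
class AgentRuntime:
    """Executes actions only when permitted, and records every attempt."""
    permissions: set[str]
    actions: dict[str, Action] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)

    def register(self, action: Action) -> None:
        self.actions[action.name] = action

    def execute(self, name: str, **kwargs) -> str:
        action = self.actions.get(name)
        if action is None:
            self.audit_log.append(f"DENIED unknown action: {name}")
            raise PermissionError(f"Unknown action: {name}")
        if action.required_permission not in self.permissions:
            self.audit_log.append(
                f"DENIED {name}: missing {action.required_permission}")
            raise PermissionError(f"Not permitted: {name}")
        self.audit_log.append(f"ALLOWED {name} with {kwargs}")
        return action.handler(**kwargs)

# Grant read access only; writes are refused and the refusal is auditable.
runtime = AgentRuntime(permissions={"files:read"})
runtime.register(Action("read_file", lambda path: f"contents of {path}", "files:read"))
runtime.register(Action("delete_file", lambda path: f"deleted {path}", "files:write"))

print(runtime.execute("read_file", path="report.txt"))  # allowed and logged
try:
    runtime.execute("delete_file", path="report.txt")
except PermissionError as err:
    print(err)  # denied, and the denial is recorded in the audit trail
```

The point of the pattern is that the boundary lives in the runtime, not in the prompt: the agent can still be ambitious within `files:read`, but the denial of `delete_file` is explainable and defensible after the fact.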

Capability that lives where trust already exists

AI that acts within context, not abstraction

We are seeing early signals of AI capability moving closer to the environments it serves. Developments like on‑device function calling in Google’s AI Edge Gallery, powered by compact models such as FunctionGemma, allow intent to translate directly into action — entirely offline and within local system boundaries.

This is a subtle shift, but an important one. Capability is no longer centralised by default. It is embedded where responsibility already sits — within a device, a role, a moment of work. Data does not travel further than it needs to. Behaviour is constrained by context rather than policy alone.

Trust, in these systems, is not negotiated after deployment. It is designed into the boundary.
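A hedged sketch of what on-device function calling can look like in principle. This is not the AI Edge Gallery or FunctionGemma API; the model output is hard-coded to stand in for a compact local model's response, and the function names are invented for illustration. The key property is that intent is parsed and executed entirely within the local boundary.

```python
import json

# Hypothetical registry of functions available on this device.
# Nothing here makes a network call; behaviour stays inside the boundary.
LOCAL_FUNCTIONS = {
    "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
    "toggle_wifi": lambda enabled: f"Wi-Fi {'on' if enabled else 'off'}",
}

def dispatch(model_output: str) -> str:
    """Parse the model's structured function call and run it locally."""
    call = json.loads(model_output)
    fn = LOCAL_FUNCTIONS.get(call["name"])
    if fn is None:
        # Anything outside the declared boundary is refused, not improvised.
        return f"Refused: '{call['name']}' is outside the device boundary"
    return fn(**call["arguments"])

# Stand-in for what a compact on-device model might emit for
# the spoken intent "set a five minute timer".
output = '{"name": "set_timer", "arguments": {"minutes": 5}}'
print(dispatch(output))  # Timer set for 5 minutes
```

Because the dispatcher only knows the functions registered on the device, context constrains behaviour directly: there is no remote endpoint for data to travel to, and an out-of-boundary request simply fails.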

Reliability as an outcome in its own right

When performance stops being enough

As AI moves from experimentation into essential operations, a different capability comes into focus: reliability. Emerging infrastructure approaches — such as DualPath inference architectures, which redesign how models handle memory and throughput — treat consistency under load as something to be engineered, not assumed.

This work rarely attracts attention. But without it, none of the more visible gains endure. Extraordinary outcomes do not come from systems that impress in isolation. They come from systems that behave predictably when they are depended upon.

Reliability is not a technical detail. It is what allows capability to last.
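The specifics of DualPath-style architectures are not public in this article, so the sketch below shows only a general reliability pattern the paragraph gestures at: capping concurrent work so latency stays predictable under load, and shedding excess requests explicitly rather than letting every caller degrade. All names are illustrative.

```python
import threading
import time

class BoundedService:
    """Illustrative pattern only (not the DualPath design): a fixed
    number of concurrent slots keeps behaviour predictable under load."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)

    def handle(self, request: str) -> str:
        # Fail fast and visibly when the system is saturated,
        # instead of queueing indefinitely and degrading for everyone.
        if not self._slots.acquire(blocking=False):
            return f"SHED {request}"
        try:
            time.sleep(0.01)  # stand-in for model inference work
            return f"OK {request}"
        finally:
            self._slots.release()

svc = BoundedService(max_concurrent=1)
print(svc.handle("r1"))  # OK r1
# Simulate a request already in flight by holding the only slot:
svc._slots.acquire()
print(svc.handle("r2"))  # SHED r2
svc._slots.release()
```

Engineering consistency under load means deciding in advance what saturation looks like; the shed response is part of the design, not an accident.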

The next phase of AI progress belongs to organisations that treat governance not as constraint, but as the design discipline that makes ambition possible.

Tech Radar

Across this edition, we see AI progress consolidating around fewer, clearer directions

The shift is not toward more intelligence, but toward more deliberate design: embedding governance, trust and reliability into the capabilities organisations choose to scale.

As experimentation gives way to intent, the differentiator is no longer what AI can do, but how purposefully it is built to behave.

“AI capability only has value when it’s built on governance that leaders can stand over. The organisations moving fastest are the ones designing transparency, control and reliability into their systems from the start. When you scale governance, you scale trust, and that’s what turns ambition into outcomes.”

Research by: Rosemary J Thomas, PhD

Senior Technical Researcher, Version 1
