An AI agent that understands government so you don't have to. One conversation, every department, nothing missed.
The Agentic Legibility Stack is two products that work together: a citizen app where an AI agent acts on your behalf across every government department, and a Legibility Studio where departments make their services ready for agents to understand and deliver.
This is not a technology proposal. It is an operating model for how citizens interact with government in an age of AI agents.
Government has over 1,500 services across dozens of departments. When something significant happens in your life, it is your job to find them, understand them, and deal with each one separately.
For agents to work on behalf of citizens, two things must be true: the citizen needs a trustworthy agent, and government services need to be understandable by that agent. These are two different products for two different users, designed to work together.
For citizens and the people who help them
For departments and service teams
A single app where an AI agent handles government on your behalf. You stay in control of every decision. The agent does the work.
Describe what is happening in your life — bereavement, having a baby, losing a job — and the agent identifies every service you need across every government department. No more searching GOV.UK.
The agent builds an ordered plan showing which services come first, which depend on others, and which you can skip. You see the full picture before anything happens.
Verified credentials from your GOV.UK Wallet — driving licence, National Insurance number, passport — pre-fill forms across departments. You enter your details once, not twelve times.
Before the agent shares any information or submits anything on your behalf, you see exactly what will be sent, to whom, and why. Nothing happens without your explicit approval.
For each service, choose: let the agent handle it entirely, or step through it yourself. Parents can act for children. Executors can act for a deceased person's estate. Carers can act for those they support.
After every submission, you get a receipt: what was sent, when, to which department, and the reference number. A complete, auditable record of everything your agent did.
The plan shows which services are ready to begin, which are locked behind prerequisites, and how long each will take. The citizen delegates as much or as little as they want.
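The ordered plan described above can be computed deterministically from published dependencies. A minimal sketch using Python's standard `graphlib`; the services and prerequisites shown are illustrative, not a real service graph:

```python
from graphlib import TopologicalSorter

# Illustrative dependency map for a bereavement journey:
# each service maps to the set of services that must come first.
deps = {
    "register-death": set(),
    "apply-probate": {"register-death"},
    "bereavement-support-payment": {"register-death"},
    "settle-estate": {"apply-probate"},
}

# One valid order in which the agent could tackle the services.
plan = list(TopologicalSorter(deps).static_order())

# Given what is already done, which services are ready vs locked?
done = {"register-death"}
ready = sorted(s for s, req in deps.items() if s not in done and req <= done)
locked = sorted(s for s, req in deps.items() if s not in done and not req <= done)
```

Because the ordering comes from a plain graph algorithm, the same published dependencies always yield the same plan, whichever agent computes it.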
A tool for authoring service artefacts, publishing them for agents, and monitoring how those agents deliver your services to citizens.
Define each service with four structured artefacts: a manifest (what the service is), a policy ruleset (who is eligible), a state model (the journey steps), and a consent model (what data is needed). Any agent can then understand and deliver your service.
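The four artefacts can be sketched as plain data structures. This is an illustrative Python model only; every field name below is an assumption, not a published schema:

```python
from dataclasses import dataclass


@dataclass
class Manifest:
    """What the service is."""
    service_id: str      # e.g. "dwp/bereavement-support-payment" (illustrative id)
    title: str
    department: str


@dataclass
class PolicyRule:
    """One eligibility condition, evaluated by code, never by the model."""
    field: str           # citizen attribute the rule inspects
    op: str              # comparison operator: "eq", "lt", "gte", ...
    value: object


@dataclass
class StateStep:
    """One step in the service journey."""
    step_id: str
    requires: tuple      # step_ids that must complete first


@dataclass
class ConsentRequirement:
    """One piece of data the service needs, and why."""
    data_field: str      # e.g. "national_insurance_number"
    purpose: str


@dataclass
class ServiceArtefacts:
    manifest: Manifest
    policy: list         # list[PolicyRule]
    states: list         # list[StateStep]
    consent: list        # list[ConsentRequirement]
```

The point of the shape, rather than the names, is that each artefact is structured data an agent can parse, not prose it has to interpret.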
See at a glance which of your department's services have complete artefacts, which are partially described, and which are not yet agent-ready. Prioritise the highest-impact services first.
Every agent interaction with your services is recorded: what data was accessed, what consent was given, what was submitted, and what the outcome was. Replay any journey for complaints or oversight.
When a citizen talks to the agent, here is what happens behind the scenes. The critical insight: the AI generates conversation, but deterministic code makes every decision.
The AI agent listens and identifies the relevant life event — bereavement, starting a business, having a baby. It matches this to services across every department using a graph of over 1,500 government services.
For each relevant service, the agent reads the structured artefacts published by that department: what the service does, who is eligible, what data is needed, and what consent is required. The agent does not guess — it reads what the department has published.
Eligibility is not decided by the AI. Policy rules published by the department are evaluated by deterministic code. If you meet the criteria, you are eligible. If not, you are not. The AI cannot override this.
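A minimal sketch of that deterministic evaluation. The rule format, fields, and thresholds are illustrative, not real policy:

```python
# Allowed comparison operators; anything else is rejected, never improvised.
OPERATORS = {
    "eq": lambda a, b: a == b,
    "lt": lambda a, b: a < b,
    "gte": lambda a, b: a >= b,
}


def is_eligible(rules, facts):
    """Evaluate department-published rules against citizen facts.

    Pure data in, boolean out: no model output is consulted, so the
    same facts always produce the same answer.
    """
    for rule in rules:
        if rule["field"] not in facts:
            return False  # a missing fact is never guessed at
        if not OPERATORS[rule["op"]](facts[rule["field"]], rule["value"]):
            return False
    return True


# Illustrative ruleset (not real eligibility criteria).
rules = [
    {"field": "age", "op": "lt", "value": 66},
    {"field": "partner_paid_ni", "op": "eq", "value": True},
]
```

The AI's only job is to gather the facts; the yes/no comes from this kind of rule evaluation, which it cannot override.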
Before any data is shared or any action is taken, the citizen sees exactly what will happen and gives explicit consent. Consent is enforced by code, not by the AI: without it, the agent cannot proceed.
The agent submits the citizen's data to the department's service through a secure, audited gateway. Every field is traced: where it came from, whether it was verified or citizen-provided, and when consent was given.
The citizen receives a receipt confirming what was submitted, to which department, when, and a reference number. The department receives an auditable record of the interaction. Both sides have proof.
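A receipt like this can be derived mechanically from the submission itself. A sketch under assumed field names; here the reference is a content hash, whereas a real department would issue its own reference number:

```python
import hashlib
import json
from datetime import datetime, timezone


def build_receipt(service_id, department, fields):
    """fields maps name -> {"value": ..., "source": "wallet-verified" | "citizen-provided"}.

    The receipt records per-field provenance but not the values themselves.
    """
    body = {
        "service": service_id,
        "department": department,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        "provenance": {name: f["source"] for name, f in fields.items()},
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["reference"] = digest[:12].upper()
    return body
```

Because every field carries its source, a later dispute can distinguish what the wallet verified from what the citizen typed in.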
The most common concern about AI in government: what if it gets things wrong? The answer is an architecture where the AI is never the one making decisions.
The AI turns "my husband died" into an understanding of a bereavement life event. But whether the citizen is eligible for Bereavement Support Payment, whether their data can be shared with DWP, whether probate requires registration first — all of this is evaluated by deterministic code, using rules published by departments.
Consent is a hard technical gate, not a checkbox the AI can skip. The code that enforces consent sits outside the AI entirely. No prompt, no edge case, no system error can cause data to be shared without the citizen's explicit approval.
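One way to make consent a structural gate rather than a behaviour: put the check in the submission gateway itself, so nothing the model emits can reach a department without a recorded grant. Class and method names in this sketch are hypothetical:

```python
class ConsentRequired(Exception):
    """Raised when a submission includes a field without a recorded grant."""


class SubmissionGateway:
    """The consent check lives here, outside the AI entirely."""

    def __init__(self):
        self._grants = set()  # (citizen_id, service_id, field)

    def record_consent(self, citizen_id, service_id, fields):
        """Called only after the citizen explicitly approves a sharing screen."""
        for f in fields:
            self._grants.add((citizen_id, service_id, f))

    def submit(self, citizen_id, service_id, payload):
        # Every field in the payload must have a matching grant, or the
        # whole submission is refused before any data leaves the system.
        for f in payload:
            if (citizen_id, service_id, f) not in self._grants:
                raise ConsentRequired(f"no consent to share {f!r}")
        return {"status": "submitted", "fields": sorted(payload)}
```

The AI can ask the gateway to submit, but it has no code path that skips the grant check, which is what "enforced by code, not by the AI" means in practice.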
Every step the agent takes is recorded in an immutable evidence store: what it did, what data it accessed, what consent was given, what was submitted, and what the outcome was. If something goes wrong, you can replay the entire journey step by step.
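An immutable, replayable record can be approximated with a hash chain: each entry commits to the one before it, so any edit breaks verification. A minimal sketch; a production store would add signing and durable storage:

```python
import hashlib
import json


class EvidenceStore:
    """Append-only log where each entry hashes the previous one."""

    def __init__(self):
        self.entries = []

    def append(self, event):
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        record = {"event": event, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute the chain; any tampered entry makes this False."""
        prev = "GENESIS"
        for rec in self.entries:
            expected = hashlib.sha256(
                json.dumps({"event": rec["event"], "prev": rec["prev"]},
                           sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

    def replay(self):
        """Return the recorded events in order, for step-by-step review."""
        return [rec["event"] for rec in self.entries]
```

Replaying a journey for a complaint is then just reading the chain back, with cryptographic assurance that no step was altered after the fact.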
The citizen app is one agent. But the architecture is designed for a world where it is not the only one.
Today, the citizen app is the trusted government agent that acts on your behalf. But the service artefacts that departments publish — the manifests, policies, state models, and consent requirements — are not locked to a single app. They are structured, open descriptions of government services that any authorised agent could read.
That means the future could look like this:
The default, built and maintained by government. Designed for trust, accessibility, and universal access. Always available, always free.
An AI assistant you already use — ChatGPT, a future personal agent, or an accessibility-focused tool — could be authorised to interact with government services on your behalf, through the same artefacts.
A bereavement charity, a legal aid provider, or a carer's organisation could build agents tailored to the people they serve — agents that know the right questions to ask and the right tone to use.
This is the same model that made the web work: government publishes information in a standard format, and any browser can render it. Here, government publishes service logic in a standard format, and any agent can deliver it. The citizen's choice of agent becomes as natural as their choice of browser.
Government services that understand you,
not the other way around.
The Agentic Legibility Stack is an operating model for the next generation of government services. Two products — one for citizens, one for departments — that together create a world where AI agents can act on behalf of citizens, safely, accountably, and with the citizen always in control.
Agentic Legibility Stack · March 2026