How AI is making its way into Government
Exploring AI's role in government: from sanctioned projects to unapproved staff use, and its creeping integration into everyday tools.

- David Gérouville-Farrell
- 4 min read

I’ve been thinking recently about how to frame the ways that (generative) AI is going to make its way into enterprise and the public sector. A recent Think Tank podcast interview with Professor Alan Brown touched on this.
Alan spent nearly two decades in the USA driving large-scale software programmes and leading R&D teams. After roles at Carnegie Mellon University and as an IBM Distinguished Engineer, he’s now focused on digital transformation. In addition to his research at Exeter, he’s recently published “Surviving and Thriving in the Age of AI” which aims to help digital leaders navigate the challenges and opportunities of AI.
One thing Alan mentioned stuck with me: the discussion around digital transformation often centres on trade-offs between Risk, Trust, and Value. While this framing is crucial to how governments (should) approach new technologies, it captures only part of the picture. Conversations about AI in government often miss the messier realities of how AI is actually making its way into the public sector. Formal, large-scale projects—complete with governance and guiding principles—draw all the attention, while quieter, more pervasive uses often slip under the radar. Before you realise it, you’re already swimming in AI.
I thought Alan’s framing of the reality of how AI is making its way into Government rang quite true.
Large-Scale Formal Projects
The first involves large-scale projects requiring significant resources—think facial recognition at borders or fraud detection across tax returns. These could be small six-month projects or large investments spanning three to four years, with formal requirements and extensive governance, rollout plans, change champions, and so on.
This is the ‘devil we know’ - yes, things are perhaps a wee bit different in the context of generative AI, but we have a lot of knowledge about managing large-scale IT projects, much of which transfers to this new context. It’s new, but it’s not new new.
So in some ways, “how will AI be used in government” can be answered along the lines of the same way any other new technology is used - with a user-centric, service-design-oriented methodology to provide high-quality, citizen-focused products and services. Duh.
Informal Use and Shadow AI
The second way is through informal use, where staff might run queries through ChatGPT (or one of the myriad alternatives) to get outputs, across a whole variety of contexts and use cases and entirely ‘off book’. This raises important questions, particularly around data governance and accuracy. Hallucinations (where the system produces inaccurate information) might not happen that often, but the consequences can be serious when they do.
Government is by default risk averse (rightfully so), but this downside should be weighed against what currently happens.
Are we confident that staff are always reading complex case notes in detail? What is worse, a person not engaging with the material, or the material possibly containing a hallucination? I don’t think the answers are easy here.
The default stance of risk-averse organisations may be to discourage (or ban) such use of AI. However, informal use happens regardless of oversight, creating pressure to focus on risk mitigation rather than outright prohibition. I’m reminded of desire paths, where people ignore the designed paved route in favour of cutting across the grass.
Embedded AI in Commercial Products
The third way is perhaps the most subtle: AI integration into commercial products. From email autocomplete to meeting summaries, to the “summarise this email” button that has appeared in my Gmail, AI is becoming embedded in our everyday tools (whether or not we asked for it). This integration makes it increasingly difficult to control or implement policies against AI use.
I think I think
Frameworks often propel us into action—they help us focus, prioritise, or tackle challenges from a specific angle. But sometimes their value lies not in driving action but in helping us see and name what’s already happening around us.
This framework, with its three simple categories, provides a way to acknowledge the current reality of AI adoption in government and the public sector. By naming and categorising these patterns, we can move past the illusion of total control and instead focus on understanding what’s actually unfolding. The first step to making effective decisions is recognising and living in this reality.
- Tags:
- AI
- Government
- Podcast