
Do public bodies have a duty to ensure LLM answers are correct?


The public is getting answers about your organisation from AI. If those answers are wrong, whose job is it to fix them?

If information on your website is incorrect, someone in your organisation fixes it. But when an AI chatbot confidently tells the public something wrong about your services, who fixes that? Right now, the answer is: nobody. And that’s a problem, because a growing body of research shows that AI tools are already getting it wrong on public services, and millions of people are relying on them for answers. 
 

Gartner predicts that traditional search engine volume will drop by 25% by 2026 as users shift to AI assistants. Research from Similarweb found that 69% of Google searches now end without a click to any website. ChatGPT alone receives over five billion monthly visits, making it the fourth most-visited website globally. When someone asks an AI tool about your organisation, the answer they receive may not even be based on information you have published. 
 

This raises a question that most public bodies haven’t yet grappled with: if the information an AI gives about your organisation is wrong, whose responsibility is it to fix it? 

The ‘it’s no one’s job’ problem

If information on your website is wrong, there’s a clear line of responsibility. Someone in your web team owns the page, responds, and fixes it. Content governance systems spring into action. The correction is live within hours, or at most a few days. 
 

If the same information is wrong in an AI’s response, there’s a collective shrug. The web team didn’t write it. The AI company didn’t author it in the traditional sense. The answer was generated from a patchwork of sources, some current, some outdated, some misinterpreted. So the inaccurate answer just sits there, being confidently delivered to anyone who asks. 
 

Research from the Open Data Institute, published in February 2026, found that popular LLMs are unable to provide reliable information about key public services such as health, taxes, and benefits. Drawing on more than 22,000 prompts, the ODI found that chatbots rarely admitted when they didn’t know the answer and eagerly attempted to respond to every query even when their responses were incomplete or wrong.  
 

When millions of people are getting answers about government services, university courses, and charitable support from AI tools, it’s not sustainable to simply shrug and say it’s no one’s responsibility.  
 

Of course, the model providers should be doing more to prevent incorrect or over-confident answers, but the brutal reality is that users are turning to these tools in droves. Ignoring this shift and refusing to engage with AI models would be burying your head in the sand. Whether it’s right or wrong, the simple fact is that public bodies must rise to this challenge to ensure the public gets accurate information.  

The responsibility of public bodies

We believe public bodies have a responsibility to provide clear, structured, and accurate information in response to the key questions the public asks about their organisation, in a way that allows AI tools to deliver accurate answers. 
 

This isn’t about controlling what AI says. It’s about making sure the raw material it draws on is as good as it can possibly be. The model companies have their own obligation to correctly identify and prioritise the answers of institutions that are the genuine owners of the truth. But they are private, for-profit companies, and we cannot rely on their goodwill or technical capabilities alone to solve this problem. 
 

It never worked with SEO to sit back and say, ‘we’re the official regulator, we don’t need to think about our search rankings.’ That passivity meant many authoritative organisations were outranked by less reliable sources for years. As Harvard Business Review recently argued, organisations must now shift from optimising pages for clicks to engineering recall inside AI systems. Generative Engine Optimisation will follow the same pattern as SEO: if you don’t proactively ensure your content is feeding accurate information to models, someone else’s less accurate content will fill the gap. 

Keeping information up to date

Traditional search pointed people directly to your website. When you changed something, users saw it immediately. AI search introduces a lag. The AI tool may have ingested your content weeks or months ago, and the answer it gives today might be based on information that has since changed.  
 

Take the introduction of voter ID requirements. The Elections Act 2022 meant that, for the first time, voters in Great Britain needed to present photographic identification at polling stations. If an AI chatbot’s training data predates this change, it could confidently tell someone they don’t need ID to vote. Getting this wrong could cause real harm. 
 

The same risk applies to university course fees, benefit eligibility criteria, planning regulations, and countless other areas where institutions publish time-sensitive information.  
 

So what can you do about it? First, make sure you’re not making the problem worse. If your website is blocking AI crawlers, as Cloudflare now does by default for new domains, you are actively preventing models from picking up your latest updates. Second, use clear timestamps and structured data on pages that change frequently, so crawlers can identify what’s current. Third, and most importantly, monitor the answers. After any significant policy or information change, test what the major AI tools are saying. If they’re still serving the old answer, you know you have a problem to address, whether that’s through your technical markup, your crawler access settings, or the depth and clarity of the updated content itself. 
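On that first point, crawler access is usually controlled in two places: your robots.txt and any CDN or bot-management layer sitting in front of it (a robots.txt allow cannot override a Cloudflare-level block, so check both). As a minimal sketch, a robots.txt that explicitly allows the main documented AI crawlers might look like this; the user-agent tokens shown are the ones the providers publish at the time of writing, so verify them against each provider’s current documentation before deploying:

```
# robots.txt sketch: explicitly allow documented AI crawlers.
# Tokens are published by each provider; verify before deploying.

User-agent: GPTBot          # OpenAI model training/retrieval
Allow: /

User-agent: OAI-SearchBot   # OpenAI search features in ChatGPT
Allow: /

User-agent: ClaudeBot       # Anthropic
Allow: /

User-agent: PerplexityBot   # Perplexity
Allow: /

User-agent: Google-Extended # Google's AI training control token
Allow: /

User-agent: *
Allow: /
```

Whether you allow these crawlers is a policy decision for your organisation; the point is to make that decision deliberately rather than inherit a default.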

What public bodies can do

Start by understanding the scale of the issue. Ask yourself: when someone puts a question about our services to an LLM, does it come back with an accurate answer? Search for your organisation and its key services in ChatGPT, Perplexity, and Google’s AI Mode. Are the answers accurate? Are they up to date? You may be surprised by what you find. 
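That spot-check can be made repeatable. The sketch below uses the OpenAI Python SDK to re-ask a fixed list of public questions and flag any answer that no longer contains a phrase you expect; the questions, expected phrases, and model name are illustrative placeholders, and the same pattern works against any provider’s API.

```python
# monitor_answers.py: a sketch that re-asks key public questions
# and flags answers that no longer contain the facts we expect.
# Questions, expected phrases, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each entry pairs a question the public actually asks with a phrase
# the current, correct answer should contain (e.g. after a policy change).
CHECKS = [
    ("Do I need photo ID to vote in person in Great Britain?", "photo ID"),
    ("When do applications for [your service] close?", "31 March"),
]

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whichever model you want to monitor
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content or ""

for question, expected in CHECKS:
    answer = ask(question)
    status = "OK" if expected.lower() in answer.lower() else "REVIEW"
    print(f"[{status}] {question}")
    print(f"    {answer[:200]}")
```

Substring matching is deliberately crude: a real monitor would route flagged answers to a human for review, and ideally check several AI tools rather than one.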
 

From there, focus on making your content as easy as possible for AI tools to find, understand, and accurately represent. Structure your content around the questions people actually ask, not internal terminology. Invest in structured data markup like JSON-LD so AI tools can correctly interpret your content; a sketch follows below. Keep your content current, because outdated pages don’t just confuse human visitors; they pollute the models that AI tools draw from. And ensure AI crawlers can access your site, because if they can’t read your latest updates, they can’t deliver accurate answers. For a detailed guide, see our article on how to get your content found by AI search engines. 
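To make the structured data point concrete, here is a sketch of JSON-LD markup for a page answering a common public question, using schema.org’s FAQPage type. The dateModified property also covers the timestamp point from the previous section. The question, answer, and date are illustrative, so validate your own markup with a structured data testing tool before shipping.

```html
<!-- JSON-LD sketch: a public-facing answer marked up as a schema.org
     FAQPage. Question, answer, and date are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "dateModified": "2025-01-15",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do I need photo ID to vote in person?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. Since the Elections Act 2022, voters in Great Britain must show accepted photographic ID at the polling station."
    }
  }]
}
</script>
```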
 

Public bodies have always had a duty to make accurate information accessible to the people they serve. That duty hasn’t changed. What has changed is the channel through which people receive that information. Your website is no longer the only destination. It is increasingly the source material from which AI tools construct answers. The organisations that recognise this early will be the ones whose information reaches the public accurately, wherever and however they choose to search for it. 
 

If you’d like to understand how AI tools are currently representing your organisation and develop a strategy to ensure your content reaches your audience accurately, get in touch.
