A moment of doubt
You're reviewing yet another consultancy's analysis of AI transformation... The deck claims "87% of enterprises report successful AI deployment within 12 months" and cites three academic papers to support it.
The statistic feels a bit too neat, so you try to verify the citations. The first paper doesn't exist. Another covers an entirely different topic. The entire report reads smoothly and sounds authoritative, yet it's built on fabricated data. Perhaps an exaggeration for dramatic effect, but 'based on real events'.
We've all experienced hallucinations and dubious citations, and they often lead to embarrassing headlines and costly mistakes. A piece from Mashable earlier this year highlighted more than 120 court cases in which AI hallucinations were caught, with over 20 legal professionals getting their knuckles rapped.
These hallucinations arrive confidently worded, properly formatted, and authoritative in tone. Distinguishing genuine expertise from algorithmic content generation is becoming increasingly difficult, and that is eroding trust.
Why this happens, and the impact it has
AI models predict probable answers from training patterns. They don't understand truth and they don’t ‘look up’ things like a database or search engine would. When they lack information, they tend to fill gaps with statistically plausible text, and almost never admit uncertainty. No one likes to be wrong, and AI always assumes it’s right until you tell it otherwise.
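To make 'predicting probable answers' concrete, here's a deliberately toy sketch. The probabilities are invented for illustration (a real model scores its entire vocabulary with a neural network), but the mechanics are the point: decoding picks a statistically likely continuation, and nothing in the loop ever asks whether it's true.

```python
import random

# Toy, invented probabilities for illustration only; a real model
# computes these over tens of thousands of tokens with a neural network.
next_answer_probs = {
    "87% of enterprises report successful AI deployment": 0.41,
    "adoption varies widely and the data is limited": 0.33,
    "I don't have a reliable figure for that": 0.26,
}

def sample_continuation(probs: dict[str, float]) -> str:
    """Pick a continuation weighted by probability, as decoding does.
    Note there is no 'is this true?' check anywhere in the process."""
    choices, weights = zip(*probs.items())
    return random.choices(choices, weights=weights, k=1)[0]

print(sample_continuation(next_answer_probs))
# The confident-sounding statistic is the likeliest single output,
# even though nothing here checked it against any source.
```

Retrieval and web-search tools bolt a lookup step onto this process, but the generator underneath still works the same way.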
Before we properly understood what generative AI was actually doing, many of us accepted its outputs without scrutiny. Every time we accept an answer, we signal that it was a good one, feedback that can shape future models. And every time we let an unchecked error out into the world, we pollute the training data those models learn from.
This fuels the cycle and compounds the errors. Truths get lost, while inaccuracies get regurgitated, rise to the surface, and become citations themselves.
The MIT study When AI Gets It Wrong highlights several reasons for inaccuracies and biases, but concludes, in short, that the issues 'result from the nature of their training data'.
In defence of the LLMs, they’re only doing what’s been asked of them. Ask for three citations, and they'll generate three. Request a specific statistic, and they'll provide one. The models prioritise completing tasks to the user’s satisfaction over accuracy, creating outputs that read professionally but could be fundamentally unreliable.
Credibility crisis
In October 2025, Deloitte Australia was forced to refund $290,000 (Fortune) after a government report was found to contain fabricated academic references, nonexistent footnotes, and a false quote from a Federal Court judge, all generated by AI. The firm only disclosed its use of AI after being caught.
This was a process failure at a major consultancy. Someone generated content, skipped verification, and submitted it to a government client whose decisions affect millions of people. The report went live in July (Business Standard), yet no one caught it until an external researcher flagged the hallucinations.
Consultancies are quietly using AI to manufacture expertise at scale, presenting it as human insight. They're not admitting this because they know it undermines their value proposition. Every piece of synthetic thought leadership pretending to be human expertise erodes trust, not only for that specific firm, but for the entire advisory world.
This matters if you're making major transformation decisions, because you need partners who've actually done the work. Who understand the nuance. Who can spot the gap between theory and implementation, and who'll tell you the truth.
Most consultancies are almost certainly using AI. The question is: are they being honest about it, and are they verifying what they deliver?
Why transparency isn't optional
The consulting industry is developing a playbook for AI: generate outputs faster, but charge the same fees for them.
This approach has a shelf life, with enterprise leaders already noticing. When Horses for Sources surveyed 505 enterprise leaders in 2024, 44% cited lack of transparency in AI-driven decisions as their top concern. Another 32% identified risk of inaccurate outputs as a primary worry.
Enterprise leaders aren't rejecting AI; they're becoming savvier to the fact that they're paying premium rates and perhaps not always getting the value they should.
What transparency should look like
Transparency with AI comes in many forms, though when it comes to thought leadership, it means demonstrating how you use technology to enhance your expertise rather than simply replace or accelerate it. It's not only about being honest; it also flags to readers what they might choose to check or validate for themselves.
Every piece of Valliance content starts with human curiosity anchored around our core themes. We use AI to help broaden our research, dig deeper into topics, push us in new directions of thought, or challenge our thinking in ways that make the work better.
It means being specific about what AI did and what we humans did, with clear attribution.
"Research accelerated using Claude and Perplexity to analyse 47 enterprise AI implementations" tells readers we used AI to process volume, but humans framed the questions and interpreted the findings.
"Structure refined with AI support to improve readability whilst preserving author perspective" shows AI helped with mechanics, not thinking.
"Market analysis enhanced through Claude's synthesis of 23 regulatory frameworks across EMEA" demonstrates AI helped aggregate information that humans then validated and contextualised.
We read, understand, reshape outputs when needed, and maintain accountability for every word. In each of the cases above, the use of AI made the work more valuable than it would otherwise have been. The scope of inquiry those tools afford, for example, simply wouldn't have been possible before without investing thousands more billable hours.
The human-AI partnership
When we use AI for research, we verify sources, then check and check again. Human critical thinking is key to ensuring we're not putting out slop or regurgitating things we don't understand (or that are factually inaccurate).
As part of this article, I asked Claude to gather research and stats on the credibility crisis. The numbers and links came back; some felt dubious at first, and some needed proper verification, following two, three or four links back to the source of truth. Sometimes this takes more legwork than the instant answers we've become used to, but it's worth it. Our new AI tools help us work faster, but they can't replace a little human judgement and diligence.
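As a flavour of what that legwork can look like, here's a minimal sketch of the first, cheapest check: does a cited link even resolve? The URLs are hypothetical and it uses only the Python standard library; treat it as an illustration, not our tooling.

```python
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical citation URLs for illustration; in practice these come
# from whatever the AI produced during research.
citations = [
    "https://example.com/real-paper",
    "https://example.com/paper-that-does-not-exist",
]

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """First-pass check only: a successful response means the page exists,
    not that it actually supports the claim it was cited for."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (HTTPError, URLError):
        return False

for url in citations:
    verdict = "resolves" if link_resolves(url) else "BROKEN or missing"
    print(f"{url}: {verdict}")
```

Even a check this shallow flags a fabricated reference in seconds; the deeper verification, reading the source and confirming it says what the citation claims, is the part we keep human.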
The future workforce needs people who understand how to use AI effectively. Demonstrating exactly how this is done is surely going to become an asset to individuals and consultancies.
What this means in practice
Our guidance for using AI in thought leadership:
Never simply copy and paste. Every AI output gets human interpretation and refinement.
Always read and understand. We own the ideas, regardless of how they were researched.
Use for enhancement, not replacement. Grammar, structure, and research support. Not thinking.
Challenge the output. Question AI suggestions against our expertise, experience, and client reality. AI isn't always right, and neither are we.
Cite and verify. Especially when AI accelerates research into new domains.
Go to trusted academic, peer-reviewed sources. Writing this piece alone led me down a few rabbit holes of misinformation.
Be specific about tools. "Used Perplexity for market analysis" tells readers more than "AI-assisted." We've started doing this within our thought leadership; see the top of this page.
The result
Readers get authentic intellectual partnership. As a team, we're able to build credibility and strengthen trust, which in turn leads to more meaningful opinions. We demonstrate what AI-native means: humans leading, technology supporting, transparently.
Whilst others quietly use AI and pretend humans wrote everything, transparency becomes an advantage for Valliance. It's our response to the credibility crisis.