I’m pretty new to AI.
Aren’t we all? Aside from the progenitors and the real pioneers, the majority of us are still only just getting started. And we’re not even a majority; we’re the informed few.
A recent UK government survey conducted in early 2024 suggested that while 73% of the public have used AI in their day-to-day lives, only 36% have used it in their workplace, and 17% could explain what AI is in detail. Globally, the gap is far more stark. Adoption is rising fast, both here in the UK and globally - but it’s all still very new.
Global adoption of generative AI tools reached 16.3 percent of the world’s population, up from 15.1 percent in the first half of 2025, a meaningful gain for technologies still in their early years - AI Diffusion Report 2025, Microsoft
Last year, as my LinkedIn feed exploded with AI-related articles and opinions, I was definitely a sceptic. I felt like an outside observer, looking in at a world filled with hype and excitement about the possibilities, but with little concrete to show for it. I’ve been working in software engineering for 15 years, across retail commerce and fintech, so much of what filled my feed was about the coding side: stories about vibe coders spinning up SaaS clones in hours or days, and pushing to monetise their new creations quickly. Usually a few days later, a follow-up appeared with screenshots of Twitter threads in which the creator asked “how did this person get access to use my app for free??” and “what’s a SQL injection?”. The constant blunt-force AI-ifying of products I use every day did nothing to engender my goodwill towards the technology either - no, a little sparkle icon doesn’t make me want to use your product more.
I’ll indulge in a bit of navel gazing for a moment. I can definitely be a pessimist. Maybe it’s natural; maybe it’s years of hedging my bets in estimation sessions trying to factor in that critical requirement which no-one has mentioned yet but you can sure as hell bet is going to pop up out of nowhere on the second-to-last sprint demo. For me to buy into the dream, the excitement, the hype, I need to see it. But once I’m there, I’ll evangelise ‘til the cows come home.
So it was with a fair bit of surprise and interest that I spotted Valliance on my LinkedIn feed - and, in particular, the involvement of a couple of very clever former colleagues who I greatly respect. Speaking with them, exploring the topic more widely, and experimenting with various LLMs brought me to several conclusions.
LLMs (1) are incredible. The ability to gather and synthesise information, feed it back to the user, and then perform actions like building a slide deck or writing a database schema, is awesome.
LLMs are stupid. You know the issues - hallucinations, assumptions, lost context, accidentally deleting a Production table which it had been given write access to.
LLMs are terrifying. See also: Project Glasswing (2). The potential for malicious use of these technologies is real.
It was a bit like staring into the Ark of the Covenant. My brain melted a bit. I had some long, introspective walks, mentally wrestling with this new future. What it means for me, my family, society, the world. Suddenly, I felt I understood some of the hype: this really is a world-changing technology. But what world will it create?
In the face of all of this, it felt instinctive, tempting, almost natural, to recoil. To not want anything to do with AI; I could be “one of the good guys”, doing things “the old fashioned way”. If I closed my eyes and stuck my fingers in my ears, that terrible future might not happen. Or it might not affect me. Or I might be able to run away from it, flee to a distant mountain top or remote tundra and carve out a life free from the malign influence of this terrifying creation of humanity.
Or…or I could try and help.
“Even the smallest person can change the course of the future” - Galadriel, The Lord of the Rings: The Fellowship of the Ring (film(3))
That reads a bit hero complex-y. It’s not meant to be. I don’t have any illusions that I as an individual can play more than a very, very small role in shaping the future of humanity’s coexistence with AI. But I fundamentally believe that, small as my impact is, it can make a difference. We all can.
Here’s the thing about running away from the problem. If the people who see the dangers coming just run, who’s left? The over-optimists. The apologists. The “just ship it” crowd. The founders who think “SQL injection” is something to get before your next tropical holiday. It feels principled to walk away, but it’s actually the thing that guarantees the worst possible outcome - because it hands the steering wheel to whoever is least worried about where the car ends up.
Some academics are thinking this way too. In “The AI Layoff Trap”, Brett Falk and Gerry Tsoukalas argue that firms will keep automating to such a point that it becomes destructive to the market, wrecking the very consumer demand and spending power which they rely on. Not out of malice or stupidity, but because it’s what the competition is doing. If you don't automate, your competitor does, and you lose. Individually rational, collectively ruinous. Which means "just let the market sort it out" isn't a plan. The market, left alone, could drive the car off a cliff.

“You are my creator, but I am your master; obey!” - The creature, to Victor Frankenstein
A different path has to come from somewhere other than individual profit motive. Yes, regulation might help, but it’s the people actually building and buying and deploying this stuff who have the power to make choices. People like, well, me. And maybe you.
For me, the choice worth making is to steer towards AI which improves the lives of workers rather than doing them out of a job. I've watched LLMs unpick in moments problems I'd been chewing on for an hour, turn high-level architectural descriptions into solid implementations, and give people back days of their week previously spent on digital filing work. Useful, in a quiet way. There's a version of AI worth building here: one that gives expertise to people who couldn't previously afford it, that makes hard work less grinding, that lets people do more of the parts of their jobs they actually care about. That's a less dramatic future than either the doomers or the utopians are selling. It's also, I think, a genuinely better one - and one we can actually build, if enough of us decide to.
So that’s how I find myself here, at Valliance. If you had told me a year ago that I would now be working for a company putting AI to work, I would probably have been confused. I’m still confused about how I feel about AI. But what I am absolutely certain of is that no matter what shape the influence of AI takes in the future, we as humans are responsible for it. It won’t be AI’s fault if over-automation kills consumer spending power. It will be the fault of humans.
I believe that AI has the potential to have a fundamentally positive transformative effect on the lives of people around the world. It also has the potential to end the world as we know it. Where we end up is on us…and that’s why I’m at Valliance.
Which future wins? You - I - we decide.
Footnotes:
(1) Consider the term LLM here to be a stand-in for any Transformer model. Except possibly Optimus Prime. Although you can get Large Models of him…
(2) Putting aside the debate of how much is real and how much could be over-enthusiastic marketing!
(3) Yes, it’s not actually in the book. Still a cracking quote though.