Valliance logo in black

13 predictions for 2026, and why at least one of us is probably wrong

Jan 20, 2026

·

2 Mins


Topics

Enterprise AI

IT Strategy

Leadership

Thought Leadership

AI Transparency

Every week, we come together as a team to learn, share thinking and challenge each other. We don't always agree, and that's a good thing. These are our individual predictions for 2026 and beyond, organised around the themes we keep coming back to: Agentic Systems, Ontologies, People First, Trust, Future Enterprise and Evolving Enterprise Technology.

_The Future Enterprise

Our focus on the future enterprise, where we believe AI can amplify human expertise at machine speed through unified data, adaptive processes, and collaborative decision-making.

1 - AI-native graduates become disproportionately valuable, whilst universities pivot to ‘critical thinking’

As AI moves from a helpful add-on to something foundational in how work gets done, younger graduates who've grown up with AI as a given actually become more valuable, not less, contrary to the flurry of moves at big companies to cull entry-level jobs.

They've learned, built things, and solved problems with AI as their default, making them a natural fit for workplaces where AI is baked in from the start. But here's what changes the game: higher education will finally realise its real superpower is teaching critical thinking. The 2025 WEF Future of Jobs report highlighted that AI requires much more emphasis on human judgment, not simply 'AI skills'.

As more companies realise that critical thinking (knowing when to trust, override, or augment AI) is the skill they actually value, universities will pivot hard, making it their core selling point rather than a happy accident. The graduates who combine AI fluency with genuine critical thinking will be the ones who thrive, as will the companies that put their faith in them.

_Cian Clinton, AI native graduate and Value Engineer at Valliance

2 - Small Language Models Achieve Production Dominance

A small language model (SLM) is trained on focused, domain-specific data rather than the entire internet, making it faster, cheaper to run, and more controllable than a large language model (LLM). By the end of 2026, SLMs (sub-7B parameters) will handle 60-70% of enterprise AI workloads.

Edge deployment requirements for latency-sensitive applications, the proliferation of NPUs (neural processing units) in consumer and enterprise hardware (Apple, Qualcomm, Intel), and cost arbitrage becoming untenable will drive this shift. Frontier model inference costs simply won't drop fast enough to offset rising token usage inside enterprises.

Distillation and fine-tuning techniques now reliably transfer 80%+ capability to compact architectures. Privacy and data residency mandates make local inference non-negotiable for regulated industries. Therefore, smaller and self-hosted models will proliferate.
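To make the distillation claim concrete: the core mechanism is training a compact student model to match a teacher's softened output distribution. A minimal sketch of that loss in plain Python/NumPy follows; the function names and temperature value are illustrative, not from any particular framework.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences between classes, not just its top pick.
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions:
    # the quantity a student minimises to absorb teacher behaviour.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([2.0, 1.0, 0.1])
# A student matching the teacher exactly incurs zero loss.
assert distillation_loss(teacher, teacher) == 0.0
# A mismatched student incurs a positive loss.
assert distillation_loss(teacher, np.array([0.1, 1.0, 2.0])) > 0.0
```

In practice this KL term is combined with a standard cross-entropy loss on labelled data, but the sketch shows why transfer to compact architectures is cheap: the student only needs to imitate distributions, not retrain from scratch.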

_Dom Selvon, Value Partner at Valliance

_Trust

Our views and thinking on how engineering through transparent systems, open dialogue, and outcomes builds trust over time.

3 - Trust, safety, and governance will evolve to become a core product strategy

Trust and safety will become primary buying criteria: as end users learn more about how AI works, they are becoming more wary and more selective about the products they engage with. As a result, trust and safety won't be something organisations can add later; they'll need to be built into the foundation of the product to succeed.

You can already see this in dating and community-based apps: if users don't feel safe or don't trust how the platform works, they simply don't engage.

As expectations rise, products will need to be much more intentional about how trust is designed, communicated, and maintained. Trust will be less a defensive measure and more a core driver of adoption and retention.

_Julia Sicot-Carr, Value Consultant at Valliance

4 - Trust Over Intelligence

In 2026, organisations will stop evaluating AI based on how smart it appears and start evaluating it based on whether it can be trusted in real operational conditions.

After years of impressive demos and ambitious pilots, enterprises are far more concerned with how systems behave under pressure than how clever they look in ideal scenarios.

The central questions are shifting from “what can it do?” to “can we explain it, can we control it, and can we prove it behaved correctly?” You can already see this change taking hold: teams are asking for decision traceability, risk functions are insisting on clear override and escalation paths, and architecture reviews are focusing less on model accuracy and more on failure modes and recoverability.

Trust is no longer a defensive requirement added at the end. It is becoming the primary criterion by which AI systems are designed, evaluated, and bought. 

_Alec Boere, Value Partner at Valliance

5 - Marketing and Customer Experience teams everywhere will freak out over the EU AI Act

Article 5(1) bans "unacceptable" AI like harmful subliminal or deceptive manipulation, exploitation of vulnerable people, and punitive, cross-context social scoring.

However, the Article contains broad words like "manipulative" and "distorting behaviour", which will lead risk-averse teams to instinctively lump in adaptive chatbots, conversion-optimised journeys and AI-driven segmentation, even though the provision is clearly aimed at dark-pattern systems that seriously undermine autonomy or cause significant harm, not at routine marketing optimisation and loyalty programmes.

This is why our People-First enablement approach has governance as a key pillar: alongside understanding the limitations and opportunities of AI, understanding the intent of legislation and regulation is key to being effective, making progress, and remaining safe.

_Paul Dawson, Value Partner at Valliance

_Evolving Enterprise Technology

Our views on the evolving technology landscape, where AI adoption challenges existing systems and unified approaches unlock potential.

6 - Coding tools will become more "Teammate" than "Tool"

The next phase of AI-driven software development will see LLMs evolve from tools into true pair programmers. Beyond one-off prompts and “vibe coding,” agentic systems will work alongside engineers throughout the lifecycle, helping clarify requirements, generate and review code, diagnose and resolve bugs, and reason about security and reliability. Rather than automating isolated tasks, these systems act as always-available teammates, collaborating in real time as part of the engineering workflow.

_Ronan Forker, Value Engineer at Valliance

7 - Prompted prototypes to production and beyond

Experience design tools are evolving faster than teams can develop competency. By the time one platform is mastered, three more have launched. Leaders need to help teams build transferable skills: prompt craft, basic development literacy, and a greater emphasis on design-systems thinking. In 2026 we'll see greater focus on the underlying structure of design files.

Semantically correct design toolkits, libraries and systems will be key to ensuring LLMs interpret design meaningfully, consistently and at scale. As designers our skill sets will broaden and we'll see bigger design paradigm shifts as people's interactions and expectations with technology continue to change.

As we find a balance of AI usage in design, I predict we'll see the emergence of visual languages that feel deliberately crafted or analogue, rather than the glossy veneer of AI-generated assets. Importantly, there will also be greater emphasis on transparency about what's created, in our writing and thought leadership, as we look to instil a greater sense of trust and human touch.

_Dan Bradshaw, Value Consultant at Valliance

_People First

Our thoughts on how AI can enable teams, not replace them, and where humans in the loop ensure AI works better for us all.

8 - Without focus, the AI gap will widen

AI at work is already benefiting the higher-skilled, better-paid far more than the lower paid, and that gap is set to grow in 2026. Studies show knowledge workers get tools that boost productivity and pay, while frontline staff more often get AI for monitoring and micro-management, reducing their autonomy and reinforcing existing inequalities.

Companies must recognise that innovative and profitable use of AI can originate anywhere, not just from those already highly paid and productive, and that the tools themselves make it more likely that innovative thinking will develop throughout the organisation's hierarchy. Enterprises need to enable access at all levels, and actively seek out uses of AI that don't just surveil, but augment lower-paid roles, making work more fulfilling and rewarding, which ultimately drives productivity, quality and customer satisfaction.

Valliance is committed to closing the AI gap, and urges companies to consider leveraging their corporate licensing muscle to offer personal AI usage to employees as an employment benefit, widening inclusion and the positive benefits of AI.

_Paul Dawson, Value Partner at Valliance

9 - Gemini vs Copilot - two opposing predictions!

At Valliance, the goal is learning and progress. This means we have to foster a culture of healthy challenge and bold opinions. We think the following brings this to life quite nicely.

First, predicting Google to win in the enterprise space this year:

Gemini Enterprise is currently the only enterprise AI productivity platform we would actively recommend. Meanwhile, Microsoft's strategy of embedding AI into every surface without user consent mirrors its failed Cortana approach, creating active user resentment, while enterprise IT departments push back against forced licensing bundling.

If it continues like this, Copilot usage will stagnate as users react against its constant bundling into everything Microsoft, and enterprise leaders seek single platforms that allow good AI use to proliferate in their enterprises.

Google's distribution advantage across 2.5 billion Android devices and its entire ecosystem eliminates the integration friction that plagues competitors, while its TPU infrastructure delivers cost advantages that OpenAI cannot match.

_Dom Selvon, Value Partner at Valliance

Then the counter perspective backing Microsoft:

We might have seen pushback on the bundling of M365 Copilot licences, and the omnipresence of that Copilot button across the M365 suite, but at the end of the day, an AI assistant that has access to multiple models and can fully integrate into the rest of your productivity stack is incredibly compelling.

Hundreds of thousands of public sector workers in the UK are starting to see this as they are given Copilot Chat access for free at work. We've also seen Microsoft be incredibly open to working with multiple model providers, which no other enterprise provider does, and of course it has Power Automate working well alongside, allowing integration with all sorts of enterprise platforms.

Although Gemini Enterprise is currently the only platform that we at Valliance believe is fully featured enough to drive desktop AI usage in a large enterprise, if Microsoft can deliver the type of analytics that Google has done so well, it will hit that note too, and then use its massive licensee base as a springboard to leapfrog.

_Paul Dawson, Value Partner at Valliance

_Ontologies

Our thinking on ontologies, the semantic layer that ensures agentic systems act with context, consistency, and accountability.

10 - Knowledge Over Models

By 2026, many AI initiatives will stall not because models are weak, but because organisational knowledge is. As AI moves into real decisions, retrieval exposes a harder truth: information is fragmented, outdated, and defined differently across teams. Confident answers collide, and trust disappears fast.

This is where ontologies stop being theoretical and become practical. They act as a stabilising layer in the AI stack, making core concepts explicit and shared, so organisations can define risk categories, explain decisions, show consistency, demonstrate traceability, and prove where guidance does and does not apply. Instead of each system or team inferring meaning independently, the organisation decides what things mean and under what conditions they are valid.
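To ground the idea that "the organisation decides what things mean and under what conditions they are valid", here is a deliberately tiny sketch of an ontology entry in code. All names (the concept, owner, and contexts) are hypothetical illustrations, not a real schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    """One organisationally owned definition: what a term means,
    who owns it, and in which contexts the definition is valid."""
    name: str
    definition: str
    owner: str
    valid_contexts: frozenset = field(default_factory=frozenset)

    def applies_in(self, context: str) -> bool:
        # An empty context set means the definition applies everywhere.
        return not self.valid_contexts or context in self.valid_contexts

# A miniature ontology: the single place terms are defined,
# instead of each system or team inferring meaning independently.
ONTOLOGY = {
    "high_risk_customer": Concept(
        name="high_risk_customer",
        definition="Customer flagged by compliance under policy X",
        owner="risk-team",
        valid_contexts=frozenset({"onboarding", "transaction_review"}),
    ),
}

c = ONTOLOGY["high_risk_customer"]
assert c.applies_in("onboarding")
assert not c.applies_in("marketing")  # guidance provably does not apply here
```

Even at this toy scale, the properties the prose asks for fall out: ownership is explicit, definitions are shared, and "where guidance does and does not apply" is a checkable question rather than a judgement call.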

Leaders are already talking less about which model to use and more about who owns knowledge, which definitions apply, and what happens when information is wrong or outdated. In 2026, AI won’t fail because it isn’t smart enough. It will fail because organisations don’t know what they know, or can’t trust it.

_Alec Boere, Value Partner at Valliance

_Agentic Systems

Our thinking on agentic systems, the autonomous entities augmenting human decision-making with context-aware action and accountability.

11 - Operational Agents, Not Autonomous Ones

Agentic AI will become far more practical and far less dramatic in the eyes of businesses and users. Enterprises will move away from fully autonomous agents, focusing instead on agents with clearly defined roles, boundaries, and responsibilities. The ambition shifts from replacing human judgement to supporting it in specific, repeatable ways.

Agents will operate within clear decision envelopes. They'll know what they can decide, when to pause, and when to escalate. Ground truth will be explicit, thresholds will be defined up front, and handover points will be designed rather than discovered through failure. This shifts agentic AI from unpredictable to dependable.
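As a hedged illustration of a decision envelope, the sketch below routes a hypothetical spend-approval decision by thresholds defined up front. The limits and the three outcomes are invented for illustration; the point is that decide/pause/escalate is a designed boundary, not emergent behaviour.

```python
from dataclasses import dataclass

@dataclass
class DecisionEnvelope:
    """Explicit boundaries for an agent: decide, pause, or escalate.
    Thresholds are set at design time, not discovered through failure."""
    auto_approve_limit: float   # below this, the agent decides alone
    escalate_limit: float       # at or above this, a human must decide

    def route(self, amount: float) -> str:
        if amount < self.auto_approve_limit:
            return "decide"     # inside the envelope: act autonomously
        if amount < self.escalate_limit:
            return "pause"      # pause and ask a human to confirm
        return "escalate"       # clear, pre-designed handover point

envelope = DecisionEnvelope(auto_approve_limit=100.0, escalate_limit=1000.0)
assert envelope.route(50.0) == "decide"
assert envelope.route(500.0) == "pause"
assert envelope.route(5000.0) == "escalate"
```

Because every routing outcome is deterministic and logged against a named threshold, the behaviour is explainable to a risk function before the agent ever runs.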

"Agentic" will stop being a buzzword and become a design pattern. Less about autonomy, more about structuring work so humans and machines complement each other. This mirrors how organisations already function: work is governed by roles, controls, and accountability. AI must fit that reality to scale.

This approach lowers the emotional barrier to adoption. Teams work more willingly with agents they understand and can intervene in. Over time, adoption becomes incremental and defensible; not a leap of faith, but confident steps toward automated operations.

_Alec Boere, Value Partner at Valliance

12 - Persistent Memory Becomes Table Stakes

All major foundation model providers will ship native memory and personalisation by Q2 2026. Memory architecture becomes a differentiating capability: what users do and say will be available across chats, and over time will make the tools more personalised to their specific usage.

Single-session interactions are fundamentally limited for high-value use cases. Enterprise deployment requires continuity across interactions, including support, advisory, and operational contexts. The technical barriers are solved: retrieval architectures have matured, context windows have expanded, and summarisation for memory compression is reliable. Competitive pressure does the rest: if one provider has it, all must follow.
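To show what "summarisation for memory compression" means mechanically, here is a naive sketch of a session memory: recent turns are kept verbatim, older turns are compressed into a rolling summary. The truncation stands in for a real summariser, and all names are illustrative.

```python
class SessionMemory:
    """Naive persistent-memory sketch: recent turns stay verbatim,
    older turns are compressed into a rolling summary (here, the
    first few words of each turn stand in for a real summariser)."""

    def __init__(self, keep_verbatim=3):
        self.keep_verbatim = keep_verbatim
        self.turns = []      # recent turns, kept word for word
        self.summary = []    # compressed history of older turns

    def add(self, turn: str):
        self.turns.append(turn)
        # Compress anything that falls outside the verbatim window.
        while len(self.turns) > self.keep_verbatim:
            old = self.turns.pop(0)
            self.summary.append(" ".join(old.split()[:4]) + " ...")

    def context(self):
        # What a model would see on the next interaction:
        # compressed history first, then recent turns verbatim.
        return self.summary + self.turns

mem = SessionMemory(keep_verbatim=2)
for t in ["user asked about pricing tiers in detail",
          "agent explained the enterprise tier",
          "user requested a renewal quote"]:
    mem.add(t)
assert mem.context()[0] == "user asked about pricing ..."
assert mem.context()[-1] == "user requested a renewal quote"
```

Production systems replace the truncation with LLM summarisation and add retrieval over the summaries, but the shape is the same: continuity across interactions at a bounded context cost.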

_Dom Selvon, Value Partner at Valliance

_And finally…

13 - Someone will announce they've created Artificial General Intelligence, even though they haven't…

The term Artificial General Intelligence / AGI (AI whose thinking is on par with human beings) has been discussed by major players in recent years, likely priming the market for an announcement of its creation. Let’s be honest though: this will be more of a marketplace flinch test than scientific proof of AGI.

This year, one of the frontier or independent companies will announce AGI creation, sparking discussions on its implications for business and society. Many will argue it's not really AGI, while others will discuss how to define and test AGI. YouTube videos, LinkedIn discussions, articles, reactions, interviews and lots of noise will follow.

Whether they've actually created AGI will become secondary, as others will quickly claim to have done it too, generating more noise and investor speculation. The term itself will evolve from its scientific origins to simply being a product name anyone can claim. Away from the noise, frontier models will likely continue with small fixes, updates, and performance improvements, and will continue to focus more on large organisational or sovereign adoption of their platforms.

_Tom Moran, Value Consultant at Valliance


Which of these will reshape your enterprise first? Let's navigate it together. And finally, to misquote The Go-Between… “the future is a foreign country”: none of us actually knows what’s coming, and we are continually surprised by what actually happens!



Are you ready to shape the future enterprise?

Get in touch, and let's talk about what's next.



Let’s put AI to work.

Copyright © 2025 Valliance. All rights reserved.
