Agents of Chaos: The AI Hyperbole Risk

No one can agree on what an AI agent is, but everyone’s selling one.
Buzzwords often do more harm than good, and "Agent" is the latest addition to the list. As the article linked below highlights, the people and companies who create and fund Artificial Intelligence cannot reach consensus on what an Agent is. How, then, are consultants, integrators and software companies able to package and sell Agents?
No One Knows What The Hell An AI Agent Is
The tech industry continues to struggle with defining AI agents, as companies race to release products with contradictory definitions and capabilities under the same label.
Most Agents aren’t much more than workflow orchestration tools with an interface bolted on, and marketing them as something more is a disservice to the industry, both now and in the future.
Anthropic, makers of Claude.ai, provide a useful frame of reference: workflows are systems where LLMs and tools are orchestrated through predefined code paths, whereas agents dynamically direct their own processes and tool use.
Agents have more autonomy, which makes them inherently less predictable. That freedom is wonderful for scope-defined research or for calculating routes against changing parameters in a fixed simulation, but it is less than ideal for automation, reproducibility, customer-facing interfaces, or anywhere adherence to a vision or set of values matters.
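To make that distinction concrete, here is a minimal, hypothetical Python sketch contrasting a fixed workflow pipeline with an agent loop in which the model chooses its own next step. The helper names (call_llm, search_web) are illustrative placeholders, not any vendor’s API.

```python
# Hypothetical sketch: workflow orchestration vs. an agent loop.
# `call_llm` and `search_web` are illustrative stand-ins, not a real vendor API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any hosted or local language model."""
    raise NotImplementedError

def search_web(query: str) -> str:
    """Placeholder tool: returns raw search results for a query."""
    raise NotImplementedError

# 1. Workflow orchestration: every step and its order are fixed in code.
#    Predictable and reproducible, but it is not an "agent".
def workflow(topic: str) -> str:
    results = search_web(topic)                      # step 1, always runs
    summary = call_llm(f"Summarise: {results}")      # step 2, always runs
    return call_llm(f"Write a brief on: {summary}")  # step 3, always runs

# 2. Agent loop: the model decides which tool to call next, or when to stop,
#    so the path through the tools (and the output) varies run to run.
def agent(goal: str, max_steps: int = 5) -> str:
    context = goal
    for _ in range(max_steps):
        decision = call_llm(
            f"Goal: {goal}\nContext: {context}\n"
            "Reply with 'SEARCH: <query>' or 'FINISH: <answer>'."
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        query = decision.removeprefix("SEARCH:").strip()
        context += "\n" + search_web(query)          # path chosen at runtime
    return call_llm(f"Best-effort answer for: {goal}\nContext: {context}")
```

The workflow’s behaviour is fixed at design time; the agent’s path is chosen at runtime, which is precisely where the predictability trade-off described above comes from.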
So, why the push for agents?
Table Stakes: AI Everywhere.
ChatGPT, Claude, Grok, Gemini, Meta AI, Mistral, Qwen, DeepSeek: everyone has an AI. Some are paid, others are free. Most are proprietary, while the models originating from China are largely free and/or open source, which is refreshing to see because it democratises access and unlocks deeper innovation. The implication is commodification, with little monetary value accruing to the model providers.
Generative AI: Maturing, Multifaceted Technology.
AI trained on a large corpus of data is capable of mesmerising outputs. We marvel because generative AI bridges skills gaps immediately and is affordable for most people; skill and cost were historically two massive barriers to entry.
Large Language Models (LLMs) have changed much of the writing one encounters. People use AI to write everything from essays to cover letters. You can use Suno.com to create music, Midjourney to create images, and Higgsfield.AI to transform those images into jaw-dropping video. These are tremendous technological accomplishments and a massive equaliser for people who didn’t have the training or opportunities to develop and refine their own skills. The fidelity and quality of these tools will only continue to improve, producing stunning work at a fraction of the cost when harnessed correctly.
The technology is not without its risk vectors, though. When people use LLMs to replace their thinking instead of improving it, mundane artefacts emerge. When music is uploaded purely to capture streaming dollars, it dulls collective standards.
Deezer Reveals 18% of All New Music Uploaded to Streaming is Fully AI-Generated
Recent analysis shows a significant increase in AI-generated music content across streaming platforms, raising questions about the future of music creation and artist compensation.
Art, writing included, is creative energy shaped by emotion, philosophy and lived experience, guided by intention. While some AI-generated outputs, meticulously crafted through user input, are truly excellent, most users are not incentivised by qualitative outcomes; their output, which unfortunately constitutes a significant portion of the content available online, lacks the depth, coherence and intentionality of artistic expression.
This compromised artistic integrity creates widening deviations between intention and outcome, a form of emergent chaos more damaging than noise because it risks stagnation and the standardisation of what will eventually become mediocrity.
The Rise of the “Agent” Label
The Agent label today falls into two categories.
Guided Execution:
- Scoped applications of the technology in which AI equipped with tools works semi-autonomously to produce outputs; examples include Deep Research and agent-assisted coding tools like Codex, Claude Code and Cursor. The quality and capacity of the latter are debatable, as the technology is still at an early stage of development; like generative AI, it will continue to improve and should be harnessed to augment human effort and amplify productivity.
Sensationalism:
- Lack of Authenticity: As technology commodifies at the base layers, it loses its unique value proposition. As with email providers, most users find AI tools interchangeable because they all serve the same basic functions. Entities with no authentic value to offer, or to layer on top, resort to sensationalism to attract attention and generate interest.
- Redundancy: Free generative AI can provide strategy, visual and written content, planning and ideas that were previously gated by access and/or knowledge. Organisations and individuals who stand to lose revenue are incentivised to maximise profit while they still can.
Sensationalism is a symptom of mediocrity. In this sense the term Agent is akin to ostentatious thumbnails and clickbait titles. An over-the-top thumbnail doesn’t lead to high-quality content any more than a social post designed to go viral enriches its audience. Worse, it attracts opportunists and grifters, both of whom add noise and obfuscate signal. Entities selling agents stand to gain by capitalising on industry-wide momentum, or by differentiating their otherwise undifferentiated and often unnecessary offerings.
As emerging technology changes the landscape, new solutions must take centre stage and legacy systems must recede; otherwise stagnation ensues. We need new surfacing mechanisms, not legacy algorithms: ones which don’t prioritise the provider, and which scan for signal while filtering out the incompatible and the undesirable.
Anomaly’s Stance: What True Agency Requires
Our stance is that AI Agents are a post-personalisation paradigm. True Agency in the context of Artificial Intelligence requires:
Trust: in less abstract terms, predictability. Today’s outputs are unreliable, at times undesirable, and unpredictable. If an undesirable output is predictable, then the present solution represents a design failure or incompetence.
Personalisation: understanding goals, values and context. Generic models cannot serve as effective agents because they lack a persistent contextual understanding, which combines long-term memory, deep contextual awareness and adaptive learning.
Security and Safety: agent contributions must be structured to be additive, safe and private. Current architectures and prevalent business models are conceptually incompatible with this ethos.
Developing these capabilities is currently costly, resource-intensive and difficult to scale.
Training or fine-tuning models per user is computationally expensive. Maintaining stateful interactions and memory adds complexity, and most companies aren’t ready to invest in the infrastructure that real agent-like systems need while the technology arc hasn’t reached maturity.
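As a rough illustration of why statefulness adds complexity, consider a minimal, hypothetical sketch of a per-user memory layer. Every name here (UserMemory, build_prompt) is invented for illustration; a production system would also need durable storage, retrieval ranking, encryption and consent handling.

```python
# Hypothetical sketch of the per-user state a personalised agent must carry.
# All class and field names are illustrative; real systems also need durable
# storage, retrieval ranking, encryption and consent management.

from dataclasses import dataclass, field

@dataclass
class UserMemory:
    goals: list[str] = field(default_factory=list)          # long-term memory
    preferences: dict[str, str] = field(default_factory=dict)
    recent_interactions: list[str] = field(default_factory=list)

    def remember(self, note: str, keep_last: int = 50) -> None:
        """Adaptive learning, crudely: retain a rolling window of interactions."""
        self.recent_interactions.append(note)
        self.recent_interactions = self.recent_interactions[-keep_last:]

def build_prompt(memory: UserMemory, request: str) -> str:
    """Contextual awareness, crudely: inject persistent state into every call."""
    return (
        f"User goals: {memory.goals}\n"
        f"Preferences: {memory.preferences}\n"
        f"Recent context: {memory.recent_interactions[-5:]}\n"
        f"Request: {request}"
    )

# Even this toy version shows the cost: state must be stored, updated,
# secured and paid for per user, on every single interaction.
```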
Even in the B2B context, this is a re-enactment of the cloud vs on-prem debate: organisations require personalisation (knowledge of their structures, workflows and repositories) for the technology to be truly effective, and that requirement carries security and dependency implications.
Trustworthy AI Agents will require true personalisation and a rethink of monetisation and ownership models.
The Real Costs of the Hype
The rush to label everything as an "AI Agent" risks:
- Lowering expectations for what real agency should be.
- Misleading investors and wasting funds.
- Setting negative precedents and creating systemic obstacles for future entrepreneurs.
- Encouraging dependency on systems that lack accountability.
- Distracting from the harder, more important work of building genuinely helpful, consistent, and ethical AI assistants.
- Reconstructing old paradigms. Chatbots risk recreating the IVR/Call Centre loop of frustration.
The Road Ahead:
Agents will ideally coexist with us as collaborators, personalised in understanding and tempered in ability. For this vision to come to fruition, we must channel our energy towards:
- Adaptive learning in systems design
- Intent modelling in interface design
- Protocols that transcend privacy limitations
- Architectures that support persistence and emergent capabilities
At Anomaly, we believe that personalisation is the key unlock for Artificial Intelligence and the fulcrum on which the industry will ultimately converge. This belief forms the foundation of our thesis and guides our work in Long-term Memory, Context-Aware Models, User-Specific Tuning Methodologies and Ethical Frameworks, so that when agents do emerge, they enable humans to build a safer, more empowered and more equitable future.
AI is broadly available infrastructure, and the direction and depth of our collective building will shape what comes next. That building must therefore be ethical, resilient and performant, and it must focus on ingenuity, because new paradigms demand that we expand our perspectives to identify, accommodate and address growing opportunities and vulnerabilities alike.