Beyond Moats: AI Agents and the New Competitive Landscape
How AI agents are reshaping business strategies and challenging traditional concepts of competitive advantage.
Simone Cicero
From Marx’s General Intellect to AI Agents
In The Fragment on Machines (in 1857!), a section of his Grundrisse, Marx explored how machinery and automation impact capitalism and contradict its logic. He foresaw a shift where knowledge—not labor time—would become one of, if not the, primary drivers of production in society. In a recent lecture I took the liberty of translating, Italian semiologist Paolo Virno expanded on the key thesis, arguing that the General Intellect – the key concept Marx introduces – can be seen as an impersonal, collective intelligence embedded in society – not just an abstract idea but a material phenomenon, a reality that manifests through technology.
This insight becomes particularly relevant in our era of generative AI and networked systems, where collective knowledge is not just stored but transformed into an operational capability, the nature of which we’re scrambling to understand.
Virno’s interpretation suggests that we’re witnessing not just labor automation but the materialization of human cognitive capacity itself—a process where our shared knowledge becomes an active force in shaping reality. This transformation could challenge traditional notions of individual authorship and creativity: if innovations emerge from the interplay between individual minds and the collective intelligence embedded in our technical and social systems, how do we reorganize society?
This shift is reshaping the balance of power between capital and labor. Lars Doucet’s recent article Will capital-havers or capability-havers gain the most from AI? asks whether the rise of AI benefits those who hold capital (investors, infrastructure owners) or those who hold capabilities (creators, engineers, builders of AI systems), and argues it will benefit the latter:
The technical term of art for the situation we now find ourselves in is called not having a moat. A more casual way to put it is, “the dust cloud you see rapidly forming on the horizon evidences a band of rapidly approaching competitors hell-bent on violently eating your lunch for breakfast.”
Even Sam Altman, in his recent Three Observations, acknowledges the possibility of a fundamental realignment, stating:
“The balance of power between capital and labor could easily get messed up, and this may require early intervention.”
If AI finally represents a tangible manifestation of the General Intellect in society, then economic value is shifting, and this may call into question traditional business strategies relying on moats, differentiation, and defensibility—which now appear fragile.
The Fading Power of Differentiation and Defensibility
Despite this evident transformation, many industry pundits appear to remain attached to outdated ideas of competitive strategy. They’re not to blame, and certainly most of us are confused at the moment, but the conventional playbook—based on differentiation through branding, network effects, and scale—presumes that AI will merely augment existing market structures rather than disrupt them.
If AI agents can instantly compare, switch, optimize, and integrate across services in real time (through standards like Anthropic’s Model Context Protocol), this weakens the power of network effects, lock-in strategies, and embedded integrations for defensibility. In short: if cheap and easy ways to integrate software or services for any workflow exist, the work of developing countless integrations matters less for users choosing products.
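To make this concrete, here is a minimal sketch in the spirit of a shared tool-description protocol (the service names, fields, and functions are all invented for illustration, not actual MCP APIs). Once every service publishes its capabilities in one common format, any agent can discover, compare, and call providers generically, with no bespoke integration work per service:

```python
# Hypothetical service descriptors in one shared, machine-readable format.
# In a real protocol (e.g. MCP) these would be discovered at runtime.
SERVICES = [
    {"name": "acme_shipping", "action": "quote",
     "params": {"weight_kg": "number"}, "price_per_kg": 2.5},
    {"name": "globex_shipping", "action": "quote",
     "params": {"weight_kg": "number"}, "price_per_kg": 1.9},
]

def call(service, weight_kg):
    """Generic invocation: the agent needs no service-specific code."""
    return service["price_per_kg"] * weight_kg

def best_quote(services, weight_kg):
    """An agent comparing and switching providers in real time."""
    quotes = {s["name"]: call(s, weight_kg) for s in services}
    return min(quotes.items(), key=lambda kv: kv[1])

provider, price = best_quote(SERVICES, 10)
print(provider, price)
```

The point of the sketch is that the cheapest provider wins on every query, regardless of how many integrations either incumbent has built: the shared format does the integration work once, for everyone.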
Historically, dominant platforms have imposed their own ontologies – for example in modeling the core user experience and defining interfaces with third parties. The platform owner dictates workflows, structures, and integrations—to an ecosystem of suppliers, developers, and users. They built their power on consistent market rules:
- High integration costs: it’s impractical for developers to build for multiple competing platforms, so they choose the most used;
- Market positioning and lock-in: businesses align with leading platforms to minimize switching costs;
- Network effects: the ecosystem’s value increases as more users or developers adopt it.
But with AI-driven agents, integration costs collapse.
Agents can interpret, adapt, and mediate between systems dynamically, stripping platforms of their ability to impose proprietary workflows. This raises a critical question for product and platform strategy:
If AI agents can seamlessly connect and optimize across different services, does differentiation still matter? Can traditional moats survive?
Many industry leaders are failing to acknowledge this challenge (and maybe they’re right): they cling to ideas like AI agent marketplaces or vertical AI specializations, despite growing evidence that generalist AI models can perform across domains. The belief that companies can retain control through differentiation seems (to me) increasingly detached from reality.
The Role of Ontologies in AI Governance
If agents come to dominate economic coordination, the key question becomes: how do we ensure accountability, explainability, and strategic oversight?
John Grant’s piece on AI-Mediated Protocols offers a compelling answer: in a world of semi-autonomous AI agents, ontologies (protocols) become the enabling constraints for coherence.
Ontologies are not just standards for data exchange and interoperability: they can represent our shared understandings of how entities relate and interact within the systems we adopt. In AI governance, ontologies will serve as frameworks that:
- Help AI agents understand their environment in ways that humans can interpret, audit, and control.
- Set rules for verification, accountability, and ethical constraints.
- Bound agent behavior, separating the spaces of autonomous and non-autonomous decision-making.
- Define what counts as meaningful interaction within a system.
These protocols will establish the social contracts and governance mechanisms needed for AI systems to operate responsibly within human societies.
Without shared ontologies, AI agents could generate an endless haze of conflicting, self-referential outputs, making human oversight impossible. But with well-designed and shared (among us) ontologies, agents can operate within defined enabling constraints.
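As a toy sketch of such enabling constraints (the entities, relations, and action names below are invented for illustration), a shared ontology can be encoded as a machine-readable set of entity types, permitted relations, and action tiers, against which any agent’s proposed action is validated before execution:

```python
# A toy shared ontology: which entity types exist, how they may relate,
# and which actions an agent may take autonomously vs. under human review.
ONTOLOGY = {
    "entities": {"invoice", "payment", "customer"},
    "relations": {("payment", "settles", "invoice"),
                  ("invoice", "billed_to", "customer")},
    "autonomous_actions": {"read", "draft"},
    "reviewed_actions": {"approve", "pay"},
}

def validate(action, entity, ontology=ONTOLOGY):
    """Enabling constraint: reject anything outside the shared ontology,
    and route consequential actions to human oversight."""
    if entity not in ontology["entities"]:
        raise ValueError(f"unknown entity: {entity}")
    if action in ontology["autonomous_actions"]:
        return "execute"
    if action in ontology["reviewed_actions"]:
        return "needs_human_review"
    raise ValueError(f"action outside ontology: {action}")

print(validate("draft", "invoice"))
print(validate("pay", "invoice"))
```

The design choice worth noting: the ontology does not script the agent’s behavior, it only defines what counts as a meaningful, auditable interaction and where autonomy ends, which is exactly the "enabling constraint" role described above.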
Ontologies are a Political Question
This brings us to the final point: who controls these ontologies?
The discussion ties back to Illich’s idea of Limits—where constraints are tools for autonomy rather than control. The boundaries we define for AI agents will shape the power structures of the AI economy and its societal outcomes.
As of now, it appears that:
- No constraints may lead to a world of fragmented, non-interoperable AI agents, making human governance frankly impossible.
- Centralized constraints imposed by major platforms risk reinforcing AI monopolies and stifling the societal impact of the Knowledge Commons (a.k.a. the General Intellect).
- Distributed, collective, and open ontologies could democratize AI governance, ensuring coordination remains collective rather than captured by a few dominant players.
If AI is the materialization of the General Intellect, those defining its ontologies—and coordination ability—will hold immense power.
As AI systems reshape markets, labor, and capital, the interfaces they use to interconnect must be a central concern, not an afterthought, as they make interaction visible and collectively governable.
Such ontologies shouldn’t be the concern of regulating authorities or platform owners alone: boundary-making and converging on common languages should be the concern of everyone wanting to cooperate in an AI-powered economy and to have a chance of steering its outcomes.
This week’s readings point to this growing realization:
- AI is dissolving traditional advantages of scale, differentiation, and defensibility.
- AI-driven agents require new governance models, where shared ontologies serve as enabling constraints.
- Such ontologies are not neutral but inherently political—defining them means defining the power structures of the AI-driven economy.
As we enter this new landscape, the choices we make now—about who defines the languages and protocols for Agentic AI systems—will determine whether AI liberates or enslaves, whether it fosters open collaboration or entrenches monopolistic control, and how economic coordination itself will be shaped.
What does this mean for your organization
To stay ahead, organizations must thus prioritize engaging with open, standardized protocols—especially in AI—rather than locking themselves into closed systems. As Ben Thompson aptly puts it:
“The power of AI […] comes from knowing everything; the (perhaps doomed) response of many will be to build walls, toll gates, and marketplaces to protect and harvest the fruits of their human expeditions.”
Clinging to exclusivity and protectionism risks putting you on the wrong side of history.
The real shift isn’t just technological but cultural (almost political): from competitive moats to collaborative ecosystems. At the same time, leveraging AI agents will effectively require you to develop a deep understanding of what your organization truly does (inside its target ecosystems of value). Only then will you be able to design systems that work with AI, rather than against it.