In 1975, Frederick Brooks published a slim, elegant book about why large software projects fail. The Mythical Man-Month became a classic not because it solved the problem, but because it named something that everyone in the field had experienced and nobody had articulated properly: adding people to a late project makes it later. The reason, Brooks argued, was deceptively mathematical. Every person you add to a team creates new communication channels, and the number of those channels grows not linearly but quadratically, following the formula n(n−1)/2.1 A team of five people has ten channels of communication. A team of ten has forty-five. A team of twenty has one hundred and ninety. By the time you reach fifty people, you are managing over twelve hundred potential lines of interaction.
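Brooks's arithmetic is easy to verify. A minimal sketch (the function name and the list of team sizes are illustrative, not from Brooks):

```python
def channels(n: int) -> int:
    """Bilateral communication channels in a team of n people.

    Each of the n members can pair with n-1 others, and each pair
    is counted once, giving n*(n-1)/2.
    """
    return n * (n - 1) // 2

for size in (5, 10, 20, 50):
    print(f"{size:>3} people -> {channels(size):>5} channels")
# ->   5 people ->    10 channels
#     10 people ->    45 channels
#     20 people ->   190 channels
#     50 people ->  1225 channels
```

Note the shape of the curve: doubling headcount roughly quadruples the channel count, which is exactly the gap between linear resource growth and superlinear coordination growth that the rest of this piece is about.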
Brooks was writing about software, but the principle he described has nothing to do with code. It is a fundamental property of interconnected systems, and it operates with the same relentless arithmetic in law firms, hospitals, consulting partnerships, government agencies, and multinational corporations. The moment a system grows beyond a certain threshold, the internal connections among its parts begin to overwhelm the capacity of anyone to understand, let alone manage, the whole.
This is where most people reach for the obvious solution: break it apart. Divide the team. Split the product. Modularize the organization. Create clear boundaries, defined responsibilities, and manageable units. It sounds right. It feels right. And it is, in fact, the most viable response to exploding complexity. But here is the part that catches nearly everyone off guard: decomposition works, and it also creates an entirely new category of problems. The cure does not eliminate the disease. It transforms it.
Understanding this dynamic properly requires going beyond management clichés about "silos" and "alignment." There is real intellectual architecture behind it, developed across decades by people thinking seriously about how complex systems behave. And right now, as organizations rush to integrate AI into their operations, that architecture is more relevant than it has been in years.
On hierarchies and modularization
The Brooks formula is a starting point, but its implications go further than most people appreciate. The problem is not just that communication channels multiply. The problem is that they multiply in a way that is almost invisible until the system reaches a tipping point. And because many of these interactions are mediated through tools or routed through hierarchy rather than conducted face to face, the accumulating load is even harder to see.
Consider a consulting firm that grows from eight partners to sixteen. On paper, the firm has doubled. In practice, the number of bilateral relationships has gone from twenty-eight to one hundred and twenty. The amount of coordination work has more than quadrupled, but the firm's revenue has only doubled. That gap between the linear growth of resources and the superlinear growth of internal complexity is where organizations lose their footing. Profit margins shrink, decisions slow down, quality becomes uneven, and people start feeling that something is wrong without being able to pinpoint what it is.
Herbert Simon, the Nobel laureate and polymath who spent most of his career studying how people and organizations make decisions, saw this problem with great clarity. In 1962, he published a paper called The Architecture of Complexity2, in which he argued that all durable complex systems share a structural feature: they are hierarchical. Not in the narrow sense of bosses and subordinates, but in the deeper sense that they are composed of semi-independent subsystems, which are themselves composed of smaller subsystems, all the way down to whatever elementary components you care to define.
The reason, Simon proposed, is evolutionary. Complex systems that are not hierarchically organized are unlikely to persist long enough to become complex. He illustrated this with a parable about two watchmakers, Hora and Tempus, both building watches of equal complexity from a thousand parts. Tempus assembled his watches as a continuous sequence, so that any interruption meant starting over. Hora designed his watches in stable subassemblies of about ten pieces each, which could be combined into larger assemblies.3 When interruptions came - and they always came - Hora lost only the work on the subassembly he was currently building. Tempus lost everything. Hora prospered. Tempus went out of business.
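Simon's parable can be given rough numbers. Under a toy model - the per-part interruption probability is an assumption of this sketch, not a figure from Simon - the chance of finishing an uninterrupted run of k assembly steps is (1 − p)^k, and the contrast between the two watchmakers becomes stark:

```python
# Toy model of Simon's two watchmakers. Assumption (not from Simon):
# each part placed carries an independent interruption probability p,
# and an interruption destroys only the assembly currently in progress.
p = 0.01  # illustrative interruption rate per part

# Tempus needs one uninterrupted run of all 1000 parts.
tempus_success = (1 - p) ** 1000

# Hora builds 111 stable subassemblies of ~10 parts each
# (100 base modules, 10 mid-level assemblies, 1 final assembly);
# each run risks only ~10 steps at a time.
hora_module_success = (1 - p) ** 10

print(f"Tempus finishes a watch in one run with prob {tempus_success:.2e}")
print(f"Hora finishes any one subassembly with prob {hora_module_success:.3f}")
print(f"Expected attempts: Tempus ~{1 / tempus_success:,.0f}, "
      f"Hora ~{111 / hora_module_success:,.0f} for all 111 subassemblies")
```

With these illustrative numbers, Tempus needs on the order of tens of thousands of attempts per finished watch, while Hora needs barely more than the 111 assembly runs the design requires. The advantage comes entirely from the structure, not from skill.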
The lesson is structural, and it applies far beyond watchmaking. Systems that can be decomposed into semi-autonomous modules are more robust, more adaptable, and more comprehensible than systems that cannot. Simon called this property "near-decomposability" - a term that deserves more attention than it usually gets, because the qualifier "near" is doing most of the work. The modules are not fully independent. They interact. But the interactions within each module are significantly stronger and more frequent than the interactions between modules.
This is what makes modularization the right response to growing complexity. You are not eliminating interdependence. You are concentrating it. You are creating zones of high internal coherence and relatively low external coupling. Within each zone, things can be managed. Between zones, the connections are controlled and minimal. At least, that is the theory.
The other side of the trade
Here is where practice diverges from theory in ways that matter enormously.
Martin Fowler, one of the most influential thinkers in software architecture, has spent years cataloguing what actually happens when organizations adopt modular architectures like microservices.4 His assessment is characteristically honest. The benefits are real: modular boundaries enforce discipline, independent deployment reduces bottlenecks, and teams gain autonomy. But so are the costs. Distributed systems are harder to program. Remote calls are slow and unreliable. Data consistency across modules becomes a genuine headache. And the operational complexity of managing dozens or hundreds of loosely coupled services requires a level of organizational maturity that many teams do not have.
What Fowler describes in the context of software is a pattern that plays out identically in organizational design. Take a growing professional services firm that decides to reorganize from a single pool of consultants into three specialized practice groups. The motivation is sound: each group can develop deeper expertise, move faster on client work, and operate with less internal friction. But the reorganization immediately generates a new set of demands. Who coordinates when a client engagement requires capabilities from two practice groups? How are shared resources allocated? What happens when priorities conflict? Who maintains the knowledge that used to flow freely across the undivided firm?
These are not incidental problems. They are the structural consequences of modularization itself. Every boundary you draw creates two things simultaneously: clarity about what is inside, and a new interface that must be actively managed. The boundary reduces one kind of complexity - the overwhelming tangle of internal connections - while introducing another kind: the overhead of coordinating across boundaries.
Melvin Conway observed a version of this as early as 1968, when he published what has since become known as Conway's Law: organizations that design systems are constrained to produce designs that mirror their own communication structures.5 The implication cuts both ways. If you want a modular product, you need modular teams. But modular teams will also produce modular products whether that is what you intended or not, and the seams in the product will track the seams in the organization, including the places where communication is weak.
Conway's insight is important because it makes visible something that is easy to overlook: modularization is not a purely rational design decision. It is shaped by the social and political realities of the organization undertaking it. The boundaries you draw are not just technical or functional. They reflect who talks to whom, who trusts whom, and where power sits. When those boundaries are misaligned with the actual flow of work, the result is not elegant modularity but fragmentation.
The zone between too much and too little
So, we have a genuine dilemma. A growing system that is not decomposed becomes unmanageable. A system that is over-decomposed drowns in its own interfaces. The optimum lies somewhere in between, and finding it requires paying attention to factors that are rarely discussed in the same conversation.
The first factor is the nature of the dependencies. Not all connections between components carry the same weight. Some are structural and stable - the finance team always needs data from operations. Others are contingent and evolving - two product teams happen to be working on features that interact this quarter. Modularization works best when it respects the difference, drawing hard boundaries around stable clusters of tight dependency and leaving softer, more flexible connections for everything else.
The second factor is what you might call the cognitive budget of the organization. Every boundary requires people to maintain a mental model not only of their own domain but also of the interfaces to adjacent domains. There is a limit to how many interfaces any individual or team can track before the overhead eats into productive work. Simon understood this: his concept of bounded rationality6 - the idea that human decision-making is constrained by available information, cognitive limitations, and finite time - applies directly to how organizations process their own internal complexity.
The third factor is the rate of change. A system that operates in a stable environment can tolerate heavier modularization, because the interfaces between modules do not need to be renegotiated often. A system that must adapt quickly to external change needs lighter, more permeable boundaries, because rigid module structures slow down the organization's ability to reconfigure.
These three factors interact in ways that make cookie-cutter solutions useless. A holding company managing a portfolio of independent businesses can modularize heavily, because the dependencies between portfolio companies are minimal and the interfaces are mostly financial. A hospital cannot modularize in the same way, because the dependencies between emergency medicine, surgery, radiology, and pharmacy are dense, fast-moving, and life-critical. The optimal degree of decomposition is not a universal constant. It is a function of the specific system's dependency structure, cognitive capacity, and environmental volatility.
Enter AI, and watch the paradox accelerate
This is the point where the conversation would normally end with a plea for thoughtful organizational design and a respect for trade-offs. But there is a new variable in the equation, and it changes the dynamics considerably.
When organizations introduce AI into their operations, they are not simply adding a tool. They are introducing a new category of actor into a system that was already struggling with coordination. And the complexity paradox, far from being resolved by AI, is intensified by it in ways that most leaders have not yet fully grasped.
Start with the most straightforward effect. AI tools multiply the number of nodes in the system. Before AI, Brooks's formula applied to human actors: every person added to a team created new communication channels. Now add AI agents, co-pilots, automated workflows, and machine-learning systems that generate outputs consumed by other parts of the organization. Each of these represents a new node with its own interfaces, its own data requirements, its own failure modes, and its own need for oversight. McKinsey, in its recent work on agentic organizations, describes a horizon in which realizing ROI requires organizations to activate thousands of AI agents enterprise-wide.7 Apply Brooks's arithmetic to that landscape and the number of potential interaction channels becomes staggering. The coordination overhead does not merely grow; it changes in character, because some of the actors in the network are no longer human, and the handoff protocols between human judgment and machine output are still being invented.
Then consider the modularization response. Organizations that are thoughtful about AI adoption instinctively reach for the same structural answer that Simon and Brooks would recommend: decompose the problem. Create specialized AI agents for specific functions. Give each agent a bounded domain. Define APIs and protocols for how agents interact with each other and with human teams. This is sound engineering, and it mirrors the logic of microservices architecture applied to organizational intelligence. But it also triggers the same paradox.
Every specialized AI agent you deploy creates a new interface that needs to be managed. Who ensures that the output of the marketing agent is consistent with the commitments made by the sales agent? Who arbitrates when the risk-assessment agent and the revenue-optimization agent reach contradictory conclusions? Who monitors the drift in an agent's behavior over time as its training data evolves? The answer, increasingly, is more agents: orchestration agents, guardian agents, monitoring agents. MIT Sloan and BCG, in their 2025 research on the emerging agentic enterprise, describe organizations building layered systems of agents that supervise other agents.8 This is modularization generating new interfaces generating new modules generating new interfaces - the paradox running at machine speed.
What makes this particularly treacherous is that AI adoption tends to happen unevenly across an organization. Research confirms a pattern that anyone working with large enterprises will recognize: individual employees and forward-leaning teams adopt AI tools rapidly, while governance structures, interface protocols, and cross-functional coordination frameworks lag far behind. The result is a version of Conway's Law with a new twist. The communication structure of the organization is no longer shaped only by human relationships. It is shaped by a patchwork of human teams, AI tools adopted bottom-up without central coordination, and enterprise AI systems deployed top-down with unclear boundaries. The system architecture that emerges from this communication structure is, predictably, incoherent.
And here we arrive at the deeper problem, the one that connects the complexity paradox to the psychological dynamics of how leaders actually respond to this situation.
Superlinear growth and collective denial
If the principles of the complexity paradox are well understood - and they have been articulated by Brooks, Simon, Conway, and Fowler, among others - why do organizations keep making the same mistakes? And why is AI making this worse rather than better?
Part of the answer is perceptual. The superlinear growth of complexity is genuinely hard to see. Linear growth is intuitive. You hire ten people, you have ten more people. You deploy five AI agents, you have five more AI agents. But the combinatorial explosion of internal connections that accompanies that growth is not something that shows up on a dashboard or in a quarterly review. By the time the symptoms become unmistakable - slowing decisions, rising coordination costs, an uneasy sense that nobody quite has the full picture - the system is already deep into the zone where intervention is both necessary and difficult.
Part of the answer is emotional. Modularization is often experienced as a kind of loss. When a firm splits into practice groups, or a department divides into semi-autonomous teams, people who used to share a common identity now belong to separate units. There is a grief in that, however rational the restructuring may be. The resistance is not irrational. It is a natural response to the dissolution of familiar bonds and shared context. People sense, often correctly, that something valuable will be lost in the transition - the informal knowledge exchange, the serendipitous collaboration, the sense of belonging to a whole rather than a part.
AI amplifies this emotional dimension in a specific way. When the modularization involves not just splitting human teams but also inserting non-human actors into the workflow, it touches something that goes beyond structural rearrangement. It raises questions about professional identity, expertise, and the value of human judgment that most organizations are poorly equipped to discuss openly. Executives I work with rarely frame their resistance in these terms. They talk about "implementation challenges" and "change management." But beneath the rational language, there is often a deeper anxiety: if AI can handle parts of what I do, what is my role in this new structure? That question is uncomfortable enough for an individual. Multiply it across an entire leadership team and you get a system-wide defense mechanism that manifests as endless pilot programs, postponed decisions, and a peculiar form of magical thinking in which AI is simultaneously the solution to everything and the responsibility of nobody.
And part of the answer is a kind of collective denial about what comes next. Leaders who push through AI-driven reorganizations often underestimate the new problems these will create, because acknowledging those problems would undermine the narrative that drove the investment. So, the governance model that the new structure requires gets designed too late, or not at all. The interface protocols between human teams and AI systems are left vague. The coordination costs are treated as teething problems rather than structural features. When the predictable difficulties appear - duplicated functions, inconsistent outputs, slower cross-unit decision-making - they are met with surprise and frustration rather than with the sober recognition that this is exactly what the complexity paradox predicts.
Working with the paradox in an AI world
There is a temptation, having laid out the problem in these terms, to conclude that the situation is hopeless. It is not. But it does require accepting that the complexity paradox cannot be resolved, only managed, and that management in this case means something more nuanced than deploying the next wave of technology.
The first practical implication is that AI adoption should follow the same logic as any other form of modularization: deliberate, incremental, and respectful of the interfaces it creates. The instinct to deploy AI across the entire organization simultaneously - to leap from pilot to transformation in a single stride - almost always overshoots the mark. Better to identify the specific clusters of activity where AI adds genuine value with manageable interface costs, and to let the boundaries between human and machine work stabilize before extending the pattern further.
This incremental approach reflects what Dave Snowden has argued for decades in his work on complex systems: complex domains call for probe–sense–respond, not sense–analyze–respond.9 The right posture is safe-to-fail experimentation, not comprehensive advance planning. Rather than designing the entire AI integration architecture centrally, this approach identifies which coordination points are most malleable - where change can happen at the lowest organizational cost - and lets solutions emerge from small-scale experiments first. Snowden's work emphasizes continuous monitoring of what actually emerges from these experiments, then amplifying the patterns that work, rather than trying to anticipate all interface problems in advance. Organizations that attempt massive AI transformations fail because they try to solve the coordination problem theoretically first; the most adaptive instead manage it empirically, through monitored micro-experiments that feed into larger patterns.
The second implication is that interfaces deserve as much design attention as the capabilities themselves. This is where most organizations underinvest, especially with AI. The internal workings of an AI system tend to absorb enormous attention: model selection, training data, accuracy metrics, prompt engineering. But the spaces between the AI and the rest of the organization - the protocols for when a human overrides a machine recommendation, the feedback loops that keep an agent's outputs aligned with changing business context, the escalation paths when something goes wrong - are often left to emerge organically, which usually means they emerge badly.
The third implication is that the right level of AI integration changes over time and needs to be actively revisited. An architecture that made sense when you had three AI tools will be wrong when you have thirty. The technology industry has already learned this lesson the hard way with microservices, as companies that aggressively decomposed their software into hundreds of independently deployable services are now selectively reconsolidating. The same pattern will play out with AI agents: the pendulum will swing between too many specialized agents with too much coordination overhead, and too few general-purpose agents with too little precision.
And the fourth implication, perhaps the most important one, is that leaders need to develop a tolerance for the irreducible messiness of the trade-off. The complexity paradox means that you will always be managing some mixture of internal tangle and interface overhead, whether the actors in your system are human, machine, or both. There is no configuration in which both problems disappear. The skill is in reading the signals - the delays that indicate too much coupling, the duplication that indicates too much separation, the drift that indicates human and AI systems are quietly diverging - and adjusting continuously, without the fantasy that there is a final, stable architecture waiting to be found.
Simon's watchmaker Hora did not build a perfect watch. He built one that could survive interruptions. That is the more realistic ambition for leaders navigating AI integration: not to eliminate complexity, but to arrange it in a way that the system can absorb disturbance without collapsing. AI does not solve coordination complexity. It redistributes and accelerates it. AI does not change the underlying logic of the complexity paradox. It raises the stakes, accelerates the cycle, and demands that leaders engage with the trade-off at a pace and a level of nuance that most organizational cultures are not yet equipped for.
The organizations that will navigate this well are not the ones with the most sophisticated AI. They are the ones whose leaders understand that every new capability they introduce is also a new interface they must manage, and who have the patience and the intellectual honesty to keep adjusting the balance rather than pretending they have found the answer.
Dr Alexander Fruehmann is an executive coach, board advisor, and trained psychotherapist. He is a co-founder at Singularity.Inc, an AI strategy consultancy, based in Austria and Germany, and the founder of The Legal Minds Group.