The Structural Claim Being Made Right Now
Consulting firms are currently predicting what they call a "Great Flattening" of the corporate management structure, driven by the rapid deployment of AI agents into the workforce. The argument, circulating in the business press this week, is straightforward: as AI agents absorb coordination and supervisory tasks previously handled by middle managers, the rationale for those managerial layers dissolves. Organizations flatten. Spans of control widen. The org chart compresses.
This is a structural argument. It is also, I think, an incomplete one. The prediction treats AI agent integration as a redistribution of task load, when the more consequential problem is a redistribution of competence requirements. Those are not the same thing.
What Flattening Actually Changes
Middle management in classical organizational theory performs two distinct functions that are frequently collapsed into one. The first is coordination, the routing of information and decisions across organizational units. The second is translation, the rendering of ambiguous strategic intent into actionable operational behavior. When organizations flatten, they generally assume AI agents can absorb the coordination function. What they rarely account for is what happens to the translation function.
This distinction maps directly onto what Hatano and Inagaki (1986) identified as the difference between routine expertise and adaptive expertise. Routine expertise handles coordination well. It follows established procedures and scales efficiently. Adaptive expertise handles translation, the capacity to recognize when established procedures do not apply and to construct new responses. The "Great Flattening" prediction assumes AI agents are replacing routine coordinators. What it ignores is that many middle managers were, in practice, performing adaptive work. Removing that layer does not eliminate the need for adaptive expertise. It relocates the burden to whoever remains.
The Awareness-Capability Gap at the Organizational Level
The KPMG survey of 100 large-company CEOs, also circulating this week, reports that executives are spending heavily on AI this year despite acknowledging bubble concerns. What is notable in the survey framing is that cybersecurity ranked as a top concern, not workforce capability gaps. This ordering of priorities reveals something important: organizations are treating AI deployment primarily as a technical governance problem rather than a competence coordination problem.
Kellogg, Valentine, and Christin (2020) documented a consistent pattern in algorithmically-mediated work environments: workers who are aware that algorithmic systems govern their outcomes do not automatically develop the behavioral competencies needed to work effectively with those systems. Awareness does not produce capability. The same logic applies at the organizational level. CEOs who are aware that AI agents are restructuring their organizations do not thereby possess accurate structural schemas for how that restructuring changes competence requirements at every level.
This is the awareness-capability gap operating at the firm level rather than the individual level. It is, if anything, more consequential at that scale because the misdiagnosis gets institutionalized in hiring plans, training budgets, and reporting structures before anyone has tested whether the underlying assumptions hold.
Flattening as Folk Theory
The "Great Flattening" prediction has the characteristics of what I would call an organizational folk theory, an impression of structural dynamics based on surface-level pattern recognition rather than accurate structural understanding. It observes that AI agents can coordinate tasks and concludes that coordination layers become redundant. That inference feels sound because it has a plausible causal story attached to it.
Gentner's (1983) structure-mapping theory is useful here. Folk theories map objects to objects: AI agent replaces coordinator, therefore coordinator role disappears. Schema-level understanding maps relational structures to relational structures: the coordination function exists within a system of interdependent competence requirements, and removing one node changes the load distribution across all remaining nodes, including nodes not yet identified as load-bearing.
Organizations running the Great Flattening playbook are operating from the object-mapping version. The relational version would ask: when AI agents absorb coordination load, which remaining human roles absorb increased adaptive load, and do those roles currently have the capacity to carry it? That question is not appearing prominently in the coverage I have seen this week.
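To make the relational version of the question concrete, here is a deliberately toy sketch in Python. Every role name, every number, and the even-split assumption are invented for illustration only; nothing here describes a real organization or any firm's actual playbook. The only point it encodes is the one above: when removed managers' coordination load goes to an AI agent, their adaptive load still has to land somewhere, and the overruns show up in roles nobody flagged as load-bearing.

```python
# Toy model of the "Great Flattening" question, not a claim about any real org.
# Roles carry both coordination load and adaptive (translation) load.
# Flattening hands removed managers' coordination load to an AI agent;
# their adaptive load is assumed (simplistically) to split evenly across
# the remaining human roles.

from dataclasses import dataclass


@dataclass
class Role:
    name: str
    coordination: float  # hours/week of routing, scheduling, status work
    adaptive: float      # hours/week of judgment, translation, exceptions
    capacity: float      # total hours/week available


# Hypothetical numbers chosen only to make the structural point visible.
org = [
    Role("director",         coordination=10, adaptive=22, capacity=40),
    Role("middle_manager_1",  coordination=25, adaptive=15, capacity=40),
    Role("middle_manager_2",  coordination=25, adaptive=15, capacity=40),
    Role("senior_ic_1",       coordination=5,  adaptive=12, capacity=40),
    Role("senior_ic_2",       coordination=5,  adaptive=12, capacity=40),
]


def flatten(roles, removed_names):
    """Remove the named roles; an AI agent absorbs their coordination load,
    and their adaptive load is split evenly across the remaining humans."""
    removed = [r for r in roles if r.name in removed_names]
    remaining = [r for r in roles if r.name not in removed_names]
    orphaned_adaptive = sum(r.adaptive for r in removed)
    share = orphaned_adaptive / len(remaining)
    return [
        Role(r.name, r.coordination, r.adaptive + share, r.capacity)
        for r in remaining
    ]


after = flatten(org, {"middle_manager_1", "middle_manager_2"})
for r in after:
    total = r.coordination + r.adaptive
    status = "OVER capacity" if total > r.capacity else "ok"
    print(f"{r.name:15} load={total:5.1f}h  capacity={r.capacity:.0f}h  {status}")
```

Under these made-up numbers the senior ICs stay comfortably within capacity while the director quietly tips over it, which is exactly the kind of result a flatter org chart does not show: the redistribution of adaptive load is invisible until you model it, and the even-split assumption here is itself generous compared with how unevenly that work tends to land in practice.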
What the Prediction Gets Wrong About Org Design
Rahman's (2021) analysis of platform-mediated work in Administrative Science Quarterly introduced the concept of the invisible cage, the way algorithmic systems constrain worker behavior through opacity rather than explicit rule-setting. The organizational parallel for AI agent deployment is that the new structure will not announce its competence requirements explicitly. Workers remaining after flattening will discover them through performance failures, which is an expensive and slow way to map structural constraints.
If organizations are genuinely entering a period of rapid AI agent integration, the relevant organizational design question is not how many management layers to remove. It is how to build schema-level understanding of AI-mediated coordination into the roles that survive the restructuring. That is a harder problem than drawing a flatter org chart, and it is the one currently getting the least attention.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt