What Consulting Firms Are Actually Predicting
Consulting firms are circulating a specific prediction right now: as AI agents enter the workforce at scale, the corporate management hierarchy will flatten. The argument, reported this week in business press coverage of organizational restructuring trends, is that AI agents can absorb the coordination work that middle managers historically performed. If agents can synthesize information, route tasks, and monitor outputs, the logic goes, then the layers of management built to perform those functions become redundant. This is being called the "Great Flattening." It is a provocative label, and it captures something real. But the framing misidentifies where the actual coordination problem is located.
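To make the prediction concrete, here is a minimal sketch of the coordination loop it imagines an agent absorbing: synthesize status information upward, route tasks, monitor outputs. Everything here is hypothetical illustration (the names CoordinationAgent, synthesize, route, and monitor are mine, not any vendor's API):

```python
# Hypothetical sketch of the coordination work the flattening prediction
# assigns to AI agents. Illustrative only; no real agent framework implied.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    required_skill: str

@dataclass
class Worker:
    name: str
    skills: set

class CoordinationAgent:
    def __init__(self, workers):
        self.workers = workers

    def synthesize(self, status_reports):
        # Aggregate local reports into an upward summary: pass only exceptions.
        return {name: report for name, report in status_reports.items()
                if report != "on track"}

    def route(self, task):
        # Assign the task to the first worker whose skills match.
        for worker in self.workers:
            if task.required_skill in worker.skills:
                return worker
        return None  # No match. In a hierarchy, a manager improvises here.

    def monitor(self, output, expected_type):
        # Flag outputs that deviate from the expected shape.
        return isinstance(output, expected_type)
```

The routine parts of that loop are easy to mechanize. The `None` branch in `route` marks where the argument below bites: the procedure has no answer when a task falls outside its mapping, and that branch is where the adaptive function of middle management has historically lived.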
What Middle Management Actually Does
The standard economic account of managerial hierarchy treats managers as information processors. They aggregate local knowledge, translate it upward, and push directives downward. If AI agents can do that processing faster and more cheaply, the hierarchy shrinks. This account is not wrong, but it is incomplete. Middle managers also perform a function that information-processing models systematically underweight: they carry and transmit the interpretive schemas that allow workers to respond adaptively when procedures fail. This distinction matters for evaluating the flattening prediction. Removing a node that processes information is different from removing a node that maintains adaptive competence in others.
Hatano and Inagaki (1986) drew a clean line between routine expertise and adaptive expertise. Routine expertise is procedural: it produces fast, reliable performance on familiar problems. Adaptive expertise produces flexible performance on novel problems, because the practitioner understands the structural principles behind the procedure, not just the procedure itself. Middle managers, at their best, are not information relays. They are the organizational layer that translates structural principles into context-specific guidance when the situation changes. AI agents, as currently deployed, are very good at the routine expertise side of that equation. The adaptive side is a different and largely unsolved problem.
The Coordination Gap That Flattening Creates
This matters because organizational flattening is not new. Waves of delayering in the 1990s and 2000s produced documented coordination failures that were later attributed to the loss of exactly this adaptive translation function. What is new in the current wave is the speed and the justificatory narrative. As the business press has also noted this week, some executives are pursuing layoffs specifically to signal AI-savviness to investors. This conflates two distinct decisions: substituting AI agents for genuinely routine work, and reducing headcount as a performance for capital markets. When those two decisions are bundled, the organizational costs of the latter get attributed to the logic of the former, making post-hoc analysis much harder.
Viewed at the level of the human-agent interface, the flattening prediction also underweights a specific variance problem. Kellogg, Valentine, and Christin (2020) documented how algorithmically mediated work environments produce dramatically unequal outcomes even among workers with identical formal access to the same tools. The mechanism is not differences in natural ability. It is differences in the structural schemas workers bring to the environment. If AI agents are absorbing the coordination layer, the workers who remain need more sophisticated schemas for interacting with those agents, not simpler ones. Flattening the hierarchy while increasing the complexity of the human-agent interface is a coordination problem that the "Great Flattening" framing does not address.
The Schema Problem Is the Actual Constraint
Gentner's (1983) structure-mapping theory offers a useful reframe here. Transfer of competence across novel situations depends on structural alignment between the new context and prior experience. Workers who understand why an AI agent is making a particular routing or prioritization decision, not just what that decision is, will be far better positioned to catch failures and adapt. Workers who have only procedural knowledge of how to submit inputs to an agent will fail silently when the agent operates outside its trained distribution. This is the awareness-capability gap that algorithmic literacy research has consistently documented: knowing that an algorithm governs a process does not translate into knowing how to respond when that algorithm produces a wrong or unexpected output (Gagrain, Naab, and Grub, 2024).
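The difference between procedural and structural knowledge of an agent can be made concrete with a sketch. Assume a hypothetical ticket-prioritization agent (none of these names reflect a real product): the first interface returns only a decision and fails silently out of distribution; the second exposes its rationale and a recognition flag that a worker with a structural schema can act on.

```python
# Hypothetical contrast between an opaque agent interface and one that
# exposes its reasoning. Illustrative only; not a real framework or API.
from dataclasses import dataclass

@dataclass
class Decision:
    priority: int          # what the agent decided
    rationale: str         # why: which structural rule fired
    in_distribution: bool  # whether the input resembled known categories

def opaque_prioritize(ticket: dict) -> int:
    """Procedural interface: submit an input, receive a number.
    The worker has nothing to check the output against."""
    return 1 if "outage" in ticket.get("text", "").lower() else 3

def transparent_prioritize(ticket: dict, known_categories: set) -> Decision:
    """An interface that supports structural alignment: the worker can see
    which rule fired and whether the input was recognized at all."""
    text = ticket.get("text", "").lower()
    recognized = any(cat in text for cat in known_categories)
    if "outage" in text:
        return Decision(1, "matched rule: service-impacting keyword", recognized)
    return Decision(3, "default: no urgency keyword matched", recognized)

# A novel ticket type falls outside the agent's categories.
ticket = {"text": "Regulator demands a formal response within 24 hours"}
print(opaque_prioritize(ticket))  # -> 3, silently wrong
decision = transparent_prioritize(ticket, known_categories={"outage", "billing"})
print(decision)  # priority=3, in_distribution=False: a worker who understands
                 # that flag can escalate instead of trusting the default
```

Neither interface is smarter than the other. The second simply makes the failure visible to someone with the schema to read it, and building that schema in the remaining workforce is exactly the investment the flattening narrative leaves out.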
The consulting firms predicting the Great Flattening are describing a real organizational shift. The coordination work that justified certain managerial layers is genuinely changing as AI agents absorb routine synthesis and routing tasks. But the prediction treats the coordination problem as solved once the agents are deployed. It is not. It is relocated. The question is whether organizations building flatter structures are investing in the schema development that the remaining workforce needs to maintain adaptive performance, or whether they are simply removing one coordination layer without accounting for what replaces it. The evidence from prior delayering episodes, and from algorithmic work research, suggests the latter outcome is significantly more common.
References
Gagrain, A., Naab, T., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Roger Hunt