The New Language of Workforce Reduction
Something shifted in how technology executives communicate layoffs. Block's Jack Dorsey and Atlassian's Mike Cannon-Brookes have both framed recent workforce reductions not as responses to economic headwinds but as deliberate organizational restructuring driven by AI capability absorption. Atlassian announced cuts affecting roughly 5% of its workforce earlier this year, with leadership explicitly linking the decision to AI tooling that now handles work previously requiring human labor. Block followed a similar rhetorical pattern. These are not downturn announcements. They are, as the framing increasingly suggests, declarations about what kinds of human competence organizations still need.
This framing deserves more scrutiny than it typically receives. The argument these executives are making is structural: AI has raised the productivity floor, which means headcount requirements fall even as output targets remain constant or increase. That argument may be partially correct. But it conceals a competence assumption that I think is poorly theorized in both corporate strategy and organizational scholarship.
The Competence Inversion Hidden in the Manifesto
Classical coordination theory, whether through Chandler's administrative hierarchy or Williamson's transaction cost framework, assumes that organizations hire workers whose competencies are known and stable prior to deployment. The organization acquires competence from the labor market. What the Block and Atlassian announcements implicitly claim is that this assumption now inverts: AI tools bring the competence, and human workers must adapt around them. The worker is no longer the primary competence-bearer; the platform is.
This is precisely the inversion that the Algorithmic Literacy Coordination framework is designed to analyze. Kellogg, Valentine, and Christin (2020) documented how algorithmic systems at work fundamentally alter the relationship between worker agency and organizational outcomes, often in ways that make traditional performance evaluation frameworks misleading. When Atlassian says AI is "doing the work," they are describing a system where competence is increasingly endogenous to the platform rather than imported through hiring. The remaining human workers are not simply doing less work; they are being asked to coordinate with a system whose operating logic is opaque to most of them.
The Awareness-Capability Gap in the Restructured Firm
Here is where the manifesto framing becomes analytically dangerous. Organizations that reduce headcount on the premise that AI absorbs former human functions are implicitly betting that their remaining workers can effectively direct, audit, and extend those AI functions. But the research on algorithmic literacy suggests this bet is poorly grounded. Goggins, Naab, and Grub (2024) found that media workers with sustained platform exposure develop awareness of algorithmic structures without corresponding gains in their ability to manipulate or redirect those structures. Awareness does not produce capability.
The workers who remain after an AI-justified restructuring face exactly this problem. They know AI is doing more. They can observe outputs. But identifying when outputs are wrong, when the system is operating outside its reliable range, or when human intervention would improve rather than degrade results is a different competence entirely. Hatano and Inagaki (1986) distinguished routine expertise, which is procedure-following under stable conditions, from adaptive expertise, which involves applying underlying principles to novel situations. AI-era restructuring demands adaptive expertise from workers while organizations continue to train and evaluate for routine expertise. The manifesto framing does not address this gap. It assumes the gap does not exist.
What the Elderly DoorDash Driver and the Atlassian Layoff Share
This week also produced a different kind of story: a 78-year-old man working as a DoorDash driver to cover his wife's medical bills, whose situation went viral and generated nearly $500,000 in crowdfunded support. The juxtaposition with the Atlassian and Block announcements is instructive, not sentimentally but structurally. Schor et al. (2020) documented how platform labor disproportionately absorbs workers who have been expelled from primary labor markets, including older workers for whom re-entry through conventional hiring is effectively closed. The AI manifesto layoffs and the gig economy's absorption of displaced workers are not separate phenomena. They are upstream and downstream of the same structural shift.
Rahman (2021) described this dynamic as the invisible cage: workers who depend on platforms lack the structural knowledge to contest, or even clearly perceive, the conditions of their dependence. The 78-year-old DoorDash driver is not failing to compete on a level playing field. He is operating under algorithmic dispatch logic that he cannot audit, in a labor market whose restructuring was narrated as progress by the same executives now writing AI manifestos about efficiency gains.
The Organizational Theory We Actually Need
What the Block and Atlassian announcements reveal is that organizations are making large structural bets about competence without an adequate theory of how that competence develops, transfers, or fails under AI-mediated conditions. The manifestos are confident. The theoretical grounding is thin. Hancock, Naaman, and Levy (2020) argued that AI-mediated communication introduces systematic asymmetries between the parties who design algorithmic systems and those who work within them. Corporate restructuring announcements framed as AI-era adaptation are a form of that communication, and the asymmetry of information between executive framing and worker experience is precisely where organizational theory needs to focus next.
References
Goggins, S., Naab, T., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society. https://doi.org/10.1177/14614448241229
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100. https://doi.org/10.1093/jcmc/zmz022
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410. https://doi.org/10.5465/annals.2018.0174
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988. https://doi.org/10.1177/00018392211010118
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861. https://doi.org/10.1007/s11186-020-09408-y
Roger Hunt