The Competition Frame and Its Structural Blind Spot
A recent piece circulating in business media makes a claim that sounds intuitive on its surface: AI is not replacing workers outright; it is forcing them to compete more intensely against one another for the remaining positions that require demonstrably "human" value. The framing is sympathetic, even accurate in a narrow empirical sense. But it stops exactly where the analysis needs to begin. The competition frame assumes that workers possess, or can readily acquire, the competencies required to win that competition. The research suggests otherwise, and the gap between those two assumptions is where the real organizational story lives.
Awareness Without Capability: The Known Problem That Organizations Keep Ignoring
The workers described in this narrative are not passive. They are scrambling, upskilling, and positioning. What the coverage documents, without fully theorizing, is a workforce exhibiting high algorithmic awareness and persistently poor algorithmic capability. This is precisely the pattern algorithmic literacy research has catalogued. Gagrain, Naab, and Grub (2024) distinguish between awareness of algorithmic systems and the functional capacity to adapt behavior based on that awareness. The distinction matters because organizations, and workers themselves, tend to conflate the two. Completing an AI literacy module, reading about prompt engineering, and understanding that large language models exist do not, by themselves, produce improved performance in AI-mediated work environments.
Kellogg, Valentine, and Christin (2020) make a related point about workers in algorithmically governed workplaces: awareness of algorithmic constraints does not translate into effective navigation of those constraints. Workers develop what I would call folk theories (informal, individual-level impressions of how the system works) rather than accurate structural schemas. Folk theories produce locally adaptive behavior that fails when the underlying system changes, which, given current AI deployment timelines, means they fail frequently.
The Competition Accelerates the Wrong Kind of Learning
Here is the counterintuitive organizational problem embedded in this news story. Competitive pressure, by design, incentivizes fast, specific, procedural adaptation. Workers under threat of displacement do not have the luxury of developing a deep structural understanding of AI systems. They adopt the fastest available heuristic: imitate whoever appears to be winning, acquire the credential most recently cited in a job posting, or optimize for the visible metric their employer has named as the proxy for AI competence. This is exactly the pattern Hatano and Inagaki (1986) described as routine expertise: procedural fluency that performs well in stable, predictable conditions and degrades precisely when conditions shift.
The workers described as "scrambling" in this coverage are being pushed by institutional incentives toward routine expertise at the moment when the environment most demands adaptive expertise. The competition frame accelerates this dynamic by shortening time horizons and increasing the perceived cost of investing in structural understanding that does not produce immediate, legible results.
What Organizations Are Actually Coordinating
The deeper organizational question here is about coordination failure, not individual competence. Schor et al. (2020) documented how platform-based and algorithmically mediated work environments produce dependence and precarity in part because the coordination mechanisms governing performance are opaque to the workers being coordinated. When the rules of the competition are structurally unclear, workers cannot invest rationally in the competencies that would actually improve their position. They invest instead in the competencies that are most visible and most easily credentialed, which are frequently not the same thing.
Sundar (2020) introduces the concept of machine agency as a variable that reshapes human behavior in AI-mediated environments. When workers perceive AI as an agent with evaluative authority over their continued employment, the behavioral response is not curiosity or structural inquiry. It is performance directed at the perceived audience of that agent, which is usually the employer, not the system itself. This produces a second-order coordination problem: workers are not learning to work with AI systems; they are learning to look like they work well with AI systems, a distinction that matters enormously for organizational capability development.
The Structural Prediction
If the competition frame continues to dominate how organizations narrate AI integration, and if institutional incentives continue to reward procedural credentialing over structural schema development, the workforce displacement story will not resolve in the direction most coverage implies. The workers who survive will not be the ones who competed hardest under the existing rules. They will be the ones who developed accurate mental models of how these systems coordinate behavior, which is a form of learning that competitive pressure actively discourages. That is not a prediction about technology. It is a prediction about how organizations systematically invest in the wrong kind of expertise at exactly the moment when the stakes for getting it right are highest.
References
Gagrain, A., Naab, T., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5), 833-861.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt