Sam Altman stated this week that even his own role as CEO is not safe from AI displacement, suggesting that artificial intelligence will soon perform the work of a chief executive better than, in his words, "certainly me." This is a remarkable claim, not because of its humility, but because of what it reveals about how organizational theorists should think about the relationship between institutional authority and epistemic competence. When the person at the top of a frontier AI organization publicly declares that the system he oversees will soon outperform him at his own job, we are no longer debating productivity augmentation. We are confronting a structural question about what organizations are actually coordinating when they coordinate human labor.
The Competence Horizon and Who Sets It
Classical organizational theory largely assumes that authority tracks competence, at least in functional terms. Weber's bureaucratic ideal, for instance, pairs formal position with technical expertise. What Altman's statement implies is a near-term future where that coupling breaks down entirely, not through incompetence in the traditional sense, but through systematic obsolescence. This is distinct from the familiar argument that AI will automate routine tasks. Altman is describing the automation of adaptive judgment, the kind of reasoning that Hatano and Inagaki (1986) associated with adaptive expertise and distinguished sharply from the procedural pattern-matching of routine expertise. If adaptive expertise is the next target, the theoretical scaffolding most organizational scholars rely on to explain authority and coordination needs revision.
The Awareness-Capability Gap Scales Upward
One pattern I return to repeatedly in my dissertation research is what I call the awareness-capability gap: the consistent finding that workers who develop awareness of algorithmic systems do not automatically improve their performance within those systems (Kellogg, Valentine, & Christin, 2020). Knowing that an algorithm governs your outcomes is not the same as knowing how to respond effectively to it. What Altman's comments introduce is a version of this gap operating at the executive level. Boards, investors, and regulators are developing awareness that AI can perform managerial work. The concurrent news that Anthropic just raised $30 billion at a $380 billion valuation is direct evidence of this awareness reaching institutional investors. But awareness of AI capability is not the same as organizational competence in deploying, governing, or even evaluating that capability. The capital is moving faster than the schemas required to use it well.
The "Everybody Has Potential" Collapse
A separate piece of reporting this week framed the current moment in tech labor as the end of the "everybody has potential" era, arguing that workers are about to get caught sleepwalking as the culture shifts beneath them. This framing, while culturally oriented, points to something theoretically significant. The variance puzzle I examine through the Algorithmic Literacy Coordination framework is that platform workers with identical access show dramatically different outcomes, a divergence driven by power-law amplification of small initial differences (Schor et al., 2020). What the current moment suggests is that this variance dynamic is migrating from the gig economy into knowledge work at large. The relevant question is no longer whether workers have access to AI tools, but whether they possess what Gentner (1983) would call structural schemas: accurate internal representations of how these systems actually work, rather than folk theories assembled from surface-level exposure.
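To make the amplification mechanism concrete, here is a minimal toy simulation. It is my own illustration, not Schor et al.'s model, and every parameter value in it is an assumption chosen purely for exposition: workers start with identical platform access and only slightly different levels of algorithmic literacy, but because per-period returns compound multiplicatively, small head starts spread into a long right tail of outcomes.

```python
# Toy sketch (illustrative assumptions, not Schor et al.'s analysis):
# identical access, slightly different literacy, multiplicative returns.
import random
import statistics

random.seed(42)

N_WORKERS = 10_000
N_PERIODS = 50

# Identical starting access (earnings = 1.0); literacy varies only slightly.
literacy = [random.gauss(0.50, 0.05) for _ in range(N_WORKERS)]
earnings = [1.0] * N_WORKERS

for _ in range(N_PERIODS):
    for i in range(N_WORKERS):
        # Per-period growth depends weakly on literacy plus noise.
        # Because growth compounds multiplicatively, small persistent
        # advantages amplify over time.
        growth = 1.0 + 0.30 * literacy[i] + random.gauss(0, 0.02)
        earnings[i] *= max(growth, 0.0)

earnings.sort()
share_top_1pct = sum(earnings[-(N_WORKERS // 100):]) / sum(earnings)

print(f"median outcome:        {statistics.median(earnings):,.0f}")
print(f"max / median ratio:    {max(earnings) / statistics.median(earnings):.1f}")
print(f"top 1% share of total: {share_top_1pct:.1%}")
```

The design choice doing the work here is the multiplicative update: if returns were additive, the near-identical starting conditions would stay near-identical, which is exactly why "equal access" arguments miss the variance the literature keeps finding.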
What This Means for Organizational Governance
The governance implication is direct. If Altman is correct, even approximately, then organizations face a structural problem that no amount of procedural documentation can solve. Procedure-based training, which tells workers what to do when a known situation arises, produces precisely the form of expertise that fails in novel contexts (Hatano & Inagaki, 1986). Organizations investing in procedural AI training are building routines that will be invalidated by the next model release cycle. What they need instead are governance structures oriented around schema induction: training people to understand the structural logic of AI systems, not just their current surface behaviors. Rahman (2021) describes how algorithmic management creates an invisible cage that constrains worker autonomy through opacity. The answer to that opacity is not more procedures. It is more accurate structural understanding.
The Theoretical Stakes
Altman's statement is easy to read as a performance of modesty or a marketing signal for OpenAI's roadmap. I think that reading is too convenient. The more productive interpretation is that he has identified a genuine organizational problem that his own company has not solved: the question of how human authority structures remain coherent when the epistemic foundations that justify them are systematically eroded. That is not a technology question. It is a coordination theory question, and the field does not yet have a satisfying answer.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5), 833-861.
Roger Hunt