The Atlassian Diagnosis and Why It Understates the Problem
Atlassian's recent analysis of AI adoption in the workplace identifies what it calls a "speed paradox": AI accelerates individual execution while misalignment quietly erodes shared organizational outcomes. The piece is competent as far as corporate thought leadership goes. But it stops precisely where the interesting theoretical question begins. The problem it describes is not a coordination failure in the classical sense. It is a schema failure, and the distinction matters for how organizations should respond.
The Atlassian framing treats the problem as one of misalignment between fast-moving individuals and slower organizational processes. This is a reasonable surface-level observation. But it implicitly assumes that individuals who are moving faster are actually moving in productive directions. My concern is that speed without structural understanding is not an asset; it is an amplified liability. When workers gain access to AI tools that accelerate execution, they do not automatically gain understanding of how those tools shape the epistemic environment they are operating in. They gain throughput. Those are not the same thing.
The Awareness-Capability Gap at the Organizational Level
The research on algorithmic literacy has consistently documented what I call the awareness-capability gap at the individual level: workers develop awareness that algorithms govern their outcomes, but this awareness does not translate into improved performance (Kellogg, Valentine, & Christin, 2020). What Atlassian is describing is the organizational-level analog of exactly this phenomenon. Organizations are becoming aware that AI is changing individual work patterns. That awareness is not producing better coordination. It is producing faster divergence.
The reason, I would argue, is that most organizational responses to AI adoption are procedural rather than schematic. Managers are writing new SOPs, updating workflow documentation, and issuing guidance on which tools to use for which tasks. This is the organizational equivalent of what Hatano and Inagaki (1986) call routine expertise: it prepares workers for anticipated situations while leaving them unprepared for structural novelty. When the AI tool changes its behavior, or when a new tool displaces the old one, the procedure becomes obsolete and the underlying coordination problem resurfaces in a different form.
What Schema Induction Would Actually Look Like Here
The alternative is not more documentation. It is developing what Gentner (1983) calls structure mapping: helping organizational members understand the relational features of AI-mediated work that remain stable across different tools and contexts. The question is not "how do I use this specific AI tool to complete this specific task?" but rather "what does it mean for collective sense-making when individual output velocity decouples from shared epistemic ground?" Organizations that build that kind of structural understanding should, in theory, generalize better when their tool stack changes. Those that build only procedural fluency with specific tools will not.
Hancock, Naaman, and Levy (2020) identified a parallel dynamic in AI-mediated communication more broadly: the introduction of machine agency into human interaction changes not just the speed of communication but its epistemic character. When AI drafts the message, the communicative act carries different informational properties than when a human drafts it. Organizations treating AI adoption as a throughput problem are ignoring this epistemic dimension entirely.
The Measurement Gap Is the Real Story
What strikes me most about the Atlassian piece is not what it says but what it cannot measure. The "misalignment" it identifies is inferred from aggregate outcomes. Nobody in that analysis is measuring whether organizational members have developed accurate structural schemas about how their AI tools work, how those tools shape the information environment, and how individual AI-assisted outputs interact with collective decision-making processes. The measurement apparatus does not exist at most organizations, which means the interventions being designed in response to the speed paradox are themselves operating without feedback.
This is a tractable empirical problem, not a philosophical one. The field needs instruments that distinguish between workers who have procedural familiarity with AI tools and workers who have developed genuine structural schemas of AI-mediated coordination. Gagrčin, Naab, and Grub (2024) have moved in this direction in the context of algorithmic media use, but organizational settings present different coordination demands than media consumption does. The variance in organizational outcomes that Atlassian is gesturing at is almost certainly not uniformly distributed: some teams are solving the coordination problem, and most are not. Understanding why requires getting inside the schema question, not just measuring output velocity.
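To make the distinction concrete, here is a minimal sketch of how such an instrument's scoring might work. Everything in it is an illustrative assumption of mine, not a validated instrument and not anything proposed in the Atlassian analysis: the idea is simply to score tool-specific ("procedural") items separately from transfer ("structural") items posed about an unfamiliar tool with the same relational structure, and to treat the gap between the two as the quantity of interest.

```python
"""
Hypothetical sketch: separating procedural familiarity from structural
schema in an assessment of AI-mediated work. Item types, example items,
and the gap metric are illustrative assumptions, not a real instrument.
"""

from dataclasses import dataclass
from statistics import mean


@dataclass
class Response:
    item_id: str
    kind: str      # "procedural" (tool-specific steps) or "structural" (transfer)
    correct: bool


def schema_profile(responses):
    """Return per-kind accuracy and the procedural-structural gap."""
    by_kind = {"procedural": [], "structural": []}
    for r in responses:
        by_kind[r.kind].append(1.0 if r.correct else 0.0)
    proc = mean(by_kind["procedural"]) if by_kind["procedural"] else float("nan")
    struct = mean(by_kind["structural"]) if by_kind["structural"] else float("nan")
    return {"procedural": proc, "structural": struct, "gap": proc - struct}


# Example: a worker who can operate today's tool but fails transfer items
# posed about a novel tool sharing the same relational structure.
worker = [
    Response("p1", "procedural", True),   # e.g., "Which command exports the draft?"
    Response("p2", "procedural", True),
    Response("s1", "structural", False),  # e.g., "If the model's behavior shifts,
                                          #  which downstream outputs go stale?"
    Response("s2", "structural", False),
]
print(schema_profile(worker))  # {'procedural': 1.0, 'structural': 0.0, 'gap': 1.0}
```

On this toy scoring, a large positive gap flags exactly the profile the essay is concerned with: high throughput with a specific tool, no schema that survives a change in the tool stack. A real instrument would obviously need validated items and psychometric work; the sketch only fixes the shape of the measurement.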
The speed paradox is real. But speed is not the variable that needs managing. Structural comprehension is.
Roger Hunt