The Procurement-Authority Gap
A recent commentary published through federal technology channels identified a pattern that deserves more analytical attention than it has received: the U.S. government is acquiring AI systems at a pace that consistently outstrips its capacity to assign clear decision-making authority over them. The core observation is straightforward: agencies are deploying AI before they can answer the basic pre-deployment question of who is accountable when a system produces a consequential output. Governance, in other words, is being treated as a post-failure activity rather than a precondition for deployment.
This sequencing problem is not merely an administrative oversight. It reflects something structurally important about how large organizations relate to algorithmic systems, and it connects directly to questions I work with in my dissertation research on platform coordination and algorithmic literacy.
Competence Is Not Assumed; It Is Produced
The standard critique of federal AI adoption focuses on procurement speed or vendor accountability. Those are legitimate concerns. But the more theoretically interesting problem is that agencies are acquiring systems whose operational logic they do not yet have the internal competence to govern. This is not a criticism of individual agency staff. It is a structural observation about what Kellogg, Valentine, and Christin (2020) describe as the organizational embedding of algorithmic systems: when algorithms are introduced into work settings, they restructure the competence requirements of that setting without necessarily transferring the competencies needed to meet those new requirements.
The result is an authority gap that is simultaneously a literacy gap. Assigning governance authority is not meaningful if the person holding that authority lacks a structural understanding of how the system produces its outputs. A chain-of-command designation on an organizational chart does not constitute algorithmic governance. It constitutes the appearance of governance.
Why Procedural Solutions Will Not Close the Gap
The instinctive organizational response to the problem the commentary identifies will likely be procedural: new checklists, new approval workflows, new documentation requirements before deployment. These responses are predictable because they are organizationally legible. They produce artifacts that can be audited. They distribute responsibility in ways that are defensible after the fact.
But Hatano and Inagaki's (1986) distinction between routine and adaptive expertise is directly relevant here. Procedural governance documentation produces routine expertise at best. It tells an official what steps to follow under anticipated conditions. It does not produce the adaptive capacity needed to recognize when an AI system is operating outside the conditions its documentation assumed. The awareness-capability gap that I track in platform worker research applies with equal force to public sector AI governance: knowing that an AI system exists and has been approved does not translate into knowing how to evaluate whether it is behaving appropriately in a novel context.
This is why the commentary's framing - that governance will become credible when agencies can answer a simple question before deployment - is correct in direction but potentially underspecified in mechanism. The question is not only whether someone can answer the accountability question. It is whether the person answering it possesses genuine structural understanding of the system's decision logic, or whether they are producing a plausible procedural response that satisfies an audit requirement without constituting actual oversight.
The Schema Problem in Public Sector AI
What federal AI governance actually requires is something closer to schema induction at the organizational level: the development of accurate structural models of how AI systems constrain and shape the decisions attributed to human agents. Rahman (2021) describes how algorithmic systems function as invisible cages, structuring worker behavior through constraints that are difficult to perceive and even more difficult to contest. In a private platform economy context, this produces labor precarity. In a federal governance context, it produces accountability ambiguity - outputs that are nominally authorized by a human official but substantively generated by a system that the official cannot adequately interrogate.
Hancock, Naaman, and Levy (2020) introduced the concept of AI-mediated communication to describe how AI systems alter the content and consequences of human communication without that alteration being transparent to participants. Federal AI deployment operates on the same logic. The agency signs the output. The algorithm produced it. The gap between those two facts is where governance credibility is won or lost.
What This Means for Organizational Theory
The federal AI procurement story is, at a structural level, a case study in what happens when organizations treat algorithmic systems as tools to be acquired rather than environments to be learned. Classical coordination theory assumes that authority structures precede and enable competent action. Platform coordination, as I argue in my ALC framework, inverts this: competence develops endogenously through participation, and authority without that endogenous competence is formal rather than functional. The federal government is currently demonstrating the public sector version of that inversion in real time, and the governance credibility problem the commentary identifies will not resolve through procurement reform alone. It will require a different theory of what organizational competence with AI systems actually means.
Roger Hunt