The Governance Gap That IT Cannot Close
A pattern reported this week captures something organizationally significant: employees are quietly using AI tools like Claude without informing their IT departments. The phenomenon has been labeled "shadow AI," borrowing from the older "shadow IT" concept, but the organizational dynamics at play are meaningfully different. Shadow IT typically involved employees procuring unauthorized software to complete defined tasks more efficiently. Shadow AI involves employees developing new task-completion strategies that their organizations have not sanctioned, do not understand, and cannot easily monitor. The distinction matters for how we theorize the governance response.
Why Standard Authorization Frameworks Fail Here
The conventional enterprise response to unauthorized tool use is access control: detect the tool, block the endpoint, write a policy. This procedural logic works reasonably well when the threat is a specific application with a discrete function. It works poorly when the capability is conversational, generative, and embedded in ordinary communication workflows. Employees using Claude to draft documents, synthesize research, or debug code are not using a shadow system that sits beside their authorized stack. They are modifying the cognitive structure of their normal work. Organizations that respond by adding an approved AI vendor to the stack have not solved the governance problem; they have merely formalized one instance of it while leaving the underlying dynamic intact.
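To make that procedural logic concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the hostname, the allowlist, and the egress_allowed function stand in for whatever a real egress proxy or vendor-approval policy enforces, not any specific product.

```python
# Minimal sketch of the classic access-control response to unauthorized tools.
# The endpoint name and function are illustrative assumptions, not a real product's API.

APPROVED_AI_ENDPOINTS = {"approved-vendor.example.com"}

def egress_allowed(destination_host: str) -> bool:
    """Permit outbound traffic only to sanctioned AI endpoints."""
    return destination_host in APPROVED_AI_ENDPOINTS
```

The check answers exactly one question: which tool was reached. It says nothing about what the employee asked the tool to do, what came back, or where the output went, and that is the part of the workflow that shadow AI actually changes.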
This maps directly onto what Hancock, Naaman, and Levy (2020) identified as the challenge of AI-mediated communication: when AI is embedded in communication processes, the boundary between human judgment and algorithmic output becomes structurally ambiguous. The governance problem is not access but attribution. Organizations cannot audit what they cannot distinguish.
The Competence Inversion That IT Did Not Anticipate
Classical IT governance assumes that the organization holds superior competence and employees represent the risk surface. A policy is written by people who understand the system; compliance distributes that understanding downward. Shadow AI inverts this. The employees using Claude without authorization frequently understand the capability better than the compliance teams writing policies about it. They have developed working mental models of what the tool can and cannot do, tested those models against real tasks, and iterated. The policy, by contrast, is often written by people whose exposure to the tool is theoretical.
This is the competence inversion problem that my ALC framework takes seriously. Kellogg, Valentine, and Christin (2020) document a related dynamic in platform work: algorithms restructure task performance in ways that outpace organizational understanding of those tasks. The workers closest to the algorithm develop adaptive strategies that management cannot see and therefore cannot govern. Shadow AI reproduces this inside the enterprise, but with a twist: the "platform workers" are knowledge professionals whose autonomy is far greater and whose output is far harder to audit than, say, a gig driver's route choices.
Folk Theories Versus Structural Schemas in Compliance Design
The practical response most organizations are pursuing is AI literacy training paired with an approved-vendor policy. Both interventions target awareness rather than structural understanding. Employees learn that AI exists, that it poses risks, and that there is an approved list of tools. What they do not learn is why certain structural properties of large language models create specific categories of organizational risk. This is the distinction between folk theories and schemas that Gagrain, Naab, and Grub (2024) draw in the context of algorithmic media: awareness of an algorithm's existence does not produce accurate mental models of how it functions.
An employee who completes a compliance training module knows that AI can "hallucinate" and that data should not be uploaded to unauthorized tools. That is procedural knowledge, and it does not transfer to novel contexts. Consider an AI agent that queries external APIs on the employee's behalf: precisely the agentic threat surface that security researchers at Mindgard have been flagging this week as the real enterprise vulnerability. Aaron Portnoy's argument that authority, not access, is the primary risk in agentic AI systems is structurally consistent with this point: the threat moves faster than the policy because the policy was designed around the wrong mental model of what AI does.
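The access-versus-authority distinction is easier to see in code. The sketch below is a hypothetical illustration: the names (ToolCall, AuthorityScope, gate_tool_call) and the specific checks are my assumptions, not Mindgard's architecture or Portnoy's proposal. The first check is the one most compliance policies encode; the last two are where the agentic risk actually lives.

```python
# Hypothetical sketch contrasting an access check with an authority check
# for an agentic tool call. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str                      # which system the agent wants to reach
    action: str                    # what it wants to do there
    params: dict = field(default_factory=dict)

@dataclass
class AuthorityScope:
    """What the user actually delegated, as opposed to what the agent can reach."""
    allowed_actions: set           # actions the user sanctioned for this session
    max_records: int = 100        # blast-radius cap on any single call

APPROVED_TOOLS = {"crm_api", "web_search"}   # the access-control layer

def gate_tool_call(call: ToolCall, scope: AuthorityScope) -> bool:
    # Access: is the tool on the approved list at all? (What most policies check.)
    if call.tool not in APPROVED_TOOLS:
        return False
    # Authority: is this specific action within what the user delegated?
    if call.action not in scope.allowed_actions:
        return False
    # Authority: does the call stay inside the blast-radius limit?
    if call.params.get("record_count", 0) > scope.max_records:
        return False
    return True

# Reading one contact passes; a bulk export with identical access fails.
scope = AuthorityScope(allowed_actions={"read_contact"})
print(gate_tool_call(ToolCall("crm_api", "read_contact", {"record_count": 1}), scope))    # True
print(gate_tool_call(ToolCall("crm_api", "bulk_export", {"record_count": 5000}), scope))  # False
```

Note that both calls traverse the same approved endpoint. Only the authority layer distinguishes them, which is why endpoint blocking alone cannot govern agentic use.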
What Governance Frameworks Actually Need
Effective enterprise AI governance requires that compliance designers and employees alike develop accurate structural schemas, not folk theories about hallucination and data leakage. That means understanding the agentic execution model, the authority delegation problem Portnoy describes, and the ways that AI tools restructure cognitive work rather than merely accelerating it. Hatano and Inagaki's (1986) distinction between routine and adaptive expertise applies directly: routine expertise executes known procedures efficiently in a stable environment, while adaptive expertise reconstructs those procedures when the environment shifts. Organizations are investing in routine expertise: follow the approved list, flag the exception. What the shadow AI pattern tells us is that the environment is changing faster than procedural compliance can track. The organizations that close this gap will be the ones that invest in building structural understanding, not just behavioral rules. The ones that do not will continue to discover, after the fact, what their employees have been doing quietly all along.
Roger Hunt