The Specific Problem
A model recently went public after discovering that a fashion brand had used AI-generated images replicating her likeness in a commercial advertisement without her consent. The response from commentators ranged from outrage to pragmatic legal advice, but the structural issue beneath this incident deserves closer attention. This is not primarily a story about one bad actor in the fashion industry. It is a story about what happens when the production infrastructure for identity representation becomes algorithmic, cheap, and organizationally unaccountable.
The Competence Inversion in AI-Mediated Commercial Identity
Classical contract law and intellectual property frameworks assume that the party seeking to use someone's likeness possesses at minimum a working knowledge of what that use entails. The assumption is one of ex-ante competence: you know what you are doing before you do it. The fashion brand's conduct reveals the reverse, something I would call competence inversion. Generative AI tools lower the technical barrier to producing convincing likeness-derived images so dramatically that organizations can now produce and deploy those images before developing any meaningful understanding of the legal, ethical, or relational consequences. The technical capability outpaces the organizational schema for what that capability means.
This inversion is not incidental. It is structural. Kellogg, Valentine, and Christin (2020) document how algorithmic systems at work routinely generate outputs whose logic is opaque even to the people deploying them. The fashion brand in this case likely did not architect a deliberate theft. More probably, someone used a tool, the tool produced a plausible output, and no organizational process existed to ask whether plausible also meant permissible. The absence of that process is the finding worth examining.
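To make the missing checkpoint concrete, here is a minimal sketch of what such a gate could look like inside a content pipeline. Everything in it is hypothetical (the CampaignAsset fields, the review questions, the sign-off field); the point is only that permissibility becomes an explicit, recorded decision with a named owner rather than an implicit assumption.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CampaignAsset:
        """One AI-generated image proposed for commercial use (hypothetical schema)."""
        asset_id: str
        tool_used: str
        consent_on_file: bool = False      # consent documented for any depicted likeness?
        provenance_reviewed: bool = False  # tool and source provenance checked?
        legal_sign_off: Optional[str] = None  # named person who approved deployment

    def permissible_to_deploy(asset: CampaignAsset) -> bool:
        # A plausible output is not enough: every question must have an
        # explicit, attributable answer before the asset ships.
        return (asset.consent_on_file
                and asset.provenance_reviewed
                and asset.legal_sign_off is not None)

    asset = CampaignAsset(asset_id="fw25-hero-01", tool_used="image-gen")
    assert not permissible_to_deploy(asset)  # nothing reviewed yet, so it cannot ship

The sign-off field is the load-bearing part of the sketch: it attaches the deployment decision to a person, which is exactly what diffuse tool-user-manager chains tend to lack.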
Folk Theories of AI Capability and Why They Fail Organizations
There is a pattern in how organizations currently reason about AI-generated content. What circulates internally tends to be what I would call folk theories: informal, impressionistic accounts of what AI tools can and cannot do, assembled from marketing materials, news coverage, and peer conversation. A folk theory of generative image AI might hold that it "creates new images" and therefore produces nothing that belongs to anyone. This is technically wrong (generative models can and do reproduce identifiable elements of their training data, including a person's likeness) and legally dangerous, but it is also entirely predictable given how these tools are marketed and how organizations typically onboard them.
Gagrain, Naab, and Grub (2024) distinguish between folk theories of algorithmic systems and structural schemas that accurately represent how those systems function. The gap between the two is consequential. An organization operating on a folk theory of generative AI treats likeness replication as a default feature without liability implications. An organization with an accurate structural schema understands that training data provenance, output similarity thresholds, and commercial deployment each carry distinct legal and reputational exposure. The model who went public in this case paid the cost of someone else's folk theory.
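As one illustration of what an output similarity threshold could mean in practice, the sketch below compares the embedding of a generated image against reference embeddings of people whose likenesses the organization has no rights to use, and flags anything above a chosen cutoff for human review. This is a hedged sketch under stated assumptions: the embedding model, the 0.85 cutoff, and the idea of a consent registry are placeholders for illustration, not an established compliance method.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def flag_likeness_risk(generated: np.ndarray,
                           registry: dict[str, np.ndarray],
                           threshold: float = 0.85) -> list[str]:
        """Return identities whose reference embeddings exceed the threshold.

        `generated` is the embedding of the AI-generated image; `registry`
        maps a person's identifier to a reference embedding. The embedding
        model and the 0.85 cutoff are illustrative assumptions only.
        """
        return [name for name, ref in registry.items()
                if cosine_similarity(generated, ref) >= threshold]

Anything this returns would route to a human checkpoint of the kind sketched earlier rather than straight to publication. The threshold itself is a policy decision, not a technical constant, which is precisely why it belongs inside an organizational schema rather than a folk theory.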
The Accountability Gap Is an Organizational Design Problem
Sundar (2020) argues that machine agency introduces a specific problem for accountability: when an artifact produces an outcome, human actors diffuse responsibility across the human-machine interaction until no single party fully owns it. The fashion brand's likely internal narrative - that the AI produced the image, not any individual employee - is a precise illustration of this diffusion. Accountability becomes nobody's because it is spread across a tool, a user, a manager who approved the campaign, and a procurement decision that selected the tool in the first place.
This is not a reason to treat AI tools as uniquely sinister. It is a reason to treat the organizational structures surrounding those tools as the actual unit of analysis. Rahman (2021) describes how algorithmic systems function as invisible cages, constraining worker behavior through mechanisms that are neither visible nor formally acknowledged. For the model in this case, the constraint is reversed but the invisibility is the same: an algorithmic production process generated consequences for her without her participation, her knowledge, or any organizational checkpoint that might have caught the problem.
What This Event Actually Predicts
The legal advice being offered to the model - sue and get paid - is correct as far as it goes. But litigation addresses individual instances rather than structural conditions. What this event predicts is a wave of similar incidents as generative image tools diffuse further into commercial marketing workflows staffed by people with folk-theory-level understanding of what those tools produce. The variance in outcomes will not follow firm size or technical sophistication in any simple way. It will follow whether organizations have developed accurate schemas for AI-mediated content production or are operating on the impressionistic accounts that currently dominate. Hatano and Inagaki (1986) called this the difference between routine and adaptive expertise. Organizations running generative AI campaigns on procedural autopilot are accumulating routine expertise in a domain where the rules are actively being written. That is a predictable recipe for exactly the kind of incident this model experienced.
References
Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt