What AWS Just Announced
Amazon Web Services recently launched Connect Health, a purpose-built agentic AI platform targeting the administrative and clinical support workflows that consistently drive healthcare worker burnout. The system handles scheduling, clinical documentation, medical coding, patient verification, and related tasks at a $99 price point. The framing from Amazon is straightforward: this platform does the work that hospitals cannot staff. That framing deserves scrutiny, not because the technical capability is implausible, but because it imports a set of assumptions about organizational competence that the research literature consistently challenges.
The Inversion Amazon Is Selling
Classical coordination theory assumes that organizations first develop competence and then deploy tools in service of that competence. What Amazon is proposing with Connect Health inverts this sequence. The platform arrives with pre-specified workflows, agentic decision logic, and optimization targets already encoded. The hospital does not train staff to use the platform so much as it hands over the workflow to the platform entirely. This is not a marginal distinction. When an organization abdicates the workflow rather than augmenting it, the competence that previously lived in human workers does not transfer. It dissolves.
This connects directly to what I have been developing in the Algorithmic Literacy Coordination framework. Platforms mediated by algorithmic logic do not assume pre-existing competence from participants; they generate coordination outcomes endogenously through participation. The difference with Connect Health is that participation is institutional rather than individual. The hospital as an organization becomes a participant in Amazon's coordination logic, and the variance in outcomes across hospitals will not be explained by differential access. Every subscribing hospital has the same $99 platform. The variance will be explained by something else entirely: the structural understanding that hospital administrators bring to the deployment decision.
Folk Theories at the Institutional Level
Algorithmic literacy research distinguishes between folk theories and structural schemas (Kellogg, Valentine, and Christin, 2020). Folk theories are individual impressions about how a system works, often assembled post hoc from observed outputs. Structural schemas are accurate representations of the underlying logic that governs system behavior. The distinction, usually applied to individual gig workers or content creators, scales to institutional actors making procurement decisions. A hospital administrator who selects Connect Health because it "handles the staffing problem" is operating from a folk theory. The structural question, whether the platform's optimization targets are aligned with patient outcomes, clinical quality, or Amazon's retention metrics, requires a schema that most procurement processes do not develop.
Hancock, Naaman, and Levy (2020) draw attention to how AI-mediated communication alters the agency relationship between sender, system, and receiver. In clinical documentation specifically, the agentic system mediates between the clinical encounter and the coded record. If the platform's logic introduces systematic errors or omissions at that layer, the downstream consequences are not visible to the hospital in real time. Rahman's (2021) concept of the invisible cage is instructive here: the constraints imposed by the platform's algorithmic logic are structurally opaque to the organizations operating inside them.
The Staffing Shortage as a Schema Gap
Amazon's marketing premise is that hospitals cannot staff these roles. That is accurate as a description of a labor market condition. It is not accurate as a description of the underlying problem the platform is supposed to resolve. Administrative and clinical documentation tasks are not simply volume problems. They are accuracy, liability, and continuity problems embedded in regulatory environments. Offloading volume to an agentic system does not eliminate institutional responsibility for the outputs. It relocates competence to a vendor while leaving liability with the provider. This is a structural feature of the arrangement that should appear in any schema a purchasing organization develops, but it almost certainly will not appear in the procedural documentation Amazon provides at onboarding.
Hatano and Inagaki (1986) distinguish between routine expertise, which is the ability to execute known procedures reliably, and adaptive expertise, which is the capacity to recognize when a novel situation requires departing from procedure. Connect Health, by design, encodes routine expertise at scale. The hospitals that will use it most effectively are those whose administrators have adaptive expertise about the boundary conditions of agentic systems, specifically when to override, audit, and challenge the platform's outputs. That expertise cannot be purchased for $99.
What This Means for Organizational Theory
The Connect Health launch is a useful case for organizational theorists precisely because the stakes are high enough to make the competence gap consequential rather than merely interesting. When platforms mediate coordination in low-stakes consumer contexts, the cost of schema deficits is suboptimal outcomes for individual users. When platforms mediate coordination in acute care settings, the cost is compounded by regulatory exposure, patient harm, and the erosion of institutional knowledge that no subsequent procurement decision can recover. The organizations that will navigate this well are not the ones that move fastest. They are the ones that develop accurate structural understanding of what they are handing over before they hand it over.
References
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt