Debates in AI ethics frequently invoke the notion of a responsibility gap, suggesting that increasing algorithmic autonomy makes it difficult to identify who should be held accountable for harmful outcomes produced by artificial intelligence systems. This paper argues that the responsibility gap diagnosis often misidentifies the underlying problem. In many organizational deployments of AI systems, responsibility does not disappear; rather, it becomes distributed across layered decision architectures in which different actors shape system behavior at different stages of design, deployment, and operation. The paper introduces the concept of authority illusion to describe the structural misalignment between the location of decision authority and the point at which accountability is typically assigned. While upstream actors define system objectives, select training data, determine acceptable risk thresholds, and approve deployment conditions, responsibility frequently concentrates at downstream operational layers where human reviewers interact with system outputs. To analyze this asymmetry, the paper develops an authority mapping framework that traces decision authority across strategic, design, governance, and operational layers of AI system deployment. Drawing on insights from the philosophy of rule-following, particularly the work of Ludwig Wittgenstein, the analysis argues that decision authority in socio-technical systems is embedded within institutional practices rather than reducible to individual acts of review or intervention. The paper concludes that effective AI governance requires not only recognizing the distributed architecture of decision authority but also establishing forms of structurally independent oversight capable of auditing and intervening in upstream design and deployment decisions. 
By reframing the problem from a responsibility gap to a question of authority distribution within organizational architectures, the paper offers a structural perspective on accountability in contemporary AI systems.

Keywords: AI governance, responsibility gap, authority illusion, authority mapping, human-in-the-loop, algorithmic accountability.
Mumtaz Enser
www.synapsesocial.com/papers/69acc57d32b0ef16a404fbf6 — DOI: https://doi.org/10.5281/zenodo.18892746