Boards of directors, general counsel, and risk and compliance officers increasingly face decisions about AI-mediated systems: whether to deploy them, how to govern them, and who bears responsibility when they cause harm. Public debate on these questions has become organized around the wrong variable. Asking whether AI systems are “conscious” tells us nothing about where legal and institutional responsibility lies. It encourages the attribution of agency to systems that do not possess it, obscures the individuals and institutions who do exercise control, and risks misallocating accountability in ways that will matter in litigation and regulatory scrutiny.

This paper argues that the operative question for governance is not consciousness but evaluative control: who determines the standards by which an AI system’s outputs are assessed, and who has the authority to revise those standards? Contemporary AI systems operate under evaluative criteria—training objectives, reward structures, deployment constraints—that are wholly determined by their designers and deploying organizations. These systems do not author the standards governing their own behavior; they optimize within those standards. That distinction is decisive for responsibility attribution.

The paper further shows that the widely cited “AI responsibility gap”—the claim that AI decision-making creates a space in which no human actor can be held accountable—is a conceptual error. Responsibility does not migrate to AI systems; it remains, and in some cases becomes more concentrated, in the institutions and individuals who design and deploy them. This analysis is consistent with, and reinforced by, established fiduciary principles, including the Caremark doctrine and its corporate law successors.

The contribution is practical and diagnostic. It does not require readers to resolve debates in philosophy of mind. It provides a structural framework that allows boards, counsel, and compliance officers to identify where accountability actually lies in AI-mediated decisions—and to design governance accordingly.
Peter Kahl
www.synapsesocial.com/papers/69c4cd8dfdc3bde44891a130 — DOI: https://doi.org/10.5281/zenodo.19206501