Abstract

The growing demand for Responsible AI has crystallised around a set of normative principles (fairness, transparency, accountability, privacy, safety, and value alignment), yet their implementation often reveals profound conceptual and operational instability. This research takes a constructively critical approach to the structural tensions underlying these pillars, arguing that prevailing frameworks treat responsibility as a static compliance exercise detached from the sociotechnical realities of AI systems. Drawing on traditions in process ethics, participatory design, and adaptive governance, the study develops a reframed understanding of Responsible AI as a dynamic, negotiated, and context-sensitive process. It advances a composite theoretical model and a layered ecosystem framework that redistribute responsibility across design, deployment, governance, and public deliberation. Through this reframing, the work offers both a critique of the dominant paradigm and a practical roadmap for interdisciplinary engagement, ethical responsiveness, and institutional reflexivity. The contribution is twofold: a conceptual synthesis that challenges the assumptions of checklist ethics, and an applied methodology with implications for AI researchers, developers, policymakers, and civil society actors navigating the ethical complexity of real-world AI design, deployment, and use.
Authors: Ayodeji Ibitoye, Makuochi Nkwo, Rita Orji
Journal: AI and Ethics
Affiliations: Dalhousie University; University of Greenwich
DOI: https://doi.org/10.1007/s43681-025-00809-2