As artificial intelligence (AI) systems become increasingly autonomous, scalable, and embedded in critical digital infrastructure, AI safety has emerged as a significant consideration for cybersecurity, system reliability, and institutional trust. Advances in large language models and agentic systems expand the threat surface to include misalignment, large-scale misuse, opaque decision-making, and cross-border risk propagation, while existing debates remain fragmented across technical, ethical, and geopolitical domains. This paper conducts a structured comparative analysis of AI safety perspectives from ten influential thinkers, examining them across five dimensions and reframing their insights through a cybersecurity lens spanning national governance, industry standards, and firm-level design. Building on this synthesis, the study proposes a layered control architecture that organizes technical safeguards, governance mechanisms, and human oversight into a defense-in-depth structure. The framework is conceptual and theory-building, intended to clarify system-level security reasoning and support future empirical refinement across diverse institutional contexts.
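The layered control architecture proposed in the abstract is conceptual, but its defense-in-depth structure can be made concrete. The Python sketch below is purely illustrative and not drawn from the paper: it models technical safeguards, governance mechanisms, and human oversight as an ordered stack of control layers, where a proposed AI action must pass every layer before execution. All names, rules, and thresholds here are hypothetical assumptions introduced for exposition.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative sketch (not the paper's implementation): each layer inspects a
# proposed action and either passes it down the stack or blocks it. The layer
# order mirrors the defense-in-depth framing: technical safeguards first,
# then governance checks, then human oversight as the final backstop.

@dataclass
class Decision:
    allowed: bool
    layer: str   # which layer made the final call
    reason: str

ControlLayer = Tuple[str, Callable[[Dict], Tuple[bool, str]]]

def technical_safeguard(action: Dict) -> Tuple[bool, str]:
    # Hypothetical hard technical limit, e.g. an output filter or capability cap.
    if action.get("risk_score", 0.0) > 0.9:
        return False, "risk score exceeds hard technical limit"
    return True, "within technical limits"

def governance_check(action: Dict) -> Tuple[bool, str]:
    # Hypothetical policy rule, e.g. standards compliance for sensitive domains.
    if action.get("domain") == "critical-infrastructure" and not action.get("approved_use"):
        return False, "use not approved under governance policy"
    return True, "compliant with governance policy"

def human_oversight(action: Dict) -> Tuple[bool, str]:
    # Hypothetical escalation rule: borderline cases require a human reviewer.
    if action.get("risk_score", 0.0) > 0.6:
        return False, "escalated: requires human sign-off"
    return True, "no escalation needed"

LAYERS: List[ControlLayer] = [
    ("technical", technical_safeguard),
    ("governance", governance_check),
    ("human-oversight", human_oversight),
]

def evaluate(action: Dict) -> Decision:
    """Run the action through each control layer in order (defense in depth)."""
    for name, check in LAYERS:
        ok, reason = check(action)
        if not ok:
            return Decision(False, name, reason)
    return Decision(True, "all-layers", "passed every control layer")

if __name__ == "__main__":
    # Passes the technical and governance layers but is caught by oversight.
    print(evaluate({"risk_score": 0.7, "domain": "retail", "approved_use": True}))
```

The design property this sketch illustrates is that no single layer can unilaterally authorize a risky action: an action must clear the technical, governance, and human-oversight layers in sequence, so a failure or bypass at one layer is contained by the layers beneath it.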
Young B. Choi, Paul Hong, Young Soo Park
University of Toledo · Regent University · Midwest University
Published in: Systems
DOI: https://doi.org/10.3390/systems14040447 (via www.synapsesocial.com/papers/69e866f16e0dea528ddeb390)