As the adoption of artificial intelligence (AI) grows throughout government, there has never been more awareness of the need to build and maintain AI systems with a clear understanding of their ethical risk. Every day, these systems shape human experience, bringing issues of trust and privacy, equity, autonomy, data integrity, and regulatory compliance into focus. But how do agencies turn a commitment to abstract ethical AI principles into a fully operational responsible AI strategy that delivers not just transparency and reduced risk but also innovation that improves mission performance?
As AI increasingly drives decisions that affect both individual lives and critical government missions, decision-makers urgently need a data-driven way to understand whether their AI systems are ethical. However, evaluating the ethical dimensions of an AI system is challenging because it requires an organization to pull off the difficult feat of “quantifying the philosophical.”
Consider the many frameworks, principles, and policies that define the field of responsible AI—such as the Department of Defense’s (DOD) AI Ethical Principles, the Principles of Artificial Intelligence Ethics for the Intelligence Community, and the Blueprint for an AI Bill of Rights. These frameworks provide agencies with overarching guidelines essential for defining an ethical vision. But they offer few tangible tools and little practical guidance to operationalize responsible AI.
What’s needed is a rigorous, risk-based method for assessing the ethical risk of AI systems—and a corresponding roadmap for taking continuous, concrete action to ensure these systems remain responsibly aligned with mission objectives.