The Current Landscape of AI Ethics and Governance
AI ethics and governance have quickly become crowded fields, with everyone from policymakers to influencers weighing in on the conversation. The OECD tracker alone lists over 1,800 national-level documents detailing various initiatives, policies, frameworks, and strategies as of September 2024.
Despite the wealth of high-level guidance available, there is still a noticeable gap between policy and real-world implementation. As Mittelstadt (2021) points out, simply having ethical principles in place does not guarantee that AI systems will act ethically.
So why does this gap exist, and how can leaders in data science and AI work to bridge it? In this series, we will dive into the practical aspects of AI ethics and governance within organizations, breaking the gap down into three key components. Drawing on both research and real-world experience, we will propose strategies and structures that have proven effective in implementing AI ethics and governance at scale.
The Interpretation Gap: Bridging Principles and Practice
One of the key challenges in implementing AI ethics and governance is what we like to call the “interpretation gap.” This gap arises from the difficulty of translating broad ethical principles, such as “human centricity,” into actionable guidelines that can be applied in real-world scenarios.