The Rise of Concerns Over Data Privacy in Generative AI
According to a recent report by Deloitte, concerns over data privacy in relation to generative AI have seen a significant surge. While only 22% of professionals ranked it among their top three concerns last year, this year that figure has risen to 72%.
The next highest ethical concerns related to GenAI were transparency and data provenance, with 47% and 40% of professionals ranking them in their top three this year. Surprisingly, only 16% expressed concern over job displacement.
Staff are also becoming more curious about how AI technology operates, particularly where sensitive data is involved. A recent study by HackerOne found that nearly half of security professionals consider AI risky, citing leaked training data as a major threat.
Business leaders are also increasingly prioritizing security, with 78% ranking “safe and secure” as one of their top three ethical technology principles. This marks a significant 37% increase from 2023, highlighting the growing importance of security in the technology landscape.
The survey results are part of Deloitte’s 2024 “State of Ethics and Trust in Technology” report, which surveyed over 1,800 business and technical professionals worldwide about the ethical principles they apply to technologies, specifically GenAI.
High-profile AI Security Incidents are Drawing More Attention
Around half of the respondents to this year’s and last year’s reports stated that cognitive technologies like AI and GenAI pose the biggest ethical risks compared to other emerging technologies. This increased focus on AI ethics may be attributed to a broader awareness of the importance of data security, fueled by well-publicized incidents like the OpenAI ChatGPT bug that exposed personal data of subscribers.
As security incidents challenge trust in AI technologies, industry leaders such as Beena Ammanath, Global Deloitte AI Institute and Trustworthy AI leader, emphasize the need for evolved ethical frameworks to ensure a positive impact.
Impact of AI Legislation on Organizations Worldwide
Despite growing adoption of GenAI in the workplace, decision-makers are increasingly concerned about ensuring compliance with AI legislation. The introduction of the E.U. AI Act and the U.S. AI Executive Order has led companies to prioritize ethical tech policies to avoid regulatory penalties.
The E.U. AI Act imposes strict requirements on high-risk AI systems, with non-compliance resulting in significant fines. Companies like Amazon, Google, Microsoft, and OpenAI have voluntarily committed to implementing the Act’s requirements to demonstrate responsible AI deployment and avoid legal challenges.
In response to this legislation, organizations worldwide have adjusted how they use AI technologies. The accelerated adoption of GenAI has underscored the need for companies to embed ethical standards in their AI governance so that these tools are used responsibly and beneficially.