Security Leaders Discuss Banning AI-Generated Code


Artificial intelligence has been hailed for its ability to streamline tasks for developers. However, a recent study reveals that security leaders are far more apprehensive: 63% have considered banning AI-generated code because of the potential risks it poses.

Concerns center on the quality of AI-generated code, which may rely on outdated libraries and produce subpar results in shipped products. Security professionals also fear that AI-written code does not undergo the same rigorous review as code written by hand, raising the likelihood of errors and vulnerabilities.

Tariq Shaukat, CEO of code security firm Sonar, points to a growing number of companies experiencing issues and outages because AI-generated code was not subjected to sufficient quality checks. Developers may also feel less accountable for code they did not write themselves, and therefore less motivated to scrutinize it carefully.

The report from Venafi, “Organizations Struggle to Secure AI-Generated and Open Source Code,” shows that while 83% of organizations use AI for coding, security concerns persist. Despite the risks, 72% of respondents feel compelled to allow AI-assisted coding to stay competitive in the market.

Security Challenges and Sleepless Nights

Security professionals say they struggle to keep pace with AI-accelerated development and to monitor how AI is being deployed across their organizations. That lack of visibility fuels fears that vulnerabilities are slipping through unnoticed, and 59% report losing sleep over the issue.

The widespread use of AI in code development is expected to force a broader security reckoning: 80% of respondents foresee a security crisis arising from AI-generated code. Kevin Bocek, Chief Innovation Officer at Venafi, emphasizes the delicate balance between empowering developers with AI tools and maintaining robust security practices.

Conclusion: Striking a Balance

As organizations grapple with the implications of AI in coding, it’s clear that a careful balance between innovation and security must be maintained. While the allure of AI-driven productivity is undeniable, the risks associated with unchecked AI-generated code cannot be ignored. Security leaders must navigate this complex landscape to safeguard their products and maintain trust in the face of evolving threats.
