48% of Security Pros See AI as Risky

SeniorTechInfo

The Risks and Rewards of AI in Security

In a recent survey of 500 security professionals by HackerOne, a security research platform, 48% said AI poses the most significant security risk to their organizations. Their top concerns include leaked training data (35%), unauthorized use of AI within the organization (33%), and the hacking of AI models by outsiders (32%). These findings underscore the need for companies to reevaluate their AI security strategies before vulnerabilities turn into real threats.

AI Tends to Generate False Positives for Security Teams

While the full Hacker Powered Security Report is set to be released later this fall, a HackerOne-sponsored SANS Institute report discovered that 58% of security professionals believe that security teams and threat actors could engage in an “arms race” using generative AI tactics. Security professionals have found success in using AI to automate tasks (71%), but they also recognize the potential for threat actors to exploit AI for malicious purposes, such as AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).

In a press release, Matt Bromiley, an analyst at the SANS Institute, emphasized that teams must find the right applications for AI while acknowledging its limitations, or risk creating more work for themselves. Among the surveyed professionals, 68% said an external review of AI implementations is the most effective way to identify security issues.

Dane Sherrets, senior solutions architect at HackerOne, noted that teams are now more aware of AI's limitations and of the importance of human context in security operations. Despite these challenges, he said, AI can still enhance productivity and handle routine tasks efficiently.

Additional insights from the SANS 2024 AI Survey include:

  • 38% plan to adopt AI in their security strategy in the future.
  • 38.6% have faced shortcomings in using AI to detect or respond to cyber threats.
  • 40% cite legal and ethical implications as a challenge to AI adoption.
  • 41.8% of companies have faced pushback from employees skeptical of AI decisions.
  • 43% currently use AI in their security strategy.
  • AI technology is commonly used in anomaly detection systems, malware detection, and automated incident response.
  • 58% of respondents reported that AI systems struggle to detect new threats.
  • 71% of those who faced shortcomings with AI reported false positives as a common issue.
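The false-positive problem the respondents describe often comes down to how anomaly detection works: a model flags anything that deviates sharply from a learned baseline, so unusual-but-legitimate activity gets flagged alongside real threats. As a minimal, hypothetical sketch (not any vendor's actual detection logic), a simple z-score detector illustrates the mechanism:

```python
import random
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Baseline of "normal" hourly request counts, plus one benign spike
random.seed(0)
traffic = [random.gauss(100, 5) for _ in range(200)]
traffic.append(160)  # a legitimate burst (e.g. a marketing campaign), not an attack

flagged = zscore_anomalies(traffic)
# The benign spike lands among the flagged values: a false positive
# that a human analyst would still have to triage.
```

The detector has no notion of intent, which is why survey respondents pair AI-driven detection with human review rather than treating alerts as verdicts.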

Anthropic Engages Security Researchers on AI Safety Measures

Generative AI maker Anthropic recently expanded its bug bounty program on HackerOne. The company is inviting the hacker community to test the safety measures intended to prevent misuse of its AI models, offering rewards for identifying novel jailbreaking attacks and providing early access to its safety mitigation system.
