Can Security Experts Use Generative AI without Prompt Engineering Skills?

SeniorTechInfo

The Role of Generative AI in Information Security Training

Generative AI is rapidly gaining popularity across various industries, with professionals exploring its potential for tasks such as creating information security training materials. But the question remains: is it truly effective?

Recently, Brian Callahan, senior lecturer and graduate program director at Rensselaer Polytechnic Institute, along with undergraduate student Shoshana Sugerman, presented the findings of their experiment at the ISC2 Security Congress in Las Vegas. Their study focused on using ChatGPT to create cybersecurity awareness training materials.

Experiment Involving ChatGPT in Cyber Training

The main objective of the experiment was to determine how effectively security professionals could train an AI to create realistic security training materials. Three groups were given the task of creating cybersecurity awareness training using ChatGPT: security experts with ISC2 certifications, self-identified prompt engineering experts, and individuals with both qualifications. The training materials were then distributed to the campus community for feedback on their effectiveness.

The researchers hypothesized that there would be no significant difference in the quality of the training across groups. However, the results revealed differences in the perceived efficacy of the training each group produced.

Individuals who took the training designed by prompt engineers rated themselves higher on avoiding social engineering attacks and on password security, while those who took the training designed by security experts rated themselves higher on recognizing and avoiding social engineering attacks, detecting phishing, and prompt engineering. Interestingly, those who took the training designed by people with both qualifications rated themselves higher on understanding cyber threats and detecting phishing.

Challenges and Limitations

Despite the positive ratings from training takers, the researchers encountered challenges with ChatGPT-generated content. In some instances, the AI made mistakes, leading to inaccurate information being included in the training materials. This highlights the importance of thorough review and editing when utilizing generative AI for content creation.

Furthermore, the researchers noted that disclosure of AI-generated content is essential. Participants had mixed reactions upon learning that the training materials were created using AI, emphasizing the need for transparency in the use of AI for educational purposes.

Although the experiment provided valuable insights into the potential of generative AI in information security training, Callahan pointed out some limitations. Future studies may explore the effectiveness of training created entirely by humans as a comparison to AI-generated content.

In conclusion, while generative AI shows promise as a tool for creating training materials, it also poses risks and challenges that must be addressed. Transparency, thorough review, and continuous improvement are essential factors to consider when incorporating AI into educational practices.
