Dwain.B
15 Apr 2024
Recent research reveals that OpenAI's GPT-4 can exploit real-world vulnerabilities more effectively than its predecessors and other AI models. By analysing CVE security advisories, GPT-4 achieved an 87% success rate in exploiting vulnerabilities that had been disclosed but not yet patched. This capability could be particularly dangerous in the hands of script kiddies, who could use it to conduct attacks with tools they do not fully understand. The study underscores the need for robust cybersecurity defences against emerging AI capabilities.
Read more on the original article from The Register here.