Researchers have found that GPT-4 can identify security vulnerabilities on its own and exploit one-day flaws using information from the Common Vulnerabilities and Exposures (CVE) database. A study by researchers at the University of Illinois Urbana-Champaign revealed the potential for large language models (LLMs) to perform malicious actions if manipulated. They noted previous studies showing that these models can hack websites, but emphasized that those attacks were limited to simple vulnerabilities.
The researchers compiled a dataset of critical-severity Common Vulnerabilities and Exposures (CVEs) to demonstrate how GPT-4 can autonomously exploit security flaws. They found that GPT-4 was able to exploit 87 percent of the vulnerabilities, while earlier models such as GPT-3.5 and open-source tools such as ZAP and Metasploit succeeded far less often. This success was attributed to the detailed CVE descriptions provided, which GPT-4 used to its advantage.
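The setup described above can be pictured as an agent loop: the model receives a published CVE advisory plus tool access, and iteratively proposes steps. The sketch below is purely illustrative and runs without any API access; the advisory text, the `FakeLLM` stand-in, and all function names are assumptions for demonstration, not the study's actual code.

```python
# Illustrative sketch of an LLM-driven exploit agent, as described in the
# study's setup. FakeLLM is a stub so the loop runs without an API key.

CVE_ADVISORY = (
    "CVE-2024-XXXX (hypothetical): SQL injection in the login endpoint of "
    "ExampleApp 1.2 allows attackers to bypass authentication via the "
    "'user' field."
)

def build_prompt(advisory: str) -> str:
    """Combine a system instruction with the CVE description, which is the
    key input the study says drove GPT-4's success rate."""
    return (
        "You are an autonomous security-testing agent with terminal access.\n"
        f"Target advisory:\n{advisory}\n"
        "Plan the next exploit step, or reply DONE when finished."
    )

class FakeLLM:
    """Stand-in for GPT-4: emits one canned step, then stops."""
    def __init__(self) -> None:
        self.calls = 0

    def complete(self, prompt: str) -> str:
        self.calls += 1
        if self.calls == 1:
            return "STEP: probe the 'user' field with a single-quote payload"
        return "DONE"

def run_agent(llm: FakeLLM, advisory: str, max_steps: int = 5) -> list[str]:
    """Feed the advisory to the model and collect proposed steps until it
    signals completion. A real agent would also execute each step and feed
    the command output back into the prompt."""
    prompt = build_prompt(advisory)
    steps: list[str] = []
    for _ in range(max_steps):
        reply = llm.complete(prompt)
        if reply == "DONE":
            break
        steps.append(reply)
        prompt += f"\nPrevious step: {reply}"
    return steps

steps = run_agent(FakeLLM(), CVE_ADVISORY)
print(steps)
# → ["STEP: probe the 'user' field with a single-quote payload"]
```

The point of the sketch is the data flow, not the payloads: the detailed advisory text is placed directly in the prompt, which the study identifies as the main reason GPT-4 outperformed models and scanners that lacked that context.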
One researcher suggested that security organizations should reconsider publishing detailed vulnerability reports, to keep malicious actors from exploiting them. Instead, he advocated proactive measures such as regular updates to counter these threats. The study highlights the potential for advanced language models to be used in cyberattacks, underscoring that defenders cannot rely on obscurity alone to prevent exploitation.