With $1 Cyberattacks on the Rise, Durable Defenses Pay Off
As generative AI makes cyberattacks cheaper and more accessible, robust defenses are crucial to prevent vulnerabilities from being exploited.
Transforming a newly discovered software vulnerability into a cyberattack used to take months. Today, generative AI can do the job in minutes, often for less than a dollar of cloud-computing time. But while large language models present a real cyberthreat, they also provide an opportunity to reinforce cyberdefenses.
Anthropic reports that its Claude Mythos preview model has already helped defenders preemptively discover over a thousand zero-day vulnerabilities, including flaws in every major operating system and web browser, with Anthropic coordinating disclosure and patching of the revealed flaws.

It is not yet clear whether AI-driven bug finding will ultimately favor attackers or defenders. But to understand how defenders can increase their odds, and perhaps hold the advantage, it helps to look at an earlier wave of automated vulnerability discovery. In the early 2010s, a new category of software appeared that could attack programs with millions of random, malformed inputs: a proverbial monkey at a typewriter, tapping on the keys until it finds a vulnerability.
When these 'fuzzers,' such as American Fuzzy Lop (AFL), hit the scene, they found critical flaws in every major browser and operating system. The security community's response was instructive. Rather than panic, organizations industrialized the defense.

For instance, Google built a system called OSS-Fuzz that runs fuzzers continuously, around the clock, on thousands of software projects, so that software providers can catch bugs before they ship, not after attackers find them. The expectation is that AI-driven vulnerability discovery will follow the same arc. Organizations will integrate the tools into standard development practice, run them continuously, and establish a new baseline for security.
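To make the monkey-at-a-typewriter idea concrete, here is a minimal sketch of mutation-based fuzzing in Python. The target parser, its planted bug, and every name below are hypothetical, invented purely for illustration; real fuzzers like AFL add coverage feedback and instrumentation that this toy loop omits.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy target: a parser with a planted out-of-bounds bug (hypothetical)."""
    if len(data) < 2 or data[0] != 0x7F:
        raise ValueError("bad magic")  # graceful rejection, not a crash
    declared_len = data[1]
    # Planted flaw: trusts the declared length and may index past the buffer.
    return data[1 + declared_len]

SEED = b"\x7f\x03abc"  # one well-formed input to mutate

def mutate(data: bytes) -> bytes:
    """Apply a few random byte flips, insertions, and deletions."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 4)):
        roll = random.random()
        if roll < 0.5 and buf:      # flip a byte
            buf[random.randrange(len(buf))] = random.randrange(256)
        elif roll < 0.75:           # insert a byte
            buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
        elif buf:                   # delete a byte
            del buf[random.randrange(len(buf))]
    return bytes(buf)

def fuzz(rounds: int = 100_000) -> None:
    """Hammer the parser with mutated inputs until one crashes it."""
    for i in range(rounds):
        sample = mutate(SEED)
        try:
            parse_record(sample)
        except ValueError:
            pass                    # input rejected cleanly: expected
        except IndexError:          # unhandled crash: the fuzzer's payoff
            print(f"crash after {i} tries on input {sample!r}")
            return
    print("no crash found")

if __name__ == "__main__":
    fuzz()
```

Even this crude loop stumbles onto the planted bug quickly. AFL's key refinement was to keep the mutants that reach new code paths and mutate those further, which is what let fuzzing scale from toy parsers to real browsers and operating systems.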
But the analogy has a limit. Fuzzing required significant technical expertise to set up and operate; it was a tool for specialists. An LLM, meanwhile, finds vulnerabilities with just a prompt, and that creates a troubling asymmetry: attackers no longer need to be technically sophisticated to exploit code, while robust defenses still require engineers to read, evaluate, and act on what the AI models surface. The human cost of finding and exploiting bugs may approach zero, but the cost of fixing them won't.

Is AI Better at Finding Bugs Than Fixing Them?
In the opening to his book Engineering Security (2014), Peter Gutmann observed that "a great many of today's security technologies are 'secure' only because no one has ever bothered to look at them." That observation was made before AI made looking for bugs dramatically cheaper.

The natural policy response to the problem is to go after AI at the source: holding AI companies responsible for spotting misuse, putting guardrails in their products, and pulling the plug on anyone using LLMs to mount cyberattacks. However, blocking a few bad actors does not make for a satisfying, comprehensive solution. At root, there are two reasons why policy alone does not solve the whole problem.
Source: IEEE Spectrum