AI’s Double-Edged Sword: Finding Flaws Before Hackers Do

AI tools like RunSybil's Sybil can detect critical vulnerabilities, but the same capabilities create security risks as offensive actors gain access to them. How can we balance innovation with safety?

[Image: AI cybersecurity tools detecting vulnerabilities in a system diagram]

An AI found a critical security flaw no human had noticed before. Now, it’s a race to decide whether the same technology will protect us—or make us more vulnerable.

RunSybil’s AI tool Sybil detected a previously unknown GraphQL vulnerability in a customer’s system in November, revealing the tool’s ability to reason across complex technical systems. Ariel Herbert-Voss, a researcher at RunSybil, described the discovery as “a reasoning step in terms of models’ capabilities—a step change.”
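The article does not disclose the flaw itself, but one class of GraphQL bug that automated tools routinely surface is broken object-level authorization, where a resolver returns an object without checking that the requester is allowed to see it. A minimal TypeScript sketch, with a hypothetical schema and resolver names (an illustration of the bug class, not RunSybil's actual finding):

```typescript
// Hypothetical resolvers illustrating broken object-level authorization,
// a common GraphQL flaw class. Schema and names are illustrative only.

interface Invoice {
  id: string;
  ownerId: string;
  amount: number;
}

interface Context {
  userId: string; // authenticated caller, set by auth middleware
}

const invoices = new Map<string, Invoice>([
  ["inv-1", { id: "inv-1", ownerId: "user-a", amount: 120 }],
  ["inv-2", { id: "inv-2", ownerId: "user-b", amount: 940 }],
]);

// VULNERABLE: any authenticated user can fetch any invoice by guessing IDs.
function invoiceResolverVulnerable(args: { id: string }, _ctx: Context): Invoice | null {
  return invoices.get(args.id) ?? null;
}

// FIXED: the resolver verifies the caller owns the object before returning it.
function invoiceResolverSafe(args: { id: string }, ctx: Context): Invoice | null {
  const invoice = invoices.get(args.id);
  if (!invoice || invoice.ownerId !== ctx.userId) {
    return null; // deny by default; don't leak existence of other users' data
  }
  return invoice;
}
```

Bugs like this are hard for pattern-matching scanners to catch because the query is syntactically valid; spotting them requires reasoning about who should be allowed to see what, which is the kind of step-change capability Herbert-Voss describes.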

Dawn Song, a cybersecurity expert at UC Berkeley, highlights that AI models like Anthropic’s Claude Sonnet 4.5 now identify 30% of known vulnerabilities in benchmarks—up from 20% in 2025. “AI agents are able to find zero-days, and at very low cost,” she said, calling this an “inflection point” in cybersecurity capabilities.

Proposed countermeasures include giving security researchers pre-launch access to new AI models and adopting "secure-by-design" code generation to offset AI-driven hacking risks.
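To make "secure-by-design" generation concrete: a code model can be steered to emit parameterized queries instead of string-built SQL, closing off injection by construction. A minimal sketch (an illustration, not any vendor's actual pipeline), assuming Node with the better-sqlite3 library:

```typescript
import Database from "better-sqlite3";

const db = new Database(":memory:");
db.exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
db.prepare("INSERT INTO users (name) VALUES (?)").run("alice");

// VULNERABLE: concatenating input lets "' OR '1'='1" escape the string literal.
function findUserUnsafe(name: string) {
  return db.prepare(`SELECT * FROM users WHERE name = '${name}'`).all();
}

// SECURE-BY-DESIGN: a bound parameter keeps input in the data channel,
// so it can never alter the query's structure.
function findUserSafe(name: string) {
  return db.prepare("SELECT * FROM users WHERE name = ?").all(name);
}
```

A generator that only ever emits the second pattern removes the whole bug class, rather than relying on scanners to catch each instance after the fact.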

However, RunSybil warns that AI's rapidly improving coding and system-interaction abilities will disproportionately benefit attackers.