Cybersecurity looks like proof of work now
Token-based arms race
- Many see AI-assisted security as resembling proof-of-work: more tokens → more vulnerabilities found, for both attackers and defenders.
- Some argue there are no clear diminishing returns yet on complex tasks (e.g., multi-step intrusions), implying that whoever spends more compute wins more often.
- Others think diminishing returns likely appear sooner on simpler targets (e.g., single libraries).
Defender vs attacker economics
- One view: security has always been about how much money/effort an adversary will commit; AI mostly changes price and speed.
- Some argue cybersecurity is “advantage defender in principle” if you can eventually close all holes with finite effort; others insist the defender’s dilemma (“attacker only needs one success”) still dominates.
- Defense-in-depth and layered checks are framed as ways to push success probability toward effectively zero, even if individual layers are imperfect.
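The layered-defense claim can be made concrete with a back-of-envelope probability model. This is a sketch under a strong assumption the thread itself would dispute: that layers fail independently (real controls are often correlated, so treat the result as an optimistic bound, not a guarantee).

```python
# Back-of-envelope model of defense-in-depth: the probability an attack
# slips past every layer is the product of the per-layer miss rates,
# ASSUMING layers fail independently (rarely true in practice).

def breach_probability(catch_rates):
    """P(attack passes all layers) = product of (1 - catch_rate) per layer."""
    p = 1.0
    for rate in catch_rates:
        p *= (1.0 - rate)
    return p

# Three imperfect layers, each catching 90% of attempts,
# drive attacker success toward 0.1 * 0.1 * 0.1 = ~0.001.
print(breach_probability([0.9, 0.9, 0.9]))
```

The point of the model is qualitative: even mediocre layers compound, which is why "effectively zero" is reachable without any single perfect control.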
Code simplicity, quality & formal methods
- Several comments stress that simpler, smaller, well-designed systems have inherently less attack surface and are cheaper to harden.
- Examples: preferring minimal dependencies, simple authenticated interfaces, and robust input boundaries.
- Formal verification is proposed as a way to escape the token race (“no bugs to find”), but others note its limits: requirements are hard to specify, real-world behavior is messy, and most codebases and organizations aren’t ready for it.
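The "simple authenticated interface" idea above can be illustrated with a minimal HMAC-checked message boundary, using only Python's standard library. This is a sketch of the shape of such a boundary, not a vetted protocol: key distribution, rotation, and replay protection are all omitted.

```python
# Minimal authenticated message boundary: small surface, standard primitives.
# Sketch only -- key management and replay protection are out of scope.
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(sign(key, message), tag)
```

The design choice this illustrates: an interface that rejects anything unauthenticated at the edge has far less reachable attack surface than one that parses untrusted input first and authorizes later.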
Practical use of LLMs in security
- Current LLM-based vuln scanning is described as primitive but already useful (per-file prompts, periodic rescans, scanning only changed files, etc.).
- Defenders may have efficiency advantages: they can scan full source with context, while attackers often start from binaries, APIs, or partial access.
- LLMs are reported to be strong at decompilation, reverse engineering, and deobfuscation (e.g., binaries, JS), at high token cost but still much cheaper than manual RE.
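The "per-file prompts over changed files" pattern described above is simple enough to sketch. Here `ask_llm` is a hypothetical stand-in for whatever model API you use; the git plumbing and the one-prompt-per-file loop are the actual pattern the thread describes.

```python
# Sketch of per-file LLM vuln scanning over files changed since a base ref.
# `ask_llm` is a placeholder for a real model call, injected by the caller.
import subprocess

PROMPT = ("You are a security reviewer. List potential vulnerabilities "
          "in the following file, with line references:\n\n{source}")

def changed_files(base="origin/main"):
    """Files modified relative to a base ref, via `git diff --name-only`."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines()
            if f.endswith((".py", ".js", ".c"))]

def scan(files, ask_llm):
    """One prompt per file: crude, but cheap to re-run and easy to parallelize."""
    findings = {}
    for path in files:
        with open(path, encoding="utf-8", errors="replace") as fh:
            findings[path] = ask_llm(PROMPT.format(source=fh.read()))
    return findings
```

Restricting the scan to changed files keeps recurring token cost proportional to churn rather than to repository size, which is what makes "periodic rescans" affordable.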
Open source, supply chain, and code access
- Once source is exfiltrated, AI can quickly audit it for privilege escalations, intensifying the impact of supply-chain and endpoint compromises.
- Popular OSS may get more aggregate scanning (by both sides), potentially driving it toward fewer vulnerabilities, if organizations actually invest in that scanning.
- Some predict widespread cloning of commercial software and games, plus a surge in variant FOSS projects, driving more code exposure.
Skepticism and limitations
- Several commenters think the “more tokens wins” framing is overhyped or self-serving for GPU/model vendors.
- Others highlight that real-world infosec is often about policy, user behavior, and messy enterprise constraints, not just code-level bugs.
- Concerns are raised about over-reliance on AI vendors for both building and securing systems.
Broader impacts & practices
- Some foresee rising costs and expectations for externally facing “trusted” software, potentially squeezing infrastructure startups.
- Personal and org practices suggested: stricter separation of dev and personal environments, stronger authentication (e.g., hardware keys), and assuming cloud password vaults or code hosts may be breached.
- Multiple comments insist that better engineering discipline and security culture remain central; tokens help, but don’t replace “being clever” about design and process.