Significant rise in reports
AI-driven surge in kernel bug reports
- Kernel security list reportedly went from ~2–3 reports/week to 5–10/day, largely due to AI-powered tools.
- Early “AI slop” reports were noisy; the current wave is mostly accurate and has forced maintainers to bring in more triagers.
- Some see this as flushing a long-standing backlog of bugs faster than they’re created.
- Others doubt this will lead to “better than pre-2000” quality or a sustainably lower bug rate.
Rust, unsafe code, and memory safety
- One side notes ~70% of vulnerabilities are memory safety issues and cites evidence that adding new Rust code reduces such bugs.
- Counterpoint: “real-world Rust uses unsafe everywhere,” so we still get memory problems; skeptics claim unsafe-heavy Rust still hits undefined behavior in practice.
- Replies argue many Rust codebases use zero unsafe; unsafe is concentrated in FFI, low-level data structures, and systems code.
- General agreement that writing sound unsafe code is hard but feasible; Rust’s value is in enforcing rules that C/C++ only document.
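The pattern the thread converges on can be sketched in a few lines: confine `unsafe` to a small block whose invariant is checked and documented, and expose only a safe API so callers cannot violate it. This is a minimal illustration, not kernel code; the function name is made up for the example.

```rust
/// Returns the first element, or None if the slice is empty.
/// The public signature is safe; all `unsafe` is internal.
fn first_or_none(xs: &[i32]) -> Option<i32> {
    if xs.is_empty() {
        None
    } else {
        // SAFETY: we just checked that `xs` is non-empty,
        // so index 0 is in bounds for `get_unchecked`.
        Some(unsafe { *xs.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_or_none(&[7, 8, 9]), Some(7));
    assert_eq!(first_or_none(&[]), None);
}
```

The `// SAFETY:` comment is the convention many Rust codebases (including the kernel’s Rust bindings) use to record the invariant each unsafe block relies on, which is exactly the discipline C can only document, not enforce.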
Pre-2000 software quality vs today
- Some argue CD/floppy distribution and difficulty of patching forced heavier testing and fewer user-visible bugs at release.
- Others recall frequent crashes, data loss, and extremely weak security (worms, email macro viruses, early Windows exploits).
- Several note complexity and connectivity were lower, so attack surfaces were smaller even if code quality wasn’t higher.
- Nostalgia is called out: older software often required fragile configurations and manual workarounds.
Security model, CVEs, and updates
- Debate over “security bugs are just bugs” vs treating exploitable issues as special.
- Many users resist frequent updates due to breakage, new features, and compatibility churn, preferring to patch only critical CVEs.
- Kernel-side view: many correctness bugs can be security-relevant, especially with AI tools that make exploitation easier.
- Some emphasize vendor LTS and paid support as ways to mitigate update risk while still getting security fixes.
AI “slop” vs useful assistance
- Maintainers complain about low-signal LLM-generated reports that waste triage time.
- Others argue that if a report reveals a real vulnerability, its AI origin is irrelevant; the real risk is attackers using the same tools.
- Proposed responses include better triage automation and giving maintainers access to strong AI tools to keep up.