The need for memory safety standards

Language choices and productivity

  • Many argue we already have suitable memory-safe stacks: Rust for kernels/systems, GC’d languages (C#/Java/F#/Go/Lisp) for backends, TS+Wasm for frontends. Others prefer “Rust everywhere” to avoid multiple stacks.
  • Debate over when Rust is more or less productive than GC languages:
    • Pro-Rust side: recent experience shows high productivity, and the “Rust is slow to develop in” trope is outdated. It buys freedom from data races and many memory bugs.
    • Skeptical side: affine types, borrow checker, async ecosystem, and long compiles add cognitive load, especially for typical web/backend services where GC and higher-level runtimes shine.
  • Concrete async/concurrency snippets in C# and Rust are compared; some see Rust’s stricter model as “decision fatigue,” while others attribute the friction to unfamiliarity and consider the tradeoffs appropriate for Rust’s niche.
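The “freedom from data races” the pro-Rust side cites can be made concrete. A minimal sketch (the thread’s actual C#/Rust snippets aren’t reproduced here; `parallel_count` is an invented example): shared mutable state must go through a `Send + Sync` type like `Mutex`, so the unsynchronized version is a compile error rather than a latent race.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The compiler only accepts this because Mutex<i64> is Send + Sync;
// handing a plain &mut i64 to multiple threads would be rejected at
// compile time — that is the data-race freedom being referred to.
fn parallel_count(n_threads: usize, per_thread: usize) -> i64 {
    let counter = Arc::new(Mutex::new(0i64));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
    println!("ok");
}
```

The skeptics’ point survives the example: the GC’d-language version needs none of the `Arc`/`clone`/`lock` ceremony, which is exactly the cognitive load being debated.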

Alternatives: BEAM, Lisp, Go, Kotlin

  • Several advocate Elixir/Erlang (BEAM) for backends: excellent concurrency, fault tolerance, and managing huge numbers of connections without Kubernetes complexity.
  • Concerns: BEAM lacks strong static typing, though there’s ongoing work on a type system.
  • Lisp is defended as stable, low-churn, and performant enough; detractors dismiss “use Lisp for backend” as unrealistic outside niches.
  • Go is seen as “good enough” for tooling and services, with simple deployment but limited type system; .NET/Java defenders argue they now match or exceed Go on performance and tooling.
  • Kotlin gets a brief nod for null safety and immutability, though some question calling that “memory safety” over Java.
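What Kotlin-style null safety buys can be sketched in Rust’s equivalent, `Option` (Kotlin’s `String?` corresponds roughly to `Option<String>`; the `greeting` function is an invented example): absence is a distinct type, so a forgotten null check is a compile error rather than a runtime NullPointerException.

```rust
// Null safety sketched with Option: the compiler forces both arms,
// so "forgot to handle null" cannot compile.
fn greeting(name: Option<&str>) -> String {
    match name {
        Some(n) => format!("hello, {n}"),
        None => "hello, stranger".to_string(),
    }
}

fn main() {
    assert_eq!(greeting(Some("ada")), "hello, ada");
    assert_eq!(greeting(None), "hello, stranger");
}
```

This also illustrates the objection in the thread: null safety eliminates one bug class, but it is type safety, not memory safety in the UAF/bounds sense.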

Existing C/C++ codebases and mitigations

  • Strong pushback against “just rewrite everything in Rust”: Linux, Chromium, and large C++ systems will live for decades.
  • Discussion of partial mitigations: CFI, shadow stacks, PAC, MTE, hardened allocators, bounds-checking flags, and standards like MISRA.
  • Security practitioners note these mitigations significantly raise the bar but don’t fully eliminate modern exploit classes (e.g., data-only attacks, time-of-check/time-of-use (TOCTOU) races, use-after-free (UAF)).
  • One camp says this practical hardening + input validation is “enough” for real-world risk; others argue residual risk justifies a long-term migration to memory-safe paradigms.
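One of the mitigations listed, hardened allocators, typically blunts use-after-free by delaying reuse of freed blocks. A toy model of that quarantine idea (invented types; indices stand in for pointers so the sketch stays safe — a real allocator operates on raw memory):

```rust
use std::collections::VecDeque;

// Toy quarantining allocator: freed slots are poisoned immediately but
// only become reusable after QUARANTINE further frees, so a dangling
// reference is less likely to alias freshly allocated data.
const QUARANTINE: usize = 4;

struct ToyAlloc {
    slots: Vec<Option<String>>,
    free_now: Vec<usize>,
    quarantined: VecDeque<usize>,
}

impl ToyAlloc {
    fn new() -> Self {
        ToyAlloc { slots: Vec::new(), free_now: Vec::new(), quarantined: VecDeque::new() }
    }
    fn alloc(&mut self, v: String) -> usize {
        if let Some(i) = self.free_now.pop() {
            self.slots[i] = Some(v);
            i
        } else {
            self.slots.push(Some(v));
            self.slots.len() - 1
        }
    }
    fn free(&mut self, i: usize) {
        self.slots[i] = None;          // poison the slot immediately
        self.quarantined.push_back(i); // ...but delay its reuse
        if self.quarantined.len() > QUARANTINE {
            self.free_now.push(self.quarantined.pop_front().unwrap());
        }
    }
}

fn main() {
    let mut a = ToyAlloc::new();
    let x = a.alloc("secret".into());
    a.free(x);
    // The next allocation does NOT land in the just-freed slot:
    let y = a.alloc("attacker".into());
    assert_ne!(x, y);
    println!("ok");
}
```

This also shows why practitioners call such mitigations probabilistic: the dangling reference still exists; the quarantine only makes it harder to exploit, which is the residual risk the second camp points to.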

Memory-safe C, CHERI, and Fil-C

  • Multiple mentions of CHERI and hardware tagging (plus SPARC ADI, MTE): seen as promising but niche, hardware-dependent, and slow to deploy.
  • Large subthread on Fil-C: a modified Clang/LLVM aiming for full memory safety plus high C/C++ compatibility via capabilities/GC.
    • Advocates: Fil-C can be incrementally adopted, catches more bugs than AddressSanitizer, and is already competitive with or faster than many safe languages.
    • Critics: current 1.5–4x slowdowns, complexity, and similarity to many previous “safe C” projects that never gained traction. Questions about integer–pointer roundtrips, type confusion, and long-term performance.
  • Consensus: retrofitting full safety onto C is technically possible but hard to deploy widely; toolchain integration and ecosystem inertia are major barriers.
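The common idea behind CHERI capabilities and software schemes in Fil-C’s family can be sketched conceptually (this is an illustration of bounds-carrying pointers, not how either system is actually implemented): every pointer carries the bounds of its allocation, pointer arithmetic preserves those bounds, and every access is checked against them.

```rust
// Conceptual "capability" pointer: the allocation's bounds travel with
// the pointer, and a load outside them fails instead of corrupting
// memory. Real capability systems enforce this in hardware (CHERI) or
// via the compiler and runtime (Fil-C-style schemes).
struct Cap<'a> {
    mem: &'a [u8], // the allocation this capability is derived from
    offset: usize, // current position within it
}

impl<'a> Cap<'a> {
    fn add(&self, n: usize) -> Cap<'a> {
        // Arithmetic keeps the bounds; it can never widen them.
        Cap { mem: self.mem, offset: self.offset + n }
    }
    fn load(&self) -> Result<u8, &'static str> {
        self.mem.get(self.offset).copied().ok_or("capability bounds violation")
    }
}

fn main() {
    let heap = [10u8, 20, 30, 40];
    let p = Cap { mem: &heap, offset: 0 };
    assert_eq!(p.add(3).load(), Ok(40));
    assert_eq!(p.add(4).load(), Err("capability bounds violation"));
    println!("ok");
}
```

The fat-pointer representation is also where the critics’ concerns live: extra metadata and per-access checks are the source of the reported slowdowns, and integer–pointer roundtrips are hard precisely because an integer cannot carry the bounds.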

Standards, regulation, and incentives

  • Some see market forces as insufficient—users don’t care about implementation details, and unsafe C “works well enough.” Hence the call for government or industry standards with graded assurance levels, akin to SLSA or energy ratings.
  • Others are wary: past attempts (e.g., Ada mandates) were limited; broad regulation on memory management might “strangle” the industry or become Rust advocacy by other means.
  • Regulated domains (safety-critical) already achieve high memory safety via strict processes, at high cost and reduced flexibility (e.g., banning recursion and dynamic memory allocation).
  • Several note misaligned incentives: careful C programming and long-lived stable code are not rewarded; churn, shipping fast, and hype are. Standards won’t fix that alone.

Input sanitization vs memory safety

  • One thread argues many classic vulns are fundamentally input-sanitization failures (buffer sizes, format strings, SQLi, XSS, path traversal) and laments that sanitization is less “sexy” than memory safety.
  • Counterpoints:
    • Modern safe APIs (prepared statements, HTML builders) work better than ad-hoc sanitization; the analogy is choosing a memory-safe language over raw pointers.
    • Sanitization doesn’t address many memory bugs (UAF, races, type confusion) and often fails when data is reused in new contexts.
    • Proper design is about separating code and data, canonicalizing formats, and making invalid states unrepresentable, not just “filter all input.”
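The “separate code and data” point can be made concrete with a prepared-statement-style API, sketched here with invented Rust types: the SQL template and the user-supplied parameters live in different fields, so splicing untrusted input into the query text is simply not expressible.

```rust
// Code/data separation in the style of prepared statements: the
// template is fixed program text, parameters are bound separately and
// never concatenated into it. Type and method names are hypothetical.
struct Query {
    template: &'static str, // code: fixed at compile time (a simplification)
    params: Vec<String>,    // data: bound, never spliced into the template
}

impl Query {
    fn new(template: &'static str) -> Self {
        Query { template, params: Vec::new() }
    }
    fn bind(mut self, value: &str) -> Self {
        self.params.push(value.to_string());
        self
    }
}

fn main() {
    let q = Query::new("SELECT * FROM users WHERE name = ?")
        .bind("Robert'); DROP TABLE users;--");
    // The hostile input stays inert data; the template is untouched.
    assert_eq!(q.template, "SELECT * FROM users WHERE name = ?");
    assert_eq!(q.params.len(), 1);
    println!("ok");
}
```

Unlike a sanitizer, this design has nothing to get wrong per input: there is no filter to bypass because the dangerous composition never happens.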

Data tagging, ECC, and other ideas

  • Some foresee broader “tagged data” systems (language-level or hardware) to prevent leaking secrets or credentials, inspired by Perl tainting, Rails/Elixir HTML safety, and SPARC ADI.
  • ECC RAM is raised but rejected as orthogonal: it mitigates physical bit flips, not software memory misuse.
  • Broader point: memory safety is one aspect of security; proposals include graded memory-safety metrics and combining language, hardware, and architectural practices (e.g., segregated PII stores).
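The language-level “tagged data” idea, in the spirit of Perl’s taint mode, can be sketched with a newtype wrapper (all names invented for illustration): untrusted input arrives wrapped, the wrapper exposes no direct accessor, and the only way to the inner value is an explicit sanitization step.

```rust
// Language-level taint tracking sketch: Tainted<T> has no getter, so
// the type system forces every use of untrusted input through an
// explicit, caller-chosen sanitizer.
struct Tainted<T>(T);

impl Tainted<String> {
    fn new(raw: String) -> Self {
        Tainted(raw)
    }
    // The only way out: a sanitizer that may reject the value.
    fn sanitize<F: Fn(&str) -> Option<String>>(self, f: F) -> Option<String> {
        f(&self.0)
    }
}

// Example policy: accept only ASCII-alphanumeric strings.
fn alnum_only(s: &str) -> Option<String> {
    s.chars().all(|c| c.is_ascii_alphanumeric()).then(|| s.to_string())
}

fn main() {
    let ok = Tainted::new("alice42".to_string()).sanitize(alnum_only);
    let bad = Tainted::new("alice; rm -rf /".to_string()).sanitize(alnum_only);
    assert_eq!(ok, Some("alice42".to_string()));
    assert_eq!(bad, None);
    println!("ok");
}
```

Rails/Elixir HTML safety works the same way in reverse: a `SafeString`-style type marks already-escaped output, and only escaping functions may produce it.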