Claude is an Electron App because we've lost native
Why Electron for Claude and many apps
- Many see Electron as the pragmatic choice: one codebase for web + desktop (Windows/macOS/Linux), faster iteration, easier hiring (JS/React skills are common).
- It makes Linux support and “it just works on lots of machines” feasible, which some view as a major win compared to fragile, per‑OS native stacks.
- Some argue businesses optimize for time‑to‑market and feature velocity, not peak efficiency or platform purity.
Critiques of Electron and Claude’s desktop app
- Frequent complaints: high RAM/CPU use, battery drain, jank in long conversations, slow startup, broken rendering, and poor performance compared to native editors or older, lighter-weight software.
- Claude’s desktop app and Claude Code are criticized as slow and buggy even on high‑end Macs.
- Several point out that many Electron apps are literally the website in a wrapper, with minimal desktop integration.
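The "website in a wrapper" pattern above can be sketched in a few lines. This is a minimal, illustrative Electron main process, not Claude's actual code; the URL and window dimensions are made up, and the guarded `require` is only there so the pure parts can be loaded outside the Electron runtime.

```javascript
// Minimal sketch of the "website in a wrapper" pattern critics describe:
// an Electron main process that does little more than open a hosted web app.
// Guarded require so this file also loads under plain Node (e.g. for testing
// the pure helpers); a real app would require 'electron' unconditionally.
let electron = null;
try { electron = require('electron'); } catch (_) { /* not running under Electron */ }

// Illustrative URL, not the real app's.
const APP_URL = 'https://example.com/app';

// Pure helper so the window settings are easy to inspect.
function windowOptions() {
  return { width: 1200, height: 800, webPreferences: { contextIsolation: true } };
}

if (electron && electron.app) {
  const { app, BrowserWindow } = electron;
  app.whenReady().then(() => {
    const win = new BrowserWindow(windowOptions());
    // The entire "desktop app": load the hosted web UI.
    win.loadURL(APP_URL);
  });
}

module.exports = { windowOptions, APP_URL };
```

With no preload script, menus, tray, notifications, or file handling, the result is effectively a branded browser tab — which is exactly the minimal-integration complaint.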
Debate: Native vs Web / Electron
- Pro‑native views: better performance, lower memory use, smoother UI, tighter OS integration, and more consistent platform UX. Examples cited include CAD, 3D, and video tools, and editors like Sublime Text and Zed.
- Anti‑native or skeptical views: native toolkits are fragmented (Win32/WPF/UWP/WinUI; AppKit/UIKit/SwiftUI; GTK/Qt), often unstable or deprecated, and require multiple teams.
- Some argue OS vendors’ API churn (especially on Windows, somewhat on macOS) makes deep native investment risky. Others say this is overstated and native remains solid.
Cross‑platform alternatives
- Tauri, Wails, Qt, Avalonia, MAUI, Flutter, Jetpack Compose, GPUI, and Java/Swing are all mentioned as options.
- Tauri gets praise for small binaries and Rust integration but is reported to have rough edges (testing, macOS sandboxing, Wayland issues).
- Qt draws mixed responses: powerful and native‑ish, but licensing costs and the inability to share code with a web frontend are concerns.
Role of AI/LLMs in development
- Some suggest LLMs could and should generate efficient native apps, undermining “we don’t have time” arguments.
- Others counter that current LLMs still need extensive guidance; using them doesn’t remove economic tradeoffs.
- There’s a side discussion on whether AI should emit human‑readable code for safety, legibility, and human oversight.
User experience, performance, and hardware
- Disagreement over whether Electron is “fast enough” on modern hardware; critics highlight that many users don’t have high‑end machines and RAM is expensive.
- Several emphasize that the real root cause is lack of care and incentives around performance, not the specific stack: “you can build slop with any stack.”