Why is Claude an Electron app?
Electron vs. Native App Debate
- Many argue that if “coding is (largely) solved,” Claude’s flagship app should showcase this via fast, polished native clients (Win32/SwiftUI/GTK/Qt) instead of an Electron wrapper.
- Others respond that cross‑platform speed and feature parity still matter more than technical purity; Electron is a rational tradeoff when one codebase must cover both web and desktop.
- Several note you don’t have to ship native on all platforms: one strong native macOS client plus web/CLI for others could be better than a mediocre Electron app everywhere.
Anthropic’s Stated Rationale
- Members of the Claude Code team say:
  - Their engineers already know Electron/web tech and co‑maintain Electron.
  - Shared code guarantees a consistent look and feel between web and desktop.
  - Claude is particularly strong at web‑stack coding; the app also includes Rust/Swift/Go where appropriate.
- They frame it as a pragmatic tradeoff, not an ideological commitment, and say the stack could change later.
App Quality, UX, and Performance
- Many users describe the Claude desktop app as slow, janky, and resource‑hungry, and find it inferior to just using the web UI or CLI/TUI; some uninstalled it.
- Others push back: Electron isn’t inherently bad (citing VS Code, Obsidian); the issue is Anthropic’s implementation and performance engineering.
- Complaints also cover missing/buggy Linux support, lack of multi-window support, and awkward login flows.
“Code Is Free” / “Coding Is Solved” Skepticism
- Commenters highlight the gap between marketing (“coding is largely solved,” AI can rewrite compilers) and reality:
  - Claude Code itself is seen as buggy, with a large public issue backlog.
  - Teams using Claude heavily report systems “as buggy as ever.”
  - Reviewing, testing, design, integration, and UX still dominate effort; code generation is only one piece.
- Several stress that AI is much better at mainstream web/JS stacks than at diverse native toolkits, which biases stack choices and reinforces Electron/web dominance.
Long‑Term Concerns About AI‑Written Code
- Worries center on:
  - Mountains of code no human truly understands, making maintenance and on‑call debugging harder.
  - Developers losing hands‑on coding skill and mental models as they outsource more to agents.
- Others counter that careful use (strong tests, human review, good architecture) can make AI a huge productivity boost without giving up control.