Zed now predicts your next edit with Zeta, our new open model

Local vs Remote, Hardware, and Privacy

  • Many want Zeta to run fully locally; a 7B model is seen as feasible even on modest GPUs and Apple Silicon, and GGUF quantizations already exist.
  • Today, Zed’s integration calls a remote endpoint (Baseten). There is an environment variable (ZED_PREDICT_EDITS_URL) that can redirect requests, and some users are already proxying to local models via llama.cpp/Ollama.
  • Several commenters are unwilling to send code (especially secrets/.env files) to third parties, or are prohibited from doing so. Zed’s edit prediction is opt‑in — off by default until you sign in and enable it — and can be disabled per file via glob patterns.
  • Others note that cloud latency is often outweighed by faster GPUs; for them, local is about privacy/offline, not speed.
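The per‑file opt‑out looks roughly like the following settings fragment — a sketch, assuming an `edit_predictions.disabled_globs` key (verify the exact key names against your Zed version's settings documentation):

```json
// settings.json (sketch; key names are an assumption)
{
  "edit_predictions": {
    "disabled_globs": ["**/.env*", "**/secrets/**"]
  }
}
```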
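The redirect mentioned above can be sketched in a shell session. The URL below is illustrative (11434 is Ollama's default port; the path is an assumption) — match it to whatever your local llama.cpp/Ollama proxy actually serves:

```shell
# Point Zed's edit-prediction requests at a local server instead of Baseten.
# Adjust host, port, and path to match your local proxy.
export ZED_PREDICT_EDITS_URL="http://127.0.0.1:11434/predict_edits"
echo "$ZED_PREDICT_EDITS_URL"

# Launch Zed from this shell so it inherits the variable:
# zed .
```

Note that the local endpoint must speak the same request/response shape Zed expects from its hosted endpoint, which is what the llama.cpp/Ollama proxies people have built are translating.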

UX, Keybindings, and Workflow Friction

  • The biggest recurring gripe across tools (Copilot, Zed, and others) is using Tab/Space/Enter to accept completions, which collides with indentation and ordinary editing.
  • Zed’s approach: when the cursor is in leading whitespace, or when both an LSP completion and an edit prediction are present, acceptance moves to Alt‑Tab (Alt‑L on Linux/Windows) to avoid conflicts; this is configurable.
  • Some users dislike any inline predictions, especially in comments, and disable them there. Others find full‑line completions helpful but only if they are nearly always correct; otherwise reviewing/fixing is slower than typing.
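The "configurable" part means the acceptance chord can be remapped in Zed's keymap. A sketch, assuming the action name `editor::AcceptEditPrediction` and the `edit_prediction` context (both have changed across Zed versions, so check your release's keymap reference):

```json
// keymap.json (sketch; action and context names are assumptions)
[
  {
    "context": "Editor && edit_prediction",
    "bindings": {
      "alt-enter": "editor::AcceptEditPrediction"
    }
  }
]
```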

Model, Training, and Technical Details

  • Zeta is a LoRA fine‑tune of a Qwen2.5‑Coder model; training used a small, high‑quality dataset that started with ~50 synthetic examples generated with another LLM and grew to more than 400 examples drawn from internal usage.
  • Commenters highlight how little data and money are needed to produce a useful fine‑tune, compared to training a base model from scratch.
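The economics commenters point to follow from LoRA's parameter math: the base weights stay frozen, and only a low‑rank pair of matrices is trained. A toy numpy sketch (the dimensions are illustrative, not Qwen2.5‑Coder's actual shapes):

```python
import numpy as np

# LoRA in miniature: instead of updating a full weight matrix W (d x k),
# train only a low-rank pair B (d x r) and A (r x k) and add their product.
d, k, r = 1024, 1024, 8      # r is the LoRA rank
alpha = 16.0                 # scaling factor for the adapter update

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen base weights
A = rng.standard_normal((r, k)) * 0.01   # trainable
B = np.zeros((d, r))                     # trainable; zero-init so the
                                         # adapter starts as a no-op

W_eff = W + (alpha / r) * (B @ A)        # effective weights at inference

full_params = d * k
lora_params = d * r + r * k
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,} "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

Training ~1.6% of the parameters per adapted matrix is what makes a few hundred curated examples viable; the zero‑initialized `B` (standard LoRA practice) also means training starts from the unmodified base model rather than a perturbed one.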

Business Model and Pricing Concerns

  • Zeta “won’t be free forever,” which triggers pushback from users who don’t want to grow dependent on it before knowing the price.
  • Others are relaxed: try it now, pay later if it’s worth it, and fall back to self‑hosting since the model and dataset are open.
  • There is skepticism about Baseten’s per‑minute pricing and broader questions about how Zed intends to fund itself.

Core Editor Features and Stability

  • Some worry AI work is overtaking the basics: a Windows build, a debugger, a diff tool, robust LSP configuration, large‑file handling, font rendering (especially on low‑DPI displays), and mouse‑cursor hiding are all cited as more important.
  • Others report Zed as very fast and already using it daily, but keep VS Code/JetBrains around for debugging and certain workflows.

Broader Sentiment on AI in Editors

  • Opinions range from “AI autocomplete is transformative” to “constant prediction is a distracting nuisance.”
  • Several note organizational pressure to use AI for perceived productivity gains, even when individuals don’t want it.