Tailscale Peer Relays
What Peer Relays Are Solving
- Positioned as a replacement/alternative to Tailscale’s DERP relays when NAT traversal fails.
- Let you designate one or more of your own nodes as traffic relays, so that two hard-to-connect peers can both connect through that relay instead of Tailscale’s shared DERP servers (see the sketch after this list).
- Main benefit: potentially much higher throughput and lower latency, since you control location and bandwidth.
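A minimal sketch of what designating a relay looks like, based on the peer relays announcement (the port number is an arbitrary choice; check current docs for the exact flag behavior):

```sh
# On the node that should act as a peer relay; it needs a publicly
# reachable UDP port (see the firewall notes further down).
tailscale set --relay-server-port=40000
```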
Tailnets, Sharing, and src/dst Semantics
- Initial confusion around how this works with shared devices across tailnets and the src/dst terminology in policies.
- Clarification: the relay and both peers must be in the same tailnet, but relay bindings are visible across tailnet sharing, so sharing scenarios should “just work.”
- Typical pattern: src = stable host behind strict NAT; other devices (e.g. laptops) reach it via the relay.
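For orientation, permission to use peer relays lives in the tailnet policy file as a grant. A hedged sketch allowing any pair of nodes to relay; the tailscale.com/cap/relay capability name and its payload shape are my best recollection of the docs, not confirmed syntax:

```jsonc
// HuJSON policy sketch — verify the capability name and payload
// against the current peer relay docs before copying.
{
  "grants": [
    {
      "src": ["*"], // who may connect through a peer relay
      "dst": ["*"], // what they may reach through it
      "app": {
        "tailscale.com/cap/relay": [{}]
      }
    }
  ]
}
```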
Performance and Throughput
- Several users report DERP as slow and used more often than they’d like. Peer relays seen as a way to avoid DERP congestion.
- Some are trying to push multi‑Gbps site‑to‑site over WireGuard/Tailscale and hit CPU or other bottlenecks; suggestions focus on basic profiling rather than specific tuning tips.
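In that spirit, a rough first-pass measurement loop (assumes iperf3 on both ends; 100.x.y.z stands in for the peer’s Tailscale IP):

```sh
# On the receiving peer:
iperf3 -s

# On the sender: 4 parallel streams for 30 seconds.
iperf3 -c 100.x.y.z -P 4 -t 30

# While it runs, check whether tailscaled pins a core (a common
# bottleneck for userspace WireGuard at multi-Gbps rates).
top -H -p "$(pidof tailscaled)"
```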
Local / Offline Connectivity & Control Plane
- Confusion about whether this enables offline, LAN-only operation; answer: local connections already work without the control plane, as long as the peers are “direct” rather than going via relays.
- Headscale is mentioned as a way to keep local connectivity working when Tailscale’s control plane or internet access is down (see the sketch after this list).
- A recent control-plane outage is cited as motivation to self-host or improve resilience; Tailscale staff acknowledge this and say they’re working on better outage tolerance.
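For the Headscale route, pointing a client at a self-hosted control server is a one-flag change (headscale.example.com is a placeholder for your own deployment):

```sh
# Register this node against a self-hosted Headscale instance instead
# of Tailscale's hosted control plane.
tailscale up --login-server=https://headscale.example.com
```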
Comparisons to Other Mesh VPNs
- Long thread contrasting Tailscale with tinc, WireGuard alone, Nebula, innernet, ZeroTier, Netbird.
- Points raised:
  - tinc: a true mesh with relaying everywhere and no central server, but aging, with reported performance and reliability issues.
  - WireGuard alone: fast and simple, but requires manual peer config and offers little NAT traversal without helpers (see the config sketch after this list).
  - Nebula/innernet/ZeroTier/Netbird: varying degrees of built‑in discovery, relays, and self‑hostability; often lack “MagicDNS‑like” convenience.
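To make the “manual peer config” point concrete: with plain WireGuard, every node carries an entry like the one below for every peer, and the endpoint must already be reachable, which is exactly the bookkeeping Tailscale automates. Keys, addresses, and hostnames here are placeholders:

```ini
# /etc/wireguard/wg0.conf on one node (sketch; values are placeholders)
[Interface]
PrivateKey = <this node's private key>
Address    = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey           = <peer's public key>
AllowedIPs          = 10.0.0.2/32
Endpoint            = peer.example.com:51820  # must be directly reachable: no NAT traversal help
PersistentKeepalive = 25                      # keeps NAT mappings open from behind NAT
```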
Pricing, Centralization, and Trust
- Some pushback on “two relays free, talk to us for more,” arguing users are donating their own infra and also reducing Tailscale’s bandwidth bill.
- Tailscale staff say they doubt they’ll charge, but cap it now to avoid later “rug pulls.”
- Broader skepticism about relying on a for‑profit central service vs non‑profits or fully self‑hosted solutions; counter‑argument is that forking/matching Tailscale is non‑trivial.
Implementation Details & Limitations
- Relay uses a user‑chosen UDP port on the public IP; typically requires opening/forwarding that port on a firewall.
- Some confusion about whether to whitelist by tailnet IP range or open the port to the internet; consensus: the port must be reachable from peers’ public IPs, but you can restrict sources at the firewall (see the sketch after this list).
- Not currently supported on iOS/tvOS due to NetworkExtension size limits.
- Forcing relay usage: the suggested hack is to also designate the relay as an exit node (shown in the same sketch below).
- Browser support is limited because this is native UDP; discussion of possible future WebTransport/WebRTC‑based relay paths.
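Two of the points above in command form: the firewall side of exposing the relay port (ufw shown; 40000/udp matches whatever --relay-server-port was set to), and the exit-node hack for forcing traffic through the relay. Both are sketches; adapt hosts and ranges to your setup:

```sh
# Open the relay's UDP port to the internet...
sudo ufw allow proto udp from any to any port 40000 comment 'tailscale peer relay'

# ...or restrict it to known peer public ranges instead:
sudo ufw allow proto udp from 203.0.113.0/24 to any port 40000

# The "force the relay" hack: advertise the relay node as an exit node
# (needs admin approval), then route a client through it.
tailscale set --advertise-exit-node    # on the relay node
tailscale set --exit-node=relay-host   # on the client; hostname is a placeholder
```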
Automatic Multi-hop and UX Wishes
- Some would like automatic multi-hop routing via arbitrary peers in a tailnet to “heal” the mesh; others worry this hides failures and introduces privacy/consent questions about relaying others’ traffic.
- Misc requests: better clarity on src/dst in docs, easier ways to tell DERP vs direct vs relay apart (e.g. via tailscale ping; see the sketch below), and migration paths to passkey-based auth without big-tech IdPs.
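On the detection wish, the current CLI already exposes the path per peer; the output shapes described below are illustrative:

```sh
# Each pong reports its path: "via DERP(...)" while relayed, then
# "via <ip>:<port>" once a direct connection is negotiated.
tailscale ping myhost

# Per-peer summary including the connection type (direct endpoint,
# relay, idle, ...).
tailscale status
```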