Tailscale just changed the game for DevOps engineers and system administrators running distributed, self-hosted infrastructure: Peer Relays are now generally available, providing a high-throughput, tailnet-native alternative to Tailscale’s managed DERP relay network. If you’ve ever struggled with flaky peer-to-peer connectivity across firewalled networks, cloud VMs, or edge devices, this release gives you new control, performance, and observability for your secure mesh—without exposing a single port to the public internet.
Key Takeaways:
- Peer Relays let you use your own Tailscale nodes as high-throughput, private relays—no need to rely exclusively on DERP
- Major performance gains: optimized path selection, vertical scaling, and multi-socket UDP boost throughput for large or busy tailnets
- Static endpoints make Peer Relays practical in restrictive cloud and multi-NAT environments
- Security and visibility are first-class: all traffic remains encrypted, with no custom ports or firewall holes required
- DevOps teams can now fine-tune mesh connectivity across self-hosted, cloud, and remote endpoints—see production-ready setup examples below
Why Peer Relays Matter for Production Networks
In a perfect world, Tailscale links your nodes peer-to-peer, using WireGuard under the hood for low-latency, encrypted connections. But as Tailscale’s announcement acknowledges, real-world networks are rarely this cooperative. Firewalls, carrier-grade NAT, cloud security groups, and overlapping address ranges often force traffic through Tailscale’s global DERP relay infrastructure. While DERP is reliable, it’s geographically distributed and multi-tenant, which can impact throughput and introduce unpredictable hops for some workloads.
Peer Relays address this head-on by letting you run your own relay on any Tailscale-enabled node. Your tailnet traffic uses these relays when direct connections fail, keeping all data inside your private network and under your control. This is especially impactful for:
- Cloud VMs behind restrictive security groups (e.g., AWS, GCP, Azure) where inbound UDP is blocked
- Home labs and edge devices where ISP NAT or CGNAT break direct connections
- Hybrid environments mixing on-prem, cloud, and remote endpoints with varying network constraints
Unlike custom WireGuard relay setups, which require manual configuration, public IPs, and firewall tuning, Peer Relays inherit all of Tailscale's simplicity. You don't expose new ports or weaken your security posture. All traffic remains end-to-end encrypted, and relay nodes are authenticated members of your tailnet.
For context on how this compares to other remote access patterns, see our recent coverage of legacy hardware remote access, where NAT traversal and secure connectivity were recurring pain points.
Deploying Peer Relays: Step-by-Step
Setting up Peer Relays is intentionally simple—no custom firewall rules, no need to expose public services. Below is a practical, production-ready workflow for enabling Peer Relays on a Tailscale node (using a TrueNAS appliance as an example, but the pattern applies to any Linux server or VM):
1. Prerequisites
- TrueNAS 24.10+ (or any supported Linux OS)
- Active Tailscale account with an Auth Key generated (TrueNAS docs)
- Admin access to the node you’ll promote as a Peer Relay
2. Install Tailscale and Authenticate
# On your relay candidate (TrueNAS or Linux VM):
sudo tailscale up --authkey <YOUR_AUTH_KEY>
This command brings the node into your tailnet using your Auth Key. Authentication can be automated for cluster deployments.
3. Enable Peer Relay Functionality
# Promote this node to act as a Peer Relay on a UDP port of your choice:
sudo tailscale set --relay-server-port=40000
--relay-server-port marks the node as willing to relay traffic for peers that can't establish direct connections; which peers may actually use it is controlled in your tailnet policy file. The CLI surface has evolved since the beta, so confirm the flag name against tailscale set --help on your release.
4. Verify Relay Status and Metrics
# On the relay node, inspect the full status output:
tailscale status --json
The JSON includes per-peer connection details such as the current direct address, the relay in use, and traffic counters. The exact field names vary by client version and are not a stable API, so explore the output with jq before scripting against it. For ongoing monitoring, feed these stats into your existing logging stack.
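As a sketch of scripting against that output, the snippet below classifies each peer as direct or relayed. The sample JSON and the `Peer`, `CurAddr`, and `Relay` field names reflect recent client releases, but since the schema is not guaranteed stable, treat them as assumptions to verify on your version.

```python
import json

# A trimmed sample of `tailscale status --json` output. Field names follow
# recent client releases; verify against your version before relying on them.
SAMPLE = """
{
  "Peer": {
    "key1": {"HostName": "web-1", "CurAddr": "203.0.113.7:41641", "Relay": "", "RxBytes": 1024, "TxBytes": 2048},
    "key2": {"HostName": "db-1",  "CurAddr": "", "Relay": "nyc", "RxBytes": 512, "TxBytes": 128}
  }
}
"""


def connection_summary(status_json: str) -> dict[str, str]:
    """Classify each peer as 'direct' or 'relayed' from status output."""
    status = json.loads(status_json)
    summary = {}
    for peer in status.get("Peer", {}).values():
        # A populated CurAddr means a direct path; otherwise traffic is being
        # forwarded through a relay (a Peer Relay or DERP).
        if peer.get("CurAddr"):
            kind = "direct"
        else:
            kind = f"relayed via {peer.get('Relay') or 'unknown'}"
        summary[peer["HostName"]] = kind
    return summary


print(connection_summary(SAMPLE))
```

In production you would pipe `tailscale status --json` into this instead of the embedded sample, and alert when peers you expect to be direct start showing up as relayed.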
5. Control Which Peers Use the Relay (Optional)
Relay selection is automatic: clients probe the Peer Relays they are allowed to reach and pick the best available path. To constrain that choice (for proximity, bandwidth, or compliance), scope relay access in your tailnet policy file rather than on individual clients, so the policy travels with the tailnet instead of living in per-machine flags.
Summary Table: Peer Relay Setup Steps
| Step | Command / Action | Purpose |
|---|---|---|
| Install Tailscale | sudo tailscale up --authkey <KEY> | Join node to tailnet |
| Enable relay | sudo tailscale set --relay-server-port=40000 | Advertise node as Peer Relay |
| Check status | tailscale status --json | Monitor relay usage |
| Scope relay use (optional) | Edit tailnet policy file | Control which peers may relay |
For TrueNAS-specific setup details, refer to the official TrueNAS remote access docs.
Performance, Control, and Observability: What’s Changed
The general availability release of Peer Relays brings tangible improvements beyond simple NAT traversal. According to Tailscale’s engineering team, several key changes are now production-grade:
- Vertical scaling: Relays now handle more clients and higher throughput, thanks to lock contention fixes and multi-UDP socket support
- Optimal path selection: Clients prefer the best available relay interface and address family, boosting reliability even in complex topologies
- Static endpoints: You can run relays with static, known addresses—ideal for cloud deployments using reserved internal IPs or hostnames
- Improved metrics: Relay nodes expose usage, connection quality, and traffic stats for integration with observability platforms
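If your observability stack scrapes Prometheus, the relay stats above can be re-exported in the text exposition format so an existing scrape job picks them up. The metric names in this sketch are illustrative, not an official Tailscale metric set:

```python
def to_prometheus(stats: dict[str, float], prefix: str = "peer_relay") -> str:
    """Render relay counters in Prometheus text exposition format.

    Metric names are illustrative; wire `stats` to whatever your relay
    monitoring actually collects (sessions, bytes, connection quality).
    """
    lines = []
    for name, value in sorted(stats.items()):
        metric = f"{prefix}_{name}"
        lines.append(f"# TYPE {metric} gauge")
        lines.append(f"{metric} {value}")
    return "\n".join(lines) + "\n"


print(to_prometheus({"active_sessions": 14, "tx_bytes": 9800000000}))
```

Serving this string from a tiny HTTP endpoint on the relay node is enough for Prometheus to scrape it alongside your other infrastructure metrics.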
In practice, this means bulk data transfers, remote snapshots, or database replication jobs—once limited by DERP bottlenecks—can now achieve near-mesh performance even in locked-down environments. This has particular impact for storage appliances (TrueNAS, Synology), distributed CI/CD runners, or remote developer desktops that must operate behind restrictive firewalls.
For DevOps teams used to wrestling with IPsec tunnels, port forwarding, or DIY WireGuard relays, this is a dramatic reduction in operational risk and effort. Security is uncompromised: all relay traffic remains end-to-end encrypted, and relay nodes are authenticated, preventing unauthorized traffic relay.
This direction reflects a broader shift toward user-controlled, mesh-native relaying—mirroring trends in other infrastructure layers we highlighted in our analysis of diagram and data portability risks.
Advanced Patterns and Edge Cases
Peer Relays are not just a fallback—they unlock new design patterns for production, especially in mixed or “hostile” network scenarios:
- Dedicated relay pool: Run a small fleet of always-on Peer Relays (e.g., in different data centers or cloud regions) to guarantee high availability and geographic locality for your tailnet
- Compliance zoning: Ensure that data never leaves a country or region by pinning relays to in-jurisdiction nodes—critical for legal or regulatory requirements
- Cost control: Use Peer Relays to avoid egress charges from public DERP nodes when moving data within the same cloud provider
- Multi-hop mesh: In extremely restricted environments, chain relays to traverse multiple NAT layers while maintaining observability and audit trails
These approaches are fully compatible with Tailscale’s ACLs, device keys, and audit logging, preserving the security model even as you add relay nodes. All relay configuration and monitoring can be automated using standard configuration management tools or Tailscale’s API.
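For the API-driven automation mentioned above, the Tailscale API exposes a device list that can seed relay inventory tooling. A minimal sketch using only the standard library; the endpoint shown is the v2 devices list, while error handling and pagination are omitted:

```python
import json
import urllib.request

API = "https://api.tailscale.com/api/v2"


def devices_request(tailnet: str, api_key: str) -> urllib.request.Request:
    """Build the request to list devices in a tailnet via the Tailscale API."""
    return urllib.request.Request(
        f"{API}/tailnet/{tailnet}/devices",
        headers={"Authorization": f"Bearer {api_key}"},
    )


def list_devices(tailnet: str, api_key: str) -> list[dict]:
    # Network call: run with a real tailnet name and API access token.
    with urllib.request.urlopen(devices_request(tailnet, api_key)) as resp:
        return json.load(resp)["devices"]
```

From that device list you can cross-check which nodes are meant to be relays, verify their client versions, and drive configuration management accordingly.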
Peer Relays can coexist with DERP, allowing seamless failover. If a Peer Relay becomes unavailable, Tailscale falls back to DERP—no manual intervention required.
Common Pitfalls and Pro Tips
Peer Relays are robust, but several real-world issues can trip up even experienced operators. Here’s what to watch for:
- Insufficient relay capacity: A single relay node can saturate with heavy traffic. Monitor usage and scale out relay nodes as needed for high-traffic tailnets.
- Relay node availability: If your only relay goes offline (reboot, patching, network hiccup), clients will revert to DERP—potentially reducing throughput or increasing latency.
- Cloud firewall rules: Even as a relay, the node must be able to make outbound UDP connections. Some cloud providers block UDP egress by default—double-check your security groups.
- Version compatibility: Ensure all nodes (clients and relays) are running a Tailscale release supporting Peer Relays (24.10+ for TrueNAS; check official release notes for your platform).
- Metrics integration: Don’t overlook observability—relay stats should be integrated with your central monitoring. Failure to do so can mask relay saturation or downtime until users complain.
- Auth key scope: Use machine-scoped or ephemeral Auth Keys for automation, not user-scoped keys, to avoid accidental privilege escalation.
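The capacity and observability pitfalls above lend themselves to a simple alerting rule: flag any relay nearing CPU or link saturation before users notice. A sketch with illustrative thresholds and stat names (wire them to whatever your monitoring actually collects):

```python
def saturation_alerts(
    relays: dict[str, dict],
    cpu_limit_pct: float = 80.0,
    util_limit_pct: float = 75.0,
) -> list[str]:
    """Flag relays nearing capacity so you can scale out in time.

    `relays` maps relay name -> stats dict with cpu_pct, throughput_mbps,
    and capacity_mbps. Thresholds and field names are illustrative.
    """
    alerts = []
    for name, s in relays.items():
        if s["cpu_pct"] >= cpu_limit_pct:
            alerts.append(f"{name}: CPU at {s['cpu_pct']:.0f}%")
        if s["throughput_mbps"] / s["capacity_mbps"] * 100 >= util_limit_pct:
            alerts.append(f"{name}: link utilization high")
    return alerts


print(saturation_alerts({
    "relay-fra": {"cpu_pct": 91.0, "throughput_mbps": 700, "capacity_mbps": 1000},
    "relay-iad": {"cpu_pct": 22.0, "throughput_mbps": 120, "capacity_mbps": 1000},
}))
```

Running a check like this on each scrape interval turns "users complain" into "a relay gets added to the pool" well before throughput degrades.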
For a deeper dive on secure remote access patterns and lessons learned from legacy system management, see our legacy hardware connectivity analysis.
Conclusion & Next Steps
Tailscale Peer Relays move secure, high-performance mesh networking from “nice-to-have” to production-ready for hybrid, cloud, and edge deployments. By letting your own nodes serve as encrypted, authenticated relays—without extra firewall exposure—you get unprecedented control and flexibility.
Start by enabling Peer Relays on a low-latency, always-on node in your infrastructure. Monitor relay stats, scale out as needed, and layer in compliance zoning for regulated workloads. For more advanced scenarios, experiment with relay pools or multi-hop meshes, and integrate relay monitoring into your central observability stack.
For further reading, review the official Peer Relays GA announcement and TrueNAS VPN setup guide. If you’re evaluating other mesh or VPN options, compare with classic WireGuard relay patterns (WireGuard docs). And for more on long-term infrastructure, see our analysis of diagram persistence and lock-in.