Mr. Latte
Solving the SSH Routing Dilemma: Multiplexing Shared IPv4 Addresses Without a Host Header
TL;DR: Unlike HTTP, SSH sends no Host header, making it notoriously difficult to route multiple VMs through a single shared IPv4 address. To solve this, exe.dev built a custom proxy that combines the user’s SSH public key with a pooled destination IP to uniquely identify and route to the correct VM. This tuple-based approach avoids dedicating an IPv4 address to every machine while keeping the user experience seamless.
In the modern cloud era, IPv4 addresses are an increasingly scarce and expensive resource, forcing providers to share them across multiple services. Web traffic solves this IP-sharing problem elegantly: the client names its intended destination in the HTTP Host header, or in the SNI extension for TLS. SSH connections natively lack any such field. If you want to offer users a seamless, standard SSH experience without assigning a costly dedicated IPv4 address to every VM, you hit a hard technical wall. This protocol limitation usually forces developers to either rely on clunky non-standard ports or engineer highly creative proxying solutions.
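To make the contrast concrete, here is a minimal sketch of why name-based routing is trivial for HTTP: the very first bytes of a request carry the hostname, so a proxy can pick a backend with a dictionary lookup. The hostnames and backend addresses are hypothetical. An SSH client's opening bytes (its version banner) contain no equivalent destination name.

```python
def route_http(raw_request: bytes, vhosts: dict) -> str:
    """Return the backend for the Host header in a raw HTTP/1.1 request,
    or None if no Host header matches. This is the capability SSH lacks."""
    for line in raw_request.split(b"\r\n")[1:]:
        if not line:  # blank line ends the header block
            break
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            # Drop an optional :port suffix on the Host value.
            host = value.strip().decode().split(":")[0]
            return vhosts.get(host)
    return None

# Two "sites" sharing one IP, distinguished purely by the Host header.
vhosts = {"alice.example.com": "10.0.0.11:8080",
          "bob.example.com": "10.0.0.12:8080"}
request = b"GET / HTTP/1.1\r\nHost: bob.example.com\r\n\r\n"
print(route_http(request, vhosts))  # -> 10.0.0.12:8080
```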
Key Points
The core challenge is that an SSH proxy receiving a connection on a shared IP has no built-in way to know which backend VM the user is trying to reach. To get around this, the team at exe.dev designed an architecture built around a shared pool of public IPv4 addresses. Instead of assigning a globally unique IP to each VM, they assign an IP that is unique only relative to the user who owns it. When an incoming SSH connection hits the proxy, it evaluates the destination IP address alongside the SSH public key the user presents. By combining these two pieces of data into a unique tuple, the proxy can route the connection to the exact VM the user intended to reach.
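The routing decision described above can be sketched as a lookup keyed by the (destination IP, public-key fingerprint) pair. This is an illustrative reconstruction, not exe.dev's actual code; the fingerprint format, table layout, and all names below are assumptions.

```python
import base64
import hashlib

def fingerprint(pubkey_blob: bytes) -> str:
    """OpenSSH-style SHA256 fingerprint of a raw public-key blob."""
    digest = hashlib.sha256(pubkey_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Pooled IPs: the same address may appear for many users, as long as
# each (ip, key) pair maps to exactly one VM. Entries are hypothetical.
routes = {}

def register(ip: str, pubkey_blob: bytes, vm: str) -> None:
    """Provisioning-time step: claim an (ip, key) tuple for one VM."""
    key = (ip, fingerprint(pubkey_blob))
    if key in routes:
        raise ValueError(f"collision: {key} already routes to {routes[key]}")
    routes[key] = vm

def route(dest_ip: str, offered_key: bytes) -> str:
    """Connection-time step: pick the backend VM, or None if unknown."""
    return routes.get((dest_ip, fingerprint(offered_key)))

# Two users share the pooled IP 203.0.113.7 yet reach different VMs,
# because they present different SSH public keys.
register("203.0.113.7", b"alice-key", "vm-alice.internal:22")
register("203.0.113.7", b"bob-key", "vm-bob.internal:22")
print(route("203.0.113.7", b"bob-key"))  # -> vm-bob.internal:22
```

The collision check in `register` mirrors the orchestration burden the article notes later: the provisioning system, not the proxy, is responsible for keeping each tuple unique.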
Technical Insights
From an engineering standpoint, this is a pragmatic workaround to a protocol-level deficiency, but it comes with distinct architectural trade-offs. Conventional alternatives usually involve assigning unique, non-standard SSH ports to different VMs or forcing users to configure SSH jump hosts, both of which degrade the developer experience. By shifting the complexity to the backend proxy, the user gets a frictionless, standard SSH command out of the box. However, this approach requires bespoke infrastructure management: the provisioning system must carefully orchestrate per-user IP pool allocations to avoid collisions. Furthermore, the proxy must be able to read the original destination IP, which introduces significant networking friction in cloud environments where public IPs are heavily NATed into private VPC addresses.
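One common way to recover the original destination IP behind a NAT or load balancer is to have the edge prepend HAProxy's PROXY protocol header before the client's bytes. The article does not say this is exe.dev's mechanism, so treat the following v1 parser as a sketch of the general technique only.

```python
def parse_proxy_v1(data: bytes):
    """Parse a PROXY protocol v1 header; return (dest_ip, dest_port, rest).

    Wire format: b"PROXY TCP4 <src_ip> <dst_ip> <src_port> <dst_port>\r\n"
    followed by the unmodified client bytes (for SSH, the version banner).
    """
    header, sep, rest = data.partition(b"\r\n")
    if not sep or not header.startswith(b"PROXY "):
        raise ValueError("not a PROXY protocol v1 header")
    parts = header.decode("ascii").split(" ")
    # parts == ["PROXY", proto, src_ip, dst_ip, src_port, dst_port]
    if len(parts) != 6:
        raise ValueError("malformed PROXY protocol v1 header")
    return parts[3], int(parts[5]), rest

# The edge saw the client connect to pooled IP 203.0.113.7; it forwards
# that fact ahead of the untouched SSH stream. Addresses are examples.
wire = b"PROXY TCP4 198.51.100.9 203.0.113.7 49152 22\r\nSSH-2.0-OpenSSH_9.6\r\n"
dest_ip, dest_port, ssh_bytes = parse_proxy_v1(wire)
print(dest_ip, dest_port)  # -> 203.0.113.7 22
```

The proxy can then pair `dest_ip` with the public key observed during the SSH handshake to complete the routing tuple, even though the packet's own destination address was rewritten by NAT long before it arrived.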
Implications
For cloud providers and platform engineers, this strategy highlights a viable path to preserving IPv4 resources without compromising on user experience or resorting to IPv6-only environments. It demonstrates that identity-aware proxying—using authentication material like public keys as routing logic—can effectively solve lower-level networking limitations. While building a custom proxy might be overkill for a small homelab, larger organizations can adopt this tuple-based routing concept to drastically reduce their cloud provider bills.
As IPv4 scarcity continues to drive up infrastructure costs, we will likely see more of these clever protocol-bending proxies emerge in the wild. Will SSH eventually adopt a native routing extension similar to TLS SNI, or will identity-based proxying become the new standard for infrastructure access? It is a fascinating space to watch as the industry bridges the gap between legacy protocols and modern cloud economics.