I am running a bare metal Kubernetes cluster on k3s with Kube-VIP and Traefik. This works great for services that use SSL/TLS, since Server Name Indication (SNI) can be used to reverse proxy multiple services listening on the same port. Consequently, getting Traefik to route traffic for multiple web servers on ports 80 or 443 is not a problem at all. However, I am stuck trying to accomplish the same thing for services that use plain TCP or UDP without SSL/TLS, since that traffic carries no SNI.
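For reference, the TLS side is easy because Traefik can match on the hostname presented via SNI/Host. A rough sketch with Traefik's IngressRoute CRD (hostnames, service names, and the cert resolver are placeholders; older Traefik versions use the `traefik.containo.us/v1alpha1` apiVersion instead):

```yaml
# Sketch only: two HTTPS services sharing port 443, distinguished by SNI/Host.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: app-a
spec:
  entryPoints:
    - websecure                     # Traefik's :443 entry point
  routes:
    - match: Host(`app-a.example.com`)
      kind: Rule
      services:
        - name: app-a               # placeholder Service name
          port: 80
  tls:
    certResolver: letsencrypt       # assumes an ACME resolver is configured
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: app-b
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`app-b.example.com`)
      kind: Rule
      services:
        - name: app-b
          port: 80
  tls:
    certResolver: letsencrypt
```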
I tried to set up Forgejo, where clients expect to use commands like git clone git@my.forgejo.instance....
which would ultimately use SSH on port 22. Since SSH uses TCP and Traefik supports TCPRoutes, I should be able to route traffic to Forgejo's SSH entry point on port 22. However, I ran into an issue where the SSH service on the node would receive and process all traffic arriving at the node instead of letting Traefik receive and route it. I believe I should be able to either change the port that the node's SSH service listens on or restrict the IP address it listens on. That should allow Traefik to receive traffic on port 22 and route it to Forgejo's SSH entry point while still letting me SSH directly into the node.
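For anyone curious, this is roughly what I was attempting (a sketch, not a verified config; the `ssh` entry point name, the namespace, and the Forgejo Service name are my assumptions, and it presumes the node's own sshd has already been moved off port 22, e.g. via `Port 2222` or a restricted `ListenAddress` in /etc/ssh/sshd_config):

```yaml
# 1) Add an "ssh" entry point to k3s's bundled Traefik via HelmChartConfig.
#    (Older versions of the Traefik Helm chart use "expose: true" instead of
#    the map form below.)
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 2022          # port Traefik itself listens on inside the pod
        expose:
          default: true
        exposedPort: 22     # port exposed on Traefik's LoadBalancer Service
        protocol: TCP
---
# 2) Route everything arriving on that entry point to Forgejo's SSH service.
#    Plain TCP carries no SNI, so the only possible match is the catch-all
#    HostSNI(`*`) -- which is exactly why only one service can own port 22.
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: forgejo-ssh
  namespace: forgejo        # placeholder namespace
spec:
  entryPoints:
    - ssh
  routes:
    - match: HostSNI(`*`)
      services:
        - name: forgejo-ssh # assumed name of Forgejo's SSH Service
          port: 22
```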
However, even if I get that working correctly, I will run into another issue as soon as other services that typically run on port 22 are stood up. For example, I could not have Traefik reverse proxy both Forgejo's SSH entry point and an SFTP server's entry point on port 22, because without SNI details Traefik can only route all port 22 traffic to a single service.
The only viable solution I can find is to run one service's entry point on port 22 and put each of the other services' entry points on different ports. For instance, Forgejo's SSH entry point could stay on port 22 and the SFTP server's entry point could be port 2222 (mapped to the pod's port 22). This would require opening multiple additional ports on the firewall, and each client would need its configuration and/or commands modified to connect to the service's non-standard port.
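The port-mapping half of that is simple enough; something like this Service is what I mean (names, labels, and the app itself are placeholders):

```yaml
# Sketch of the non-standard-port workaround: expose the SFTP pod's port 22
# as port 2222 on the load balancer.
apiVersion: v1
kind: Service
metadata:
  name: sftp
spec:
  type: LoadBalancer
  selector:
    app: sftp               # placeholder label
  ports:
    - name: sftp-ssh
      port: 2222            # port clients have to be told about
      targetPort: 22        # port the pod actually listens on
      protocol: TCP
```

The catch is the client side: everyone would need `sftp -P 2222` / `ssh -p 2222` or an entry in their SSH config, which is exactly the per-client change I would like to avoid.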
Another solution I have seen is to use tools like stunnel to wrap the traffic in TLS (similar to how HTTPS works), but I believe this would likely lead to even more problems than using multiple ports, since every client would need to be compatible with the wrapper.
Is there some other solution that I am missing? Is there something I could do with virtual IP addresses, multiple load balancer IP addresses, etc.? Maybe I could route traffic on Load_Balancer#1_IP_Address:22 to Forgejo's SSH entry point and Load_Balancer#2_IP_Address:22 to the SFTP server's entry point?
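Something like this is what I have in mind (completely untested; the addresses are placeholders and the kube-vip annotation for pinning a specific IP is an assumption based on its docs, so the exact field may differ by version):

```yaml
# Sketch: two LoadBalancer Services, both on port 22, each pinned to its own
# virtual IP so the port never collides. This bypasses Traefik entirely for
# these services, which seems fine since there is nothing SNI-like to route on.
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.240"   # load balancer IP #1
spec:
  type: LoadBalancer
  selector:
    app: forgejo            # placeholder label
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: sftp-ssh
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.241"   # load balancer IP #2
spec:
  type: LoadBalancer
  selector:
    app: sftp               # placeholder label
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
```

DNS would then point my.forgejo.instance at the first address and the SFTP hostname at the second, and every client keeps using plain port 22.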
tl;dr: Is it possible to host multiple services that do not use SSL/TLS (i.e., cannot use SNI) on the same port in a single Kubernetes cluster without resorting to non-standard ports and port mapping?
Ah yes, I see. Because plain TCP has no built-in SNI, this is not really possible.
You could try IPv6: even within a single routable /64 prefix you can choose the host part of the address freely, so every service can get its own address. Also take a look at overlay VPN solutions like Netbird: they let you run multiple clients, which you could use to assign multiple IPv4 addresses to your server and then route them differently (you mentioned installing client software before anyway)…
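For the IPv6 route, something along these lines, assuming a dual-stack or IPv6-enabled cluster and load balancer (addresses, names, and labels are placeholders):

```yaml
# Rough sketch: request an IPv6 LoadBalancer address for each service so that
# every service gets its own address on port 22. With a whole /64 available,
# address exhaustion is not a concern.
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: forgejo            # placeholder label
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
```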
Finally, I’m not sure why you would inject Traefik into the networking chain here at all. In the end, a direct kernel-space connection is always faster than putting a user-space proxy in between.
I had not thought about using IPv6 for this. It’s definitely something I would need to research more, as I know it would expand my attack surface and may require an overhaul of the network (or at least a very thorough review).
I’m not sure I understand the concern about Traefik. I am using it as a reverse proxy and forcing HTTPS for all applicable services (which unfortunately does not apply to this particular situation). Honestly, I am a little confused about how the control plane, tls-san, gateway, load balancer, ingress, etc. all fit together. I may not actually be using Traefik as the load balancer and may instead have Kube-VIP acting as the LoadBalancer. I did not configure Kube-VIP in any particular way for load balancing, but I did configure Traefik with a few load-balancer-specific options. When I tried to set up Kube-VIP with additional IP addresses for load balancing, I could not get k3s to work correctly, so I assumed that Traefik was my load balancer instead of Kube-VIP.