Kubernetes - Routing external traffic to local Kubernetes cluster

This post is part of our ongoing DevOps series focused on Kubernetes and GitOps practices. So far, we’ve explored various aspects of DevOps with FluxCD, and our upcoming posts will cover application deployment strategies in Kubernetes.

A significant challenge when working with a local Kubernetes environment (like our K3s setup) is the lack of public network access, which is automatically handled in managed Kubernetes offerings from cloud providers. For locally hosted clusters, establishing secure public connectivity is essential for many real-world scenarios.

This post addresses this specific challenge: how to expose services from your local Kubernetes cluster to the internet. The solution we’ll implement involves a public-facing cloud server with HAProxy for Layer 4 load balancing, connected to our local cluster nodes through an encrypted Wireguard tunnel. This networking foundation will enable several advanced configurations in future posts, including:

  • Exposing applications to the public internet through Ingress
  • Setting up automatic DNS management with external-dns
  • Implementing automated TLS certificate provisioning with cert-manager
  • Configuring authentication and access control for your exposed services

Note: This setup is primarily intended for homelab or self-hosted Kubernetes environments. If you’re using a managed Kubernetes service like GKE, EKS, or AKS, this specific configuration is unnecessary as these platforms provide their own load balancing and ingress solutions.

1. Set Up Wireguard on the Server

First, install the Wireguard package:

apt install -y wireguard
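If the server runs a host firewall such as ufw (an assumption here; adapt the command to whatever firewall you actually use), the Wireguard listen port must be reachable from the nodes:

ufw allow 51820/udp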

Generate a cryptographic key pair:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
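The private key should only be readable by root, so it’s worth tightening the file permissions. The contents of /etc/wireguard/publickey are what you’ll later paste into the PublicKey field of each node’s [Peer] section:

chmod 600 /etc/wireguard/privatekey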

Configure the wg0 interface by creating /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = ***
Address = 10.0.0.1/24
ListenPort = 51820
SaveConfig = true

# Node 1
[Peer]
PublicKey = ***
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25

# Node 2
[Peer]
PublicKey = ***
AllowedIPs = 10.0.0.3/32
PersistentKeepalive = 25
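One thing to keep in mind: with SaveConfig = true, wg-quick writes the runtime state back to wg0.conf when the interface is stopped, so manual edits made while the tunnel is up can be overwritten. If you later need to add or change peers on the running interface, you can sync it from the file without bouncing the tunnel:

wg syncconf wg0 <(wg-quick strip wg0)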

Activate and enable the Wireguard service:

systemctl enable --now wg-quick@wg0.service

Verify that the interface is operational:

ip link list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:d7:81:4e brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/none 

As we can see, the wg0 interface is successfully established.
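For more detail than ip link provides, wg show lists the listening port and configured peers. At this stage the peers won’t show any handshakes yet, since the nodes are configured in the next step:

wg show wg0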

2. Set Up Wireguard on the Kubernetes Nodes

For our Kubernetes nodes, we’re using MicroOS with K3s. Install the Wireguard tools:

transactional-update pkg in wireguard-tools
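Since MicroOS applies package changes transactionally, the new snapshot only becomes active after a reboot, so reboot each node before continuing:

reboot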

Generate key pairs and configure the wg0 interface by creating /etc/wireguard/wg0.conf on each node:

On Node 1:

[Interface]
PrivateKey = ***
Address = 10.0.0.2/24

[Peer]
PublicKey = ***
Endpoint = ***:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

On Node 2:

[Interface]
PrivateKey = ***
Address = 10.0.0.3/24

[Peer]
PublicKey = ***
Endpoint = ***:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
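With AllowedIPs = 0.0.0.0/0, wg-quick routes all of the node’s outbound traffic through the tunnel. If you only want traffic for the Wireguard network to use the tunnel (a variation on this setup, not what we use here), restrict it to the tunnel subnet instead:

AllowedIPs = 10.0.0.0/24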

Enable and start the Wireguard interface:

systemctl enable --now wg-quick@wg0.service

Verify connectivity from each node to the server:

ping 10.0.0.1

PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=138 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=67.4 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=68.0 ms

Verify connectivity from the server to each node:

ping 10.0.0.2

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=71.6 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=68.7 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=68.1 ms

ping 10.0.0.3

PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=81.8 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=78.9 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=77.9 ms
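You can also confirm on the server that both peers have completed a recent handshake:

wg show wg0 latest-handshakes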

3. Configure HAProxy as a Layer 4 Load Balancer

Install HAProxy on the server:

apt install -y haproxy
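As with the Wireguard port earlier, HTTP and HTTPS must be allowed through the server’s firewall if one is running (again assuming ufw; adapt as needed):

ufw allow 80/tcp
ufw allow 443/tcp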

Configure HAProxy by editing /etc/haproxy/haproxy.cfg:

frontend http_frontend
    bind *:80
    mode tcp
    option tcplog
    default_backend http_backend

backend http_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server node1 10.0.0.2:80 check
    server node2 10.0.0.3:80 check

frontend https_frontend
    bind *:443
    mode tcp
    option tcplog
    default_backend https_backend

backend https_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server node1 10.0.0.2:443 check
    server node2 10.0.0.3:443 check
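Before restarting, it’s worth validating the configuration file; haproxy exits with an error if the syntax is invalid:

haproxy -c -f /etc/haproxy/haproxy.cfg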

Restart HAProxy to apply the configuration:

systemctl restart haproxy

Verify that traffic is properly forwarded by sending a request to the server’s public IP address from our local machine:

curl http://$PUBLIC_IP

404 page not found

The “404 page not found” response is expected and actually confirms success: the request is reaching the Kubernetes ingress controller through the tunnel, and the controller simply has no routes configured yet.
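Once an Ingress resource exists, you can test a specific route the same way by sending the expected hostname in the Host header (app.example.com below is just a placeholder for whatever host your Ingress defines):

curl -H "Host: app.example.com" http://$PUBLIC_IP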

Conclusion

We have successfully set up a secure tunnel between our public-facing server and our local Kubernetes cluster nodes using Wireguard. The HAProxy configuration routes all HTTP and HTTPS traffic to our cluster nodes in a load-balanced manner. This setup allows us to expose services from our local Kubernetes cluster to the internet with minimal configuration.

This foundation is critical for the GitOps-driven application deployments we’ll be implementing in upcoming posts in this series. With this networking infrastructure in place, we can now focus on automating application deployments with FluxCD while ensuring they’re securely accessible from the internet. Stay tuned for our next posts on push-based deployments with FluxCD and on configuring external-dns and cert-manager to work with this setup.