Kubernetes - Installing Cilium CNI
This post is part of our ongoing series on Kubernetes. This post focuses on installing Cilium as the CNI in a K3s cluster. While K3s includes its own default networking stack, Cilium provides an alternative that leverages eBPF for networking, security, and observability. This setup can be useful for scenarios requiring more granular network policies, improved performance, or deeper insights into cluster traffic.
This guide specifically targets installing Cilium in kube-proxy replacement mode, which eliminates the need for the standard kube-proxy component and allows Cilium to handle all service routing more efficiently using eBPF.
1. Disable K3s Components
The first step is to configure `/etc/rancher/k3s/config.yaml` to disable the default networking components that would conflict with Cilium:
```yaml
write-kubeconfig-mode: "0644"
disable-kube-proxy: true
disable-network-policy: true
flannel-backend: none
disable:
  - servicelb
  - traefik
```
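If you provision nodes with a script, the configuration above can be written with a heredoc. A minimal sketch, writing to a temporary path so it is safe to run anywhere; on a real K3s server the target would be `/etc/rancher/k3s/config.yaml`:

```shell
# Sketch: write the K3s config shown above.
# CONF defaults to a temp path for illustration; point it at
# /etc/rancher/k3s/config.yaml on an actual server.
CONF="${CONF:-/tmp/k3s-config.yaml}"
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
write-kubeconfig-mode: "0644"
disable-kube-proxy: true
disable-network-policy: true
flannel-backend: none
disable:
  - servicelb
  - traefik
EOF
echo "wrote $CONF"
```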
This configuration:
- Disables `kube-proxy`, as Cilium will replace its functionality
- Disables the default `network-policy` implementation
- Sets `flannel-backend` to `none` to prevent the default CNI from starting
- Additionally disables `servicelb` (the default load balancer) and `traefik` (the default ingress controller)
Next, remove the Traefik manifest from the auto-deploy directory:
```shell
rm /var/lib/rancher/k3s/server/manifests/traefik.yaml
```
After applying these changes, restart the K3s service:
```shell
systemctl restart k3s
```
For a complete clean slate, it’s recommended to reboot all cluster nodes. This ensures any lingering network configurations are cleared.
2. Install Cilium
Now we’ll proceed with installing the core Cilium components using the cilium CLI. This approach provides a streamlined installation experience compared to manually applying manifests:
```shell
cilium install \
  --version 1.17.1 \
  --set ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16" \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=192.168.1.19 \
  --set k8sServicePort=6443
```
This command configures Cilium with the following critical parameters:
- Retains the IPv4 CIDR range of K3s (10.42.0.0/16) for pod addressing
- Enables full kube-proxy replacement mode
- Specifies the Kubernetes API server endpoint (required for kube-proxy replacement mode since internal service discovery depends on it)
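The pod CIDR must not overlap the node network, or routing will break. A quick POSIX-shell sanity check that the node IP used above (192.168.1.19) falls outside the pod CIDR (10.42.0.0/16); the helper functions here are illustrative, not part of the cilium CLI:

```shell
# Illustrative helpers (not from any CLI): integer IP math for a CIDR check.
ip_to_int() {
  OLDIFS=$IFS; IFS=.
  set -- $1
  IFS=$OLDIFS
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
in_cidr() {  # in_cidr <ip> <cidr>: exit 0 if the IP is inside the CIDR
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
# The node IP must be outside the pod CIDR:
if in_cidr 192.168.1.19 10.42.0.0/16; then
  echo "overlap: choose a different pod CIDR"
else
  echo "ok: node IP is outside the pod CIDR"
fi
```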
3. Configure Cilium with Flux
To ensure our Cilium deployment remains consistent and follows GitOps principles, we’ll configure it through Flux CD. This allows us to declaratively manage the Cilium configuration and leverage automated reconciliation.
Create `cluster/default/cilium.yaml`:
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: cilium
  namespace: kube-system
spec:
  interval: 10m
  url: https://helm.cilium.io
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cilium
  namespace: kube-system
spec:
  releaseName: cilium
  interval: 10m
  chart:
    spec:
      chart: cilium
      version: "1.17.1"
      interval: 10m
      sourceRef:
        kind: HelmRepository
        name: cilium
        namespace: kube-system
  values:
    operator:
      rollOutPods: true
    nodeIPAM:
      enabled: true
    kubeProxyReplacement: true
    ipam:
      operator:
        clusterPoolIPv4PodCIDRList: ["10.42.0.0/16"]
    defaultLBServiceIPAM: nodeipam
    nodePort:
      enabled: true
    k8sServiceHost: "192.168.1.19"
    k8sServicePort: "6443"
    envoy:
      enabled: false
    hubble:
      relay:
        enabled: true
      ui:
        enabled: true
```
This configuration includes several important customizations:
- Implements `nodeIPAM` mode to associate node IPs with `LoadBalancer` services, which is crucial for clusters without an external load balancer
- Enables `nodePort` support to expose services on node ports (consumed by the Ingress controller for traffic routed via the cloud server set up in Kubernetes - Routing external traffic to local Kubernetes cluster)
- Sets `defaultLBServiceIPAM` to `nodeipam` so `LoadBalancer` services use node IPAM by default
- Disables Envoy in `DaemonSet` mode, since the embedded proxy mode is sufficient for smaller clusters and reduces resource consumption
- Enables Hubble (with both relay and UI components) for enhanced observability and traffic visualization
Commit and push these changes to the Git repository; Flux will automatically apply the configuration to the cluster.
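To make the node IPAM behavior concrete, here is a hypothetical `LoadBalancer` Service (the name, selector, and ports are invented for illustration). With `defaultLBServiceIPAM: nodeipam` set above, such a service receives a node IP automatically; the `loadBalancerClass` shown is the one Cilium's node IPAM feature recognizes and is optional when node IPAM is the default:

```yaml
# Hypothetical example Service (not part of this guide's cluster config).
# With defaultLBServiceIPAM: nodeipam, the loadBalancerClass below is optional;
# it is included to make the node IPAM opt-in explicit.
apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: default
spec:
  type: LoadBalancer
  loadBalancerClass: io.cilium/node
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```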
4. Verify Setup
Once the deployment has been reconciled, validate the Cilium installation:
```shell
cilium status
```
```
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 2
                       cilium-operator    Running: 2
                       clustermesh-apiserver
                       hubble-relay       Running: 1
                       hubble-ui          Running: 1
Cluster Pods:          24/24 managed by Cilium
Helm chart version:    1.17.1
Image versions         cilium             quay.io/cilium/cilium:v1.17.1@sha256:8969bfd9c87cbea91e40665f8ebe327268c99d844ca26d7d12165de07f702866: 2
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.1@sha256:628becaeb3e4742a1c36c4897721092375891b58bae2bfcae48bbf4420aaee97: 2
                       hubble-relay       quay.io/cilium/hubble-relay:v1.17.1@sha256:397e8fbb188157f744390a7b272a1dec31234e605bcbe22d8919a166d202a3dc: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6: 1
```
To verify that kube-proxy replacement is functioning correctly, inspect the detailed status:
```shell
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose
```
```
...
KubeProxyReplacement Details:
  Status:                True
  Socket LB:             Enabled
  Socket LB Tracing:     Enabled
  Socket LB Coverage:    Full
  Devices:               enp1s0 192.168.1.19 2402:a00:402:60a4:cf53:64d7:94f3:90bd 2402:a00:402:60a4:5054:ff:fec2:f7c8 fe80::5054:ff:fec2:f7c8 (Direct Routing), wg0 10.0.0.2
  Mode:                  SNAT
  Backend Selection:     Random
  Session Affinity:      Enabled
  Graceful Termination:  Enabled
  NAT46/64 Support:      Disabled
  XDP Acceleration:      Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
  Annotations:
  - service.cilium.io/node
  - service.cilium.io/src-ranges-policy
  - service.cilium.io/type
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
...
Encryption:       Disabled
Cluster health:   2/2 reachable   (2025-03-11T08:57:33Z)
  Name                IP             Node   Endpoints
  192.168.1.19 (localhost):
    Host connectivity to 192.168.1.19:
      ICMP to stack:   OK, RTT=255.745µs
      HTTP to agent:   OK, RTT=268.478µs
    Endpoint connectivity to 10.42.1.34:
      ICMP to stack:   OK, RTT=211.23µs
      HTTP to agent:   OK, RTT=432.791µs
  192.168.1.24:
    Host connectivity to 192.168.1.24:
      ICMP to stack:   OK, RTT=196.433µs
      HTTP to agent:   OK, RTT=574.349µs
    Endpoint connectivity to 10.42.0.68:
      ICMP to stack:   OK, RTT=510.047µs
      HTTP to agent:   OK, RTT=1.232687ms
Modules Health:
...
```
This confirms that all necessary service types are correctly supported by Cilium’s kube-proxy replacement functionality.
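If you want to script this verification (for example in a health-check job), grepping the verbose output for the replacement status is enough. A sketch, run here against a fragment of the sample output above rather than a live cluster:

```shell
# Sketch: assert kube-proxy replacement is active by grepping the status output.
# On a live cluster, STATUS would come from:
#   kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose
# Here a fragment of the sample output above stands in for it.
STATUS='KubeProxyReplacement Details:
  Status:                True
  Socket LB:             Enabled'
if printf '%s\n' "$STATUS" | grep -q 'Status:[[:space:]]*True'; then
  echo "kube-proxy replacement: active"
else
  echo "kube-proxy replacement: NOT active"
fi
```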
5. Accessing Hubble UI
Now that Cilium is installed with Hubble enabled, let's access the Hubble UI to visualize and analyze cluster network traffic. By default, the Hubble UI is only reachable from within the cluster, but the cilium CLI provides a shortcut that port-forwards it locally:
```shell
cilium hubble ui
```
```
Opening "http://localhost:12000" in your browser...
```
The interface provides detailed visualizations of pod-to-pod communication, service dependencies, and network policy enforcement.
References
- Cilium installation in K3s - https://docs.cilium.io/en/stable/installation/k3s/
- Node IPAM LB - https://docs.cilium.io/en/stable/network/node-ipam/