Kubernetes - Setup Cert Manager for Automated TLS Management

This post is part of our ongoing series on Kubernetes infrastructure management. In this installment, we’re focusing on setting up Cert Manager, a critical component for automating TLS certificate management. Although we’re working on a local Kubernetes cluster, we’ve already covered the prerequisites: exposing the cluster through a public IP address in Kubernetes - Routing external traffic to local Kubernetes cluster, and automating DNS records in Kubernetes - Setup External DNS.

1. Setup Cert Manager

Let’s configure Cert Manager by creating cluster/default/cert-manager.yaml. This declarative configuration establishes:

  • A dedicated Cert Manager namespace
  • The Jetstack Helm Repository reference
  • The Cert Manager Helm Release with appropriate configuration
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: jetstack
  namespace: cert-manager
spec:
  interval: 10m
  url: https://charts.jetstack.io
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  releaseName: cert-manager
  interval: 10m
  chart:
    spec:
      chart: cert-manager
      version: "1.16"
      interval: 10m
      sourceRef:
        kind: HelmRepository
        name: jetstack
        namespace: cert-manager
  values:
    crds:
      enabled: true # install and manage the cert-manager CRDs with the chart
    prometheus:
      enabled: true
      podmonitor:
        enabled: true # let the Prometheus Operator scrape metrics via a PodMonitor
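
Once Flux reconciles this file, we can sanity-check the rollout. A minimal check, assuming the flux CLI is installed:

# Confirm the Helm release installed successfully
flux get helmreleases -n cert-manager

# The controller, webhook, and cainjector pods should all be Running
kubectl -n cert-manager get pods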

Next, we’ll define a ClusterIssuer that integrates with Let’s Encrypt to issue our certificates. Since a ClusterIssuer is cluster-scoped, it doesn’t belong to any namespace; cert-manager stores the referenced account key secret in its own namespace. Create it in cluster/default/cluster-issuer.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cluster-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: ****
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-default
    solvers:
    - http01:
        ingress:
          ingressClassName: traefik

After applying these configurations, let’s verify that the ACME account key secret has been generated and check for any potential errors:

kubectl -n cert-manager get secrets

NAME                                 TYPE                 DATA   AGE
cert-manager-webhook-ca              Opaque               3      5m31s
letsencrypt-default                  Opaque               1      3m39s
sh.helm.release.v1.cert-manager.v1   helm.sh/release.v1   1      6m1s

We can confirm that the letsencrypt-default secret has been successfully generated. Now, let’s examine the controller logs for further diagnostics (your pod name will differ):

kubectl -n cert-manager logs pods/cert-manager-84d588f4c6-gj6d4

I0228 12:33:21.941719       1 setup.go:113] "generating acme account private key" logger="cert-manager.controller" resource_name="cluster-issuer" resource_namespace="" resource_kind="ClusterIssuer" resource_version="v1" related_resource_name="letsencrypt-default" related_resource_namespace="cert-manager" related_resource_kind="Secret"
I0228 12:33:22.042104       1 setup.go:225] "ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration" logger="cert-manager.controller" resource_name="cluster-issuer" resource_namespace="" resource_kind="ClusterIssuer" resource_version="v1" related_resource_name="letsencrypt-default" related_resource_namespace="cert-manager" related_resource_kind="Secret"
I0228 12:33:23.515859       1 setup.go:315] "verified existing registration with ACME server" logger="cert-manager.controller" resource_name="cluster-issuer" resource_namespace="" resource_kind="ClusterIssuer" resource_version="v1" related_resource_name="letsencrypt-default" related_resource_namespace="cert-manager" related_resource_kind="Secret"
I0228 12:33:23.516047       1 conditions.go:96] Setting lastTransitionTime for Issuer "cluster-issuer" condition "Ready" to 2025-02-28 12:33:23.515998646 +0000 UTC m=+120.676709968
I0228 12:33:23.526676       1 setup.go:208] "skipping re-verifying ACME account as cached registration details look sufficient" logger="cert-manager.controller" resource_name="cluster-issuer" resource_namespace="" resource_kind="ClusterIssuer" resource_version="v1" related_resource_name="letsencrypt-default" related_resource_namespace="cert-manager" related_resource_kind="Secret"
I0228 12:33:27.046621       1 setup.go:208] "skipping re-verifying ACME account as cached registration details look sufficient" logger="cert-manager.controller" resource_name="cluster-issuer" resource_namespace="" resource_kind="ClusterIssuer" resource_version="v1" related_resource_name="letsencrypt-default" related_resource_namespace="cert-manager" related_resource_kind="Secret"

The logs indicate that our configuration is functioning correctly, with the ACME account successfully registered with Let’s Encrypt.
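
As a complementary check, the ClusterIssuer itself should report Ready:

# The READY column should show True once ACME registration succeeds
kubectl get clusterissuer cluster-issuer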

2. Validate Setup

To validate our Cert Manager implementation, let’s enhance our existing sample application from the Kubernetes - Setup External DNS guide by adding TLS configuration. We’ll modify apps/sample-app/ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "cluster-issuer"
    external-dns.alpha.kubernetes.io/target: "******"
spec:
  tls:
  - hosts:
    - sample-app.***
    secretName: sample-app-tls
  rules:
    - host: sample-app.***
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app-service
                port:
                  number: 80

After committing these changes, let’s verify that HTTPS access to our application is working correctly:

curl https://sample-app.***

Greetings From K8S App : Version 2
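
Under the hood, cert-manager’s ingress-shim created a Certificate resource named after the TLS secret (sample-app-tls). We can confirm it reports Ready:

# READY should show True once the certificate has been issued
kubectl -n default get certificate sample-app-tls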

Now, let’s perform a deeper inspection of the certificate to ensure it’s properly issued and configured:

openssl s_client -showcerts -connect sample-app.***:443

Connecting to 15.235.210.126
CONNECTED(00000003)
depth=2 C=US, O=Internet Security Research Group, CN=ISRG Root X1
verify return:1
depth=1 C=US, O=Let's Encrypt, CN=R11
verify return:1
depth=0 CN=sample-app.***.email
verify return:1
---
Certificate chain
 0 s:CN=sample-app.***.email
   i:C=US, O=Let's Encrypt, CN=R11
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Feb 28 11:44:22 2025 GMT; NotAfter: May 29 11:44:21 2025 GMT

The output confirms that HTTPS requests are served with a valid certificate issued by Let’s Encrypt. The certificate is valid for 90 days (Let’s Encrypt’s standard lifetime), and cert-manager will renew it automatically before expiry, by default once two-thirds of that lifetime has elapsed.
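
Renewal requires no intervention, but if you ever want to force one for testing, cert-manager’s companion CLI can trigger it, assuming cmctl is installed:

# Mark the certificate for immediate renewal
cmctl renew sample-app-tls -n default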

3. Monitoring in Grafana

In our previous post Kubernetes GitOps with FluxCD - Part 4 - Helm Chart Automation - Kube Prometheus Stack, we deployed the Kube Prometheus Stack. Since we enabled the PodMonitor in our Cert Manager Helm release, Prometheus already collects Cert Manager’s metrics automatically. Let’s now establish comprehensive certificate monitoring by integrating a specialized Grafana dashboard from the Cert Manager Mixin project.

We’ll begin by downloading the pre-configured dashboard JSON:

curl --output cert-manager.json https://gitlab.com/uneeq-oss/cert-manager-mixin/-/raw/master/dashboards/cert-manager.json?ref_type=heads

Next, we’ll create a ConfigMap with the grafana_dashboard: 1 label, which the Grafana sidecar watches in order to load dashboards automatically. Place the downloaded cert-manager.json in a dashboards/ directory and create dashboards/kustomization.yaml alongside it:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
configMapGenerator:
  - name: monitoring-grafana-dashboards
    files:
      - cert-manager.json
    options:
      labels:
        grafana_dashboard: "1"
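
Before committing, we can preview what will be applied, assuming the kustomize CLI is available locally. Note that the generated ConfigMap name carries a content hash suffix:

# Render the generated ConfigMap to stdout
kustomize build dashboards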

Finally, we’ll define a Flux Kustomization to manage this dashboard resource by creating cluster/default/dashboards.yaml:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dashboards
  namespace: monitoring
spec:
  interval: 5m
  path: ./dashboards
  prune: true
  retryInterval: 2m
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  targetNamespace: monitoring
  timeout: 3m
  wait: true
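
Once these changes are committed, we can trigger reconciliation immediately rather than waiting for the next sync interval, assuming the flux CLI is installed:

# Force an immediate reconciliation of the new Kustomization
flux reconcile kustomization dashboards -n monitoring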

Let’s verify that the ConfigMap has been generated successfully:

kubectl -n monitoring get configmaps 

NAME                                                      DATA   AGE
monitoring-grafana-dashboards-9dbtc2728b                  1      37s

To access and validate the Grafana dashboard, we’ll establish a port-forward to the Grafana service:

kubectl -n monitoring port-forward service/kube-prometheus-stack-grafana 3000:80

Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

With Grafana now reachable at http://localhost:3000, the imported dashboard allows us to comprehensively monitor certificate lifecycle events, including issuance, renewal schedules, and potential validation errors, enabling proactive certificate management.
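
If you prefer the raw numbers, the dashboard is built on Cert Manager’s Prometheus metrics, such as certmanager_certificate_expiration_timestamp_seconds. Assuming the default kube-prometheus-stack service names, you can query Prometheus directly:

kubectl -n monitoring port-forward service/kube-prometheus-stack-prometheus 9090:9090

# Instant query: expiry timestamp (seconds since epoch) per certificate
curl -s 'http://localhost:9090/api/v1/query?query=certmanager_certificate_expiration_timestamp_seconds'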

What next?

Future posts will explore advanced Kubernetes and GitOps patterns with FluxCD, including:

  • Push based reconciliation triggers with Webhook receivers for FluxCD

Stay tuned!
