Using Tailscale Certificates in Kubernetes

As I mentioned earlier, I’m a huge fan of Tailscale. Last year, they added a beta ability to issue X.509v3 certificates (via Let’s Encrypt) to systems on your Tailnet. As they explain in a blog post, they do this by giving you a (somewhat random-looking) FQDN, and then doing the DNS-01 challenge dance for you.

To get the certificate:

$ sudo tailscale cert $FQDN

Oddly, if you don’t provide a domain, the command tells you which one to use, but it’s unclear whether you can request a certificate for a domain other than that of the host you’re running on. Perhaps you could run the command from another machine to get the right certificate.

Anyway, that will dump .crt and .key files into the current working directory. You can then load them into the appropriate k8s secret:

$ kubectl create secret tls tailscale-tls --namespace test --key $FQDN.key --cert $FQDN.crt
secret/tailscale-tls created
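If you want a quick sanity check that the secret holds the certificate you think it does, you can decode it back out (assuming `base64` and `openssl` are on hand):

```shell
# Pull tls.crt back out of the secret and show the cert's subject and
# expiry date. The backslash escapes the literal dot in the jsonpath key.
kubectl get secret tailscale-tls --namespace test \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -enddate
```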

The secret needs to be in the same namespace as the Ingress that references it (here, test). I’m also trying to figure out a way to make sure the private key never hits the disk, likely involving some shell pipe magic.
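One untested sketch of that pipe magic: bash process substitution, assuming `tailscale cert` accepts `-` for its `--cert-file`/`--key-file` flags (meaning stdout) and that a second invocation reuses the already-issued certificate rather than requesting a new one:

```shell
# Untested sketch: hand kubectl the cert and key over pipes so neither
# file lands in the working directory. Each <(...) becomes a /dev/fd/N
# path that kubectl reads like a normal file.
FQDN=flora.ermine-woodpecker.ts.net   # the example FQDN from this post
kubectl create secret tls tailscale-tls --namespace test \
  --cert <(sudo tailscale cert --cert-file - --key-file /dev/null "$FQDN") \
  --key  <(sudo tailscale cert --cert-file /dev/null --key-file - "$FQDN")
```

The key still transits kubectl and ends up base64-encoded in etcd, of course; this only keeps it off the local filesystem.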

Note: The normal behavior is to run an Ingress Controller on every node (in a DaemonSet); for my cluster, that means running the ingress-nginx controller. In this setup you’re only ever using one of the ingress controllers, because you’re sending traffic to one specific node rather than balancing across them. If you were balancing across all the nodes (in my case, three), you wouldn’t be able to use a Tailscale-managed certificate and would need a normal one instead. You might be able to get something managed by cert-manager and Let’s Encrypt, although that gets complicated for internal-only routes. I’ve opened a feature request with Tailscale, and we’ll see where that goes.

Back to the example, though. For this, we’re going to use Google’s hello-app sample container. This presumes you already have an ingress controller deployed, and that you’re deploying into a namespace named test. First, we have the Deployment itself, which runs three copies of the demo container, each listening on port 8080:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: test
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: "gcr.io/google-samples/hello-app:2.0"
          ports:
            - containerPort: 8080
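Assuming you’ve saved that manifest as `deployment.yaml` (the filename is arbitrary), apply it and wait for the rollout to finish:

```shell
kubectl apply -f deployment.yaml
kubectl rollout status deployment/hello-app --namespace test
kubectl get pods --namespace test -l app=hello   # expect 3 Running pods
```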

Next, we’ll expose it as a Service inside the cluster on a dedicated cluster IP:

apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: test
  labels:
    app: hello
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
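Before wiring up TLS, it’s worth confirming the Service answers inside the cluster. One way is a throwaway curl Pod (`curlimages/curl` is just one convenient image for this):

```shell
# Launch a one-shot Pod, curl the Service by its cluster DNS name,
# and clean the Pod up when it exits.
kubectl run curl-test --namespace test --rm -i --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s http://hello-service.test.svc.cluster.local/
```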

Now we have a service we can access internally, but we’re still unencrypted. The next step is to expose it via an Ingress Controller, and hook up the TLS certificate:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app-ingress
  namespace: test
spec:
  ingressClassName: public
  tls:
    - hosts:
      - "$FQDN"
      secretName: tailscale-tls
  rules:
  - host: "$FQDN"
    http:
      paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: hello-service
              port:
                number: 80

With everything linked up, you can now access it in your browser at https://$FQDN/, and you should see something like this:

Hello, world!
Version: 2.0.0
Hostname: hello-app-5c554f556c-hd7mn

The Hostname will be different for you, but if you refresh, you’ll see the final segment rotate among the three Pods.
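You can check the same thing, plus the certificate actually being served, from any machine on the Tailnet:

```shell
# Fetch the page (should print the Hello, world! response):
curl -s https://$FQDN/

# Inspect the certificate the ingress is serving; -servername sets SNI
# so the right certificate is selected.
openssl s_client -connect "$FQDN:443" -servername "$FQDN" </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -enddate
```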