Kubernetes Manifests

Introduction

Where we left off, we had just finished deploying our Gitea platform for holding our configuration data. However, there’s a snag: it took a lot of manual work in Rancher to make that happen. How would we deploy in a “docker-compose” style?

Well, there’s a secret we haven’t addressed: everything in Kubernetes is infrastructure as code. All of it! We will go into how to make use of that below.

TL;DR

For this article, we will:

  • Generate a repository for our Gitea deployment and copy out the Kubernetes manifests
  • Demonstrate a redeployment using just Kubernetes manifests

YAML as far as the eye can see

Early on in this guide, I pointed out that just about every menu has a “View as YAML” option attached to it. Let’s make use of that now, starting with the ingress we just created:

As it turns out, everything we entered in our form has a corresponding YAML entry. In fact, that’s exactly what Rancher does when we use the creation forms: it just generates a YAML file for Kubernetes to consume!

We’ve also used Kubernetes manifests already. cert-manager adds its own resource types (defined via custom resource definitions, or CRDs) that only exist as Kubernetes manifests, and Rancher has no mechanism to present those as a form. When we went through our Let’s Encrypt process, we were deploying Kubernetes manifests directly!

Kubernetes uses YAML to track everything: configuration, status, unique IDs, all of it. We can take advantage of this: by stripping out everything but the configuration data, we can generate the Kubernetes equivalent of a docker-compose file.
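
Once the Kubernetes-managed fields (status, uid, creationTimestamp, and so on) are stripped away, even a complex object reduces to the same four top-level keys. A minimal, hypothetical sketch:

```yaml
# A manifest stripped to its configuration-only fields.
# Every manifest shares the same four top-level keys:
apiVersion: v1          # which API group/version the object belongs to
kind: Service           # what type of object this is
metadata:               # identity: name, namespace, labels
  name: example
  namespace: homelab
spec:                   # the desired configuration (shape varies per kind)
  type: ClusterIP
  ports:
    - port: 3000
```

Anything under spec is configuration we declare; anything under status is state Kubernetes reports back, which is exactly the part we throw away.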

Generating a Kubernetes Manifest with Rancher

Let’s use a practical example and clone our Gitea ingress:

Now that we’ve cloned the ingress, let’s press the “Edit as YAML” button:

We can now strip out the extra comments, set the name in the metadata, and copy out the whole YAML file:

That file is what we call a Kubernetes manifest. We can upload it directly into Rancher to recreate our ingress, with no manual form-filling needed.

Creating a Full Set of Manifests

Of course, this manifest only covers the Ingress, and there’s a heck of a lot more to a deployment than an ingress: there are also the persistent volume claims, the ClusterIP service, the secret, and the Deployment itself.

Rather than uploading multiple files, we can chain manifests together in a single file using the --- document separator.
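
For example, a Service and an Ingress can share one file (a minimal, hypothetical sketch; the names are placeholders):

```yaml
# Two manifests chained in a single file, separated by ---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: homelab
spec:
  type: ClusterIP
  ports:
    - port: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: homelab
spec:
  rules: []    # rules omitted for brevity
```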

Take a deep breath

Alright, here are the manifests that make up our deployment:

The Service:

apiVersion: v1
kind: Service
metadata:
  name: gitea
  namespace: homelab
spec:
  selector:
    workload.user.cattle.io/workloadselector: apps.deployment-homelab-gitea
  type: ClusterIP
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: 3000
  sessionAffinity: None

The Ingress (note that the domain has been replaced with mydomain):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: 'gitea'
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
  namespace: homelab
spec:
  rules:
    - host: git.mydomain.com.au
      http:
        paths:
          - backend:
              service:
                name: gitea
                port:
                  number: 3000
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - git.mydomain.com.au
      secretName: mydomain-production

And the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'gitea'
  namespace: homelab
spec:
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: apps.deployment-homelab-gitea
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: apps.deployment-homelab-gitea
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__NAME
                  name: gitea
                  optional: false
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__USER
                  name: gitea
                  optional: false
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__PASSWD
                  name: gitea
                  optional: false
          image: postgres:13
          imagePullPolicy: Always
          name: postgres
          startupProbe:
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 5432
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: gitea-db-vol
              subPath: db
          resources: {}
        - env:
            - name: GITEA__database__DB_TYPE
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__DB_TYPE
                  name: gitea
                  optional: false
            - name: GITEA__database__HOST
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__HOST
                  name: gitea
                  optional: false
            - name: GITEA__database__NAME
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__NAME
                  name: gitea
                  optional: false
            - name: GITEA__database__PASSWD
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__PASSWD
                  name: gitea
                  optional: false
            - name: GITEA__database__USER
              valueFrom:
                secretKeyRef:
                  key: GITEA__database__USER
                  name: gitea
                  optional: false
            - name: USER_GID
              valueFrom:
                secretKeyRef:
                  key: USER_GID
                  name: gitea
                  optional: false
            - name: USER_UID
              valueFrom:
                secretKeyRef:
                  key: USER_UID
                  name: gitea
                  optional: false
            - name: GITEA__server__ROOT_URL
              valueFrom:
                secretKeyRef:
                  key: GITEA__server__ROOT_URL
                  name: gitea
                  optional: false
            - name: GITEA__service__DISABLE_REGISTRATION
              valueFrom:
                secretKeyRef:
                  key: GITEA__service__DISABLE_REGISTRATION
                  name: gitea
                  optional: false
          image: gitea/gitea:latest
          imagePullPolicy: Always
          name: gitea
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          startupProbe:
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /data
              name: gitea-data-vol
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - name: gitea-db-vol
          persistentVolumeClaim:
            claimName: gitea-db
        - name: gitea-data-vol
          persistentVolumeClaim:
            claimName: gitea-data
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: Recreate

My God. Definitely not a docker compose file.

The Secret manifest is not included (and shouldn’t be included), as sensitive data is stored in plain view in that manifest. If you do want to back it up, keep it somewhere other than your source control.
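
For reference, the missing Secret would look roughly like the sketch below. The stringData keys are taken from the secretKeyRef entries in the Deployment above; every value is a placeholder:

```yaml
# Hypothetical sketch of the gitea Secret - all values are placeholders.
# Never commit real values to source control.
apiVersion: v1
kind: Secret
metadata:
  name: gitea
  namespace: homelab
type: Opaque
stringData:                            # stringData accepts plain text;
  GITEA__database__DB_TYPE: postgres   # Kubernetes base64-encodes it on save
  GITEA__database__HOST: localhost:5432      # placeholder
  GITEA__database__NAME: gitea               # placeholder
  GITEA__database__USER: gitea               # placeholder
  GITEA__database__PASSWD: changeme          # placeholder
  GITEA__server__ROOT_URL: https://git.mydomain.com.au
  GITEA__service__DISABLE_REGISTRATION: 'true'
  USER_UID: '1000'
  USER_GID: '1000'
```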

The persistent volume claims are also not recreated here: we want to bind to the PVCs we have already created, so our data is preserved even if we delete the deployment.
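
If you were starting from scratch, the claims themselves are also just manifests. A hypothetical sketch (the storage class and size are assumptions, reusing the Longhorn install from earlier in this guide):

```yaml
# Hypothetical PVC sketch - storageClassName and size are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data
  namespace: homelab
spec:
  storageClassName: longhorn   # assumes the Longhorn class installed earlier
  accessModes:
    - ReadWriteOnce            # mountable by a single node at a time
  resources:
    requests:
      storage: 10Gi            # placeholder size
```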

Using Kubernetes Manifests

Using a Kubernetes manifest is as simple as uploading the YAML directly in Rancher. This can be done from almost any menu, since the manifest itself declares what it is and where it goes (via its kind, name, and namespace). The same file can also be applied from the command line with kubectl apply -f.

Here I will delete our deployment and recreate it using the same manifests from above:

I am Drowning in YAML

If you think this is the opposite of simple, you are right. This deployment style is far more verbose than docker-compose. Docker compose is actually an incredibly elegant deployment language, aside from the glaring fact that it doesn’t scale.

Luckily, there is a solution: Helm charts, which bundle manifests into reproducible, configurable deployments. You’ve actually been using Helm charts for some time: Rancher, cert-manager, nginx-ingress-controller, Longhorn, and rancher-backup have all been deployed with Helm charts. The Apps & Marketplace sidebar menu is all Helm charts!

Helm charts are an absolute nightmare to design (that’s out of scope for this guide). However, when designed properly, they are super simple to deploy. We will demonstrate this next in Helm Charts and Jellyfin.