Helm Charts and Jellyfin

Introduction

Where we left off, we had documented our Gitea deployment with a Kubernetes manifest. That works, but it’s a manual and (honestly) painful process. From here on, we will start deploying workloads using Helm charts.

Creating Helm charts is a lengthy and complex process, which we won’t go into in this section. Instead, we will show how to deploy an existing chart.

Now let’s start diving into the real purpose of a homelab: a streaming media server!

TL;DR

In this article (and the final one of this first series!) we will:

  • Add the k8s@home chart repository to Rancher
  • Create a DNS entry (and certificate) for our new service
  • (Optionally) install the Intel GPU plugin for hardware-accelerated transcoding
  • (Optionally) mount existing media from a NAS as an NFS volume
  • Install Jellyfin from a Helm chart

Adding the Helm Repository

k8s@home are a group of people who maintain Helm charts for the more… homelabby projects. They are the Kubernetes equivalent of linuxserver.io. We will use their charts to deploy Jellyfin.

  • In Rancher, head to Apps & Marketplace→Chart Repositories. Create a chart repository.

  • Navigate back to charts and you should have a whole buttload (that’s the technical term) of packages to choose from!
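
If you prefer the command line over the Rancher UI, the same repository can be added with the Helm CLI. This is only a rough sketch; the repository URL below is the k8s@home chart repo at the time of writing, so check it against their documentation:

  # Add the k8s@home chart repository and refresh the local index
  helm repo add k8s-at-home https://k8s-at-home.com/charts/
  helm repo update

  # Confirm the jellyfin chart is now visible
  helm search repo k8s-at-home/jellyfin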

DNS

Another web service, another DNS entry! I am using tv.mydomain.com.au.

If you are using Let’s Encrypt with HTTP validation instead of DNS validation, you will want to generate another certificate for this subdomain in Rancher.
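
If you are managing certificates with cert-manager directly rather than through the Rancher UI, a Certificate resource along these lines does the same job. Treat it as a sketch: the issuer name letsencrypt-production is an assumption and should match whichever ClusterIssuer you created earlier, and the secret name should match what the ingress references later on.

  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: tv-mydomain-production
    namespace: homelab
  spec:
    # The secret the signed certificate is stored in (referenced by the ingress TLS block)
    secretName: mydomain-production
    dnsNames:
      - tv.mydomain.com.au
    issuerRef:
      # Assumed name; use whichever ClusterIssuer you set up for Let's Encrypt
      name: letsencrypt-production
      kind: ClusterIssuer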

Installing the Intel GPU plugin (optional)

If you are following along with this guide, you may be using an Intel CPU on your host. If so, you can enable hardware acceleration for video transcoding!

If you are using an AMD CPU, you can perform the same task using an (unrelated) Helm chart here

  • In Rancher’s Apps & Marketplace, choose the Intel GPU Plugin chart. Press Install.

  • Use the homelab namespace and set the name to intel-gpu-plugin. Press Next.

  • On the next page, press Install.

Easy!
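
To sanity-check that the plugin is advertising the GPU to the cluster, you can inspect the node’s resources (the node name is a placeholder for your own):

  # The node should now report gpu.intel.com/i915 under Capacity and Allocatable
  kubectl describe node <your-node-name> | grep gpu.intel.com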

Importing Media as an NFS Volume (Optional)

If you want Jellyfin to be your media server, you probably already have some media stored on a NAS. In my case, it’s a shared folder on my Synology NAS, with NFS permissions granted to my Rancher host’s IP.

  • I want my Jellyfin installation to take advantage of this media. In Rancher, navigate to PersistentVolumes and create a persistent volume.

  • Create an NFS Share volume with the name media. Set the path to your NFS share’s export path and the IP to your NFS server’s IP. Create.

For NFS, the capacity field is only a nominal value; it isn’t enforced, so you can use as much space as the share actually provides.

  • Under PersistentVolumeClaims, create a new claim. Set the name to jellyfin-media and use an existing volume. Choose media from the dropdown and create. (The equivalent manifests are sketched below.)
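
For reference, the equivalent manifests look roughly like this; the server IP, share path, and capacity below are placeholders for your own values:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: media
  spec:
    capacity:
      storage: 1Ti            # nominal only for NFS; it does not cap the share
    accessModes:
      - ReadWriteMany
    nfs:
      server: 192.168.1.10    # your NFS server IP
      path: /volume1/media    # your exported share path
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: jellyfin-media
    namespace: homelab
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: ""      # bind to the existing volume rather than provisioning a new one
    volumeName: media
    resources:
      requests:
        storage: 1Ti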

Installing Jellyfin

The next step is installing Jellyfin itself.

  • Back in Apps & Marketplace, choose the jellyfin chart. Press Install.

  • Set the name to jellyfin. Press Next.

Change the following:

  • TZ to your timezone (Australia/ACT for me)
  • ingress to the following (changing the domain to your own, of course):
    enabled: true
    hosts:
      - host: tv.mydomain.com.au
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: mydomain-production
        hosts:
          - tv.mydomain.com.au
  • config to:
  config:
    enabled: true
    type: pvc
    storageClass: longhorn
    size: 10Gi
  • media to the following (if you set up your NFS share):
  media:
    enabled: true
    mountPath: /media
    existingClaim: jellyfin-media
  • (Optional) If you are using the Intel GPU plugin, add the following to the end:
  resources:
    requests:
      gpu.intel.com/i915: 1
    limits:
      gpu.intel.com/i915: 1


These settings make up the values.yaml of the Helm chart. For k8s@home charts, there are two separate places to check for documentation: the shared “common” values.yaml and the app-specific values.yaml.

  • Install! If all goes well, you should get a success message! Navigate to tv.mydomain.com.au and check that you can now set up your media streaming service.
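
If you would rather skip the Rancher forms entirely, the same deployment can be driven from the Helm CLI with the values above saved to a file. The chart name and namespace below assume the k8s@home repository added earlier and the homelab namespace used throughout this series:

  # values.yaml contains the ingress, config, media and resources overrides shown above
  helm install jellyfin k8s-at-home/jellyfin \
    --namespace homelab \
    --values values.yaml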

If you also installed the Intel GPU plugin, you can use this opportunity to enable VAAPI encoding at the same time (under Profile→Dashboard→Playback).

Conclusion

With Helm installs demonstrated, the first Kubernetes saga is done! You now have the tools you need to recreate a fully functional Kubernetes environment and run all of your favourite web services.

If all goes well, this will not be the end of the guide. There’s still so much to cover in a full deployment! The future roadmap aims to cover the following:

  • A CI/CD Pipeline (so we can actually use that source control for something)
  • Image builds within Kubernetes
  • Designing Helm Charts
  • Further authentication methods, such as OAuth proxying and OIDC federation (Gitea supports both!)
  • High availability with Harvester (more hardware will be required for this one)
  • Horizontal scaling (a huge topic all on its own)

Stay tuned for more guides in the future!