Hello World with Caddy


One of the easiest containers to deploy is a simple, static web server, such as nginx or httpd or, in this case, caddy. So let’s do that now!


In this section, we will:

  • Create a deployment using the caddy docker image
  • Expose that deployment on a NodePort
  • Show how to navigate a deployed container, including opening a shell inside it

Creating a Namespace

Right off the bat, you can see there are already quite a few resources in use. Most of those are from the helm charts we used to deploy cert-manager and rancher: They both run inside the context of k3s.

Speaking of namespaces, let’s make a new one to work in. Navigate to Cluster→Projects/Namespaces and press Create Namespace on Project: Default.

Create a new namespace called homelab (or whatever you want your namespace to be), then press Create.

You can optionally filter by the homelab namespace in the top left to cut down on the noise.
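As we’ll see later, everything rancher does is backed by YAML. If you prefer the command line, the equivalent manifest looks like this (a minimal sketch — substitute whatever namespace name you chose; apply it with `kubectl apply -f namespace.yaml`):

```yaml
# Equivalent of pressing Create Namespace in the rancher UI
apiVersion: v1
kind: Namespace
metadata:
  name: homelab
```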

Creating your first Deployment

Let’s head over to workload on the left navigation bar and press Create.

It’s worth stopping here and reading about the different types of workloads available. Most of these options fit special use cases (for example, k3s runs its built-in service load balancer as a DaemonSet). Most of the time we will instead be creating a Deployment. Choose that option now.

Once you’re in the creation dialogue, feel free to browse through the options. Most of them we can leave at their defaults or omit entirely. A couple of distinct differences from docker are immediately apparent:

  • We have the option to define multiple containers in a single deployment. This is useful when you have tightly paired microservices (or, in smaller deployments, a container and its db, for example). You can deploy them at the same time, and they can talk to each other over the localhost network interface. A group of containers deployed together like this is called a pod (strictly speaking, every container in kubernetes runs inside a pod, even when it’s alone).
  • We can set all sorts of resource limits and autoscaling options. We can also set a replica count to deploy multiple instances of the same container, which is useful when scaling horizontally.
  • We can also define a Health Check to determine whether a container has launched successfully, and even automatically restart a container that fails a periodic check.
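For reference, the replica count and health check both map directly to fields in the deployment’s YAML. The fragment below is an illustrative sketch (the field names are standard kubernetes ones; the values are made up for this example):

```yaml
# Illustrative fragment of a Deployment spec: replicas and a health check
spec:
  replicas: 3               # run three identical pods (horizontal scaling)
  template:
    spec:
      containers:
        - name: helloworld
          image: caddy:latest
          livenessProbe:    # periodic check; a failing container is restarted
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
```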

Deploying Caddy

It’s worth having a browse, but we’ll ignore the vast majority of these options for now.

  • Set the name of the deployment to helloworld
  • Set the container image to caddy:latest. This will pull the image from Docker Hub
  • Expose port 80 as a NodePort with the value 30000

A NodePort exposes a service on every node in the cluster. By default, a NodePort must use a port number between 30000 and 32767.
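Under the hood, these steps create a Deployment plus a NodePort Service. The YAML below is a rough sketch of what rancher generates (not a verbatim dump of its output — names and labels are illustrative):

```yaml
# A caddy Deployment and a NodePort Service exposing it on port 30000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: homelab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: caddy
          image: caddy:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: homelab
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
    - port: 80        # port the service listens on inside the cluster
      targetPort: 80  # port caddy listens on in the container
      nodePort: 30000 # port exposed on every node
```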

You can see these steps below:

If all goes well, you should be able to navigate to http://&lt;your-node-ip&gt;:30000 and see a functioning webserver!

Monitoring your Deployment

Once deployed, we can click on our helloworld deployment and get a dashboard just for this deployment. We also have options to view the logs or execute a shell. Let’s choose Execute Shell.

You will see “Edit YAML” and “Download YAML” sprinkled everywhere in rancher. This is because everything you do in rancher is defined by a YAML config somewhere (which is also how kubernetes works). You can download, upload, and inspect the raw YAML at any stage. Some of the less managed categories on the sidebar only have the option to edit raw YAML.

We should get a command shell running inside the active container. Cool! We can test this by running the following inside the shell:

echo "<h1>hello world!</h1>" > /usr/share/caddy/index.html

Now if we refresh our web server, we get our modified web page!


Persistent Storage

We still have a problem: if we redeploy, any data written inside the container is destroyed. At the moment that includes our modified website! We need a way to store persistent data. We will cover this next, in distributed filesystems.