Using an Ingress

Introduction

At the very beginning of this guide, you may remember that I insisted on setting up a DNS address for our server. Why was that so important?

With a Kubernetes installation, you end up hosting lots of web services. This is a problem, as there is only a single port for HTTPS traffic: 443. How do we host multiple services on a single port?

Well, this is where an Ingress comes in. An Ingress routes traffic to your services based on the hostname and path of each request.

If you’re used to reverse proxies, an Ingress is a reverse proxy. It just happens to be configurable through the Kubernetes API.
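
To make that concrete, here’s a minimal sketch of an Ingress manifest that sends two hostnames, both arriving on the same port, to two different services. The hostnames and service names (helloworld.example.lan, otherapp.example.lan) are placeholders for illustration, not anything we’ve actually deployed.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-host-routing
spec:
  rules:
    - host: helloworld.example.lan   # requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld     # ...are routed to this service
                port:
                  number: 80
    - host: otherapp.example.lan     # while this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: otherapp       # ...goes to a different one
                port:
                  number: 80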

TL;DR

In this article, we will:

  • Create an ingress to serve our web services on port 443
  • Encrypt with a self-signed certificate
  • Use Annotations for IP whitelisting and SSL redirection

Using an Ingress Controller in Rancher

An Ingress controller does not necessarily come with Kubernetes, but k3s ships with the Traefik ingress controller. Traefik is… fine, as an ingress controller, but it’s incredibly complicated to do anything beyond a basic route. This is why we replaced Traefik with nginx.

Let’s set up an ingress now for our helloworld container. Well, in a little bit: before we do that, we need to set up another DNS entry!

Adding DNS Entries to OpenWRT

  • Log into your OpenWRT router and head to Network → Hostnames.

  • Add an entry and call it helloworld... Point it to your Rancher host’s internal IP.

  • Save and apply.

Using Rancher’s Ingress Controller

  • Log into your Rancher management portal. Choose Ingresses in the sidebar and click Create.

You may notice that there is already an existing ingress: the one for our Rancher web console. You can see it by clearing the namespace filters.

  • Set the Ingress name to helloworld. Under Rules, enter helloworld.. as the Request Host, set your path prefix to / (matching all paths), and choose your helloworld deployment on port 80. Press Create. (A YAML sketch of roughly what this creates follows after these steps.)

  • Test that you can now access your service on port 80 (http://.)

You should also be able to access the same service over https (although it won’t automatically redirect yet). The service will default to the Kubernetes Ingress Controller Fake Certificate, a self-signed cert that the nginx ingress falls back on.

Success. In fact, now that we have an ingress active, we can unpublish the NodePort on the helloworld deployment if we so choose.
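
For reference, the ingress Rancher builds from that form corresponds roughly to the manifest below. This is a sketch, not the exact object Rancher creates: helloworld.example.lan and the namespace are placeholders (yours will match your own DNS entry and project namespace), and it assumes the service exposing the helloworld deployment is also named helloworld.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  namespace: default                 # placeholder; use the namespace your deployment lives in
spec:
  ingressClassName: nginx            # the nginx controller we swapped in for Traefik
  rules:
    - host: helloworld.example.lan   # placeholder; use the DNS entry you just created
      http:
        paths:
          - path: /
            pathType: Prefix         # "/" as a prefix matches every path
            backend:
              service:
                name: helloworld     # assumes the service shares the deployment's name
                port:
                  number: 80

The annotations we add in the next sections live under metadata on this same object.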

Whitelisting Traffic

As mentioned, Rancher is using nginx behind the scenes to actually proxy the content. This means that anything nginx can do, we can do.

We may be running services through our ingress that we don’t want exposed to the internet. Our Rancher management ingress is a prime example (you can see that ingress by unchecking the Only User Namespaces filter at the top). Maybe we want to set a whitelist.

Let’s show an example whitelist using the helloworld ingress.

  • Under Ingresses, choose the helloworld ingress and select Edit Config. Add the annotation nginx.ingress.kubernetes.io/whitelist-source-range and, for now, set it to something that blocks everyone but the server itself (for example, 127.0.0.1). Save.

  • Verify that you get a 403 error when visiting your website now (seeing as you just blocked everybody but the server itself).

  • Edit the ingress again and put in a more sensible whitelist. For example, you can put in 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 to restrict traffic to local networks only, or use your own subnet (10.20.10.0/24 for myself) to restrict access to just that subnet. Verify you can access your site again. (A sketch of the resulting annotation follows this list.)
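
In YAML terms, the whole change is one annotation on the helloworld ingress. The fragment below is a sketch showing just the relevant part, using the example local ranges from above; the rest of the ingress stays exactly as it was.

# Only the relevant fragment of the helloworld ingress is shown.
# ingress-nginx answers 403 to any client IP outside these CIDR ranges.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.0/16,10.0.0.0/8,172.16.0.0/12"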

Cool! Now you can proxy sensitive websites and lock down their access.

If you want to whitelist the Rancher ingress (it’s not a bad idea if you’re forwarding ports to the web), that particular ingress needs to be edited in a special way. That will be covered in the Let’s Encrypt article.

Forcing HTTPS Redirect

We can access the self-signed encrypted version of helloworld, but we don’t get redirected to it by default. Let’s fix that.

  • Head back to Ingresses and Edit Config for helloworld. Add an annotation that forces the redirect: nginx.ingress.kubernetes.io/force-ssl-redirect, set to "true". Save.

Now even if we explicitly visit http://helloworld.., we get redirected to https.
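
As a sketch, the annotation looks like this on the ingress (again, only the relevant fragment is shown). The force-ssl-redirect variant is used here because it applies even when the ingress has no tls section of its own and is just falling back on the default certificate; the plain ssl-redirect annotation only kicks in when TLS is configured on the ingress.

# Only the relevant fragment of the helloworld ingress is shown.
# force-ssl-redirect tells nginx to redirect plain HTTP requests to HTTPS,
# even though this ingress defines no tls: section of its own.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"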

Generating Valid Certificates

So far, we have been working entirely with self-signed certificates: certificates that are created and signed within our local installation of Kubernetes, which nothing outside the cluster trusts. This is fine as long as you don’t mind scary warnings everywhere you visit. However, most people would prefer to host services that don’t scare off users. For this we can generate valid certificates with Let’s Encrypt.

What is Let’s Encrypt?

Let’s Encrypt is a certificate authority run by a non-profit organization for the express purpose of validating domains and issuing certificates. Let’s Encrypt allows you to automatically request a certificate for a domain, validates that you do indeed own the domain, and provides a valid certificate for your services.

Let’s Encrypt is big: 2 million certificates a day big. Even better, their certificates are trusted by all major browsers. This makes them a fantastic service for setting up automatically renewing certificates, even in production environments.

Of course, the catch to all this is that to validate your domain, you need a domain to validate against! This is where domain registrars come in. Register yourself a domain; it’s cheap to do. I recommend Cloudflare (as the DNS challenge example uses Cloudflare).

Choose your Own Adventure

Depending on your situation, you will want to use either a DNS challenge or an HTTP challenge. So I wrote a guide for both!

Even if you don’t have a cert-manager compatible DNS provider, you can still use Cloudflare. You just need to point your domain’s nameservers at Cloudflare from your registrar. This is free to do.

  • HTTP Challenges are handy if you don’t have a compatible DNS provider and you are comfortable forwarding ports 80/443 to the internet (proceed with caution). You also need a static, public IP and a public domain name for this to work.