Rancher K3s: Kubernetes on Proxmox Containers

by Garrett Mills, April 2022

Using LXD containers and K3s to spin up a K8s cluster with the NGINX Ingress Controller

For a long time now, I’ve self-hosted most of my online services like calendar, contacts, email, cloud file storage, my website, &c. The current iteration of my setup relies on a series of Ansible playbooks that install all the various applications and configure them for use.

This has been really stable and has worked quite well for me. I deploy the applications to a set of LXD containers (read: lightweight Linux VMs) on Proxmox, a free and open-source hypervisor with an excellent management interface.

Recently, however, I’ve been re-learning Docker and the benefits of deploying applications as containers. Some of the big ones are:

  • Guaranteed, reproducible environments. The application ships with its dependencies, ready to run.
  • Portability. Assuming your environment supports the container runtime, it supports the application.
  • Infrastructure-as-code. Much like Ansible playbooks, Docker lends itself well to managing the container environment with code, which can be tracked and versioned.

So, I’ve decided to embark on the journey of transitioning my bare-Linux Ansible playbooks to a set of Kubernetes deployments.

However, there are still some things I like about Proxmox that I’m not willing to give up. For one, the ability to virtualize physical machines (like my router or access-point management portal) that can’t easily be containerized. Being able to migrate “physical” OS installs between servers when I need to do maintenance on the hosts is super useful.

So, I’ll be installing Kubernetes on Proxmox, and I want to do it on LXD containers.

I’m going to deploy a Kubernetes cluster using Rancher’s K3s distribution on top of LXD containers.

K3s is a lightweight, production-grade Kubernetes distribution that simplifies the setup process by coming pre-configured with DNS, networking, and other tools out of the box. K3s also makes it fairly painless to join new workers to the cluster. This, combined with the relatively small scale of my deployment, makes it a pretty easy choice.

LXD containers, on the other hand, may seem like a bit of an odd choice. Nearly every other article I found on deploying K8s on Proxmox did so using full-fat virtual machines rather than containers. That’s certainly the lower-friction route, since it’s procedurally the same as installing on physical hosts. I went with LXD containers for two main reasons:

  1. LXD containers are fast. Like, almost as fast as bare metal. Because LXD containers are virtualized at the kernel level, they’re much lighter-weight than traditional VMs. As such, they boot nearly instantly, run at nearly the same speed as the host kernel, and are much easier to reconfigure with more RAM/disk space/CPU cores on the fly.
  2. LXD containers are smaller. Because the containers run on the host’s kernel, they only need to contain a much smaller set of packages. This makes them require much less disk space out of the box (and, therefore, makes them easier to migrate).

So, to start out, I’m going to create two containers: one control node and one worker node.

I’m going to assume that you (1) have a Proxmox server up and running, (2) have a container template available on Proxmox, and (3) have some kind of NFS file server.

This last one is important since we’ll be giving our containers a relatively small amount of disk space. So, any volumes needed by Kubernetes pods can be created as NFS mounts.
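
For instance, once the cluster is up, a pod volume backed by the NFS server could be declared with a PersistentVolume along these lines (a rough sketch; the server address and export path are placeholders for your own):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5            # your NFS server's address (placeholder)
    path: /exports/k8s/example  # an export on that server (placeholder)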

You’ll also want to set up the kubectl and helm tools on your local machine.
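
If you don’t have those yet, the upstream install methods work fine. At the time of writing, something like this does the trick on a Linux x86-64 machine (check the docs for your platform):

# kubectl: grab the latest stable release binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# helm: upstream installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash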

Because our LXD containers need to be able to run Docker containers themselves, we need to do a bit of extra configuration out of the box to give them the proper permissions.

The process for setting up the two containers is pretty much identical, so I’m only going to walk through it once.

In the Proxmox UI, click “Create CT.” Make sure you check the box to show advanced settings.

Fill in the details of the container. Make sure to uncheck the “Unprivileged container” checkbox. On the next screen, select your template of choice. I’m using a Rocky Linux 8 image.

I elected to give each container a root disk size of 16 GiB, which is more than enough for the OS and K3s to run, as long as we don’t put any volumes on the disk itself.

The CPU and memory values are really up to whatever you have available on the host and the workloads you intend to run on your K8s cluster. For mine, I gave each container 4 vCPU cores and 4 GiB of RAM.

For the network configuration, be sure to set a static IP address for each node. Additionally, if you use a specific internal DNS server (which I highly recommend!), you should configure that on the next page.

Finally, on the last page, make sure to uncheck the “Start after created” checkbox and then click Finish. Proxmox will create the container.
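
(As an aside: if you’d rather script this than click through the UI, Proxmox’s pct tool can create the container from the shell. A rough equivalent of the settings above might look like the following; the container ID, template filename, storage, bridge, and addresses are all examples, not prescriptions:)

pct create 200 local:vztmpl/rockylinux-8-default_20210929_amd64.tar.xz \
  --hostname control-k8s \
  --cores 4 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=10.0.0.10/24,gw=10.0.0.1 \
  --unprivileged 0 \
  --start 0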

Now, we need to tweak a few things under the hood to give our containers the proper permissions. You’ll need to SSH into your Proxmox host as the root user to run these commands.

In the /etc/pve/lxc directory, you’ll find files called XXX.conf, where XXX are the ID numbers of the containers we just created. Using your text editor of choice, edit the files for the containers we created to add the following lines:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"

Note: it’s important that the container is stopped when you try to edit the file, otherwise Proxmox’s network filesystem will prevent you from saving it.

In order, these options (1) disable AppArmor, (2) allow the container’s cgroup to access all devices, (3) prevent dropping any capabilities for the container, and (4) mount /proc and /sys as read-write in the container.

Next, we need to publish the kernel boot configuration into the container. Normally, this isn’t needed by the container since it runs using the host’s kernel, but the Kubelet uses the configuration to determine various settings for the runtime, so we need to copy it into the container. To do this, first start the container using the Proxmox web UI, then run the following command on the Proxmox host:

pct push <container id> /boot/config-$(uname -r) /boot/config-$(uname -r)

Finally, in each of the containers, we need to make sure that /dev/kmsg exists. Kubelet uses this for some logging functions, and it doesn’t exist in the containers by default. For our purposes, we’ll just alias it to /dev/console. In each container, create the file /usr/local/bin/conf-kmsg.sh with the following contents:

#!/bin/sh -e
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi
mount --make-rshared /

This script symlinks /dev/console as /dev/kmsg if the latter doesn’t exist. Finally, we’ll configure it to run when the container starts using a SystemD one-shot service. Create the file /etc/systemd/system/conf-kmsg.service with the following contents:

[Unit]
Description=Make sure /dev/kmsg exists

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/conf-kmsg.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target

Finally, enable the service by running the following:

chmod +x /usr/local/bin/conf-kmsg.sh
systemctl daemon-reload
systemctl enable --now conf-kmsg
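
As a quick sanity check that the shim is in place, you can run the following inside the container:

systemctl status conf-kmsg   # should report "active (exited)"
ls -l /dev/kmsg              # should show a symlink to /dev/console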

Now that we’ve got the containers up and running, we’ll set up Rancher K3s on them. Luckily, Rancher intentionally makes this pretty easy.

Starting on the control node, we’ll run the following command to set up K3s:

curl -fsL https://get.k3s.io | sh -s - --disable traefik --node-name control.k8s

A few notes here:

  • K3s ships with a Traefik ingress controller by default. This works fine, but I prefer to use the industry-standard NGINX ingress controller instead, so we’ll set that up manually.
  • I’ve specified the node name manually using the --node-name flag. This may not be necessary, but I’ve had problems in the past with K3s doing a reverse-lookup of the hostname from the IP address, resulting in different node names between cluster restarts. Specifying the name explicitly avoids that issue.

If all goes well, the install script will download K3s, register it as a systemd service, and start it.

Once this is done, you can copy /etc/rancher/k3s/k3s.yaml to ~/.kube/config on your local machine, and you should be able to see your new (admittedly single-node) cluster using kubectl get nodes!

Note: you may need to adjust the cluster address in the config file from 127.0.0.1 to the actual IP/domain name of your control node.
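
Concretely, that boils down to something like this on the local machine (the control node’s IP here is just an example):

# copy the kubeconfig from the control node
scp root@10.0.0.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# point it at the control node instead of localhost (GNU sed syntax)
sed -i 's/127.0.0.1/10.0.0.10/' ~/.kube/config

kubectl get nodes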

Now, we need to join our worker node to the K3s cluster. This is also quite simple, but you’ll need the cluster token in order to join the node.

You can find this by running the following command on the control node:

cat /var/lib/rancher/k3s/server/node-token

Now, on the worker node, run the following command to set up K3s and join the existing cluster:

curl -fsL https://get.k3s.io | K3S_URL=https://<control node ip>:6443 K3S_TOKEN=<cluster token> sh -s - --node-name worker-1.k8s

Again, note that we specified the node name explicitly. Once this process finishes, you should see the worker node appear in kubectl get nodes:
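
The output will look roughly like this (the ages and versions are just illustrative):

NAME           STATUS   ROLES                  AGE   VERSION
control.k8s    Ready    control-plane,master   10m   v1.22.7+k3s1
worker-1.k8s   Ready    <none>                 1m    v1.22.7+k3s1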

You can repeat this process for any additional worker nodes you want to join to the cluster in the future.

At this point, we have a functional Kubernetes cluster. However, because we disabled Traefik, it has no ingress controller. So, let’s set that up now.

I used the ingress-nginx/ingress-nginx Helm chart to set up the NGINX ingress controller. To do this, we’ll add the repo, load the repo’s metadata, then install the chart:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true

Here, the controller.publishService.enabled setting tells the controller to publish the ingress service IP addresses to the ingress resources.

After the chart installs, you should see the various resources appear in the kubectl get all output. (Note that it may take a couple of minutes for the controller to come online and assign IP addresses to the load balancer.)
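
One way to keep an eye on it (the label selector below assumes the chart’s standard labels):

# watch the LoadBalancer service's EXTERNAL-IP change from <pending>
kubectl get svc -l app.kubernetes.io/name=ingress-nginx --watch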

We can test that the controller is up and running by navigating to any of the nodes’ addresses in a web browser.
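
Equivalently, a quick curl from the terminal does the same check (substitute one of your own node IPs; this one is just an example):

curl -i http://10.0.0.10/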

Either way, we expect to see a 404, since we haven’t configured any services to ingress through NGINX yet. The important thing is that we got a page served by NGINX.
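
Later on, exposing a service through the controller will just be a matter of creating an Ingress resource. As a rough sketch, with placeholder service name, port, and hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  ingressClassName: nginx        # the class created by the ingress-nginx chart
  rules:
    - host: app.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app   # placeholder service
                port:
                  number: 80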

Now, we have a fully-functional Rancher K3s Kubernetes cluster, and the NGINX Ingress Controller configured and ready to use.

I’ve found this cluster to be very easy to maintain and scale. If you need to add more nodes, just spin up another LXD container (possibly on another physical host, possibly not) and repeat the section above to join the worker to the cluster.

I’m planning on doing a few more write-ups chronicling my journey to learn and transition to Kubernetes, so stay tuned for more like this. The next step in this process is to configure cert-manager to automatically generate Let’s Encrypt SSL certificates and deploy a simple application to our cluster.

This post originally appeared on my blog, here.
