Kubernetes — Different Ways of Deploying a Sample RESTful API Application | by Piotr Ostrowski | May, 2022

To keep your code versatile

Container ship by dendoktoor on Pixabay

Let’s assume there is an API ready to be deployed, and one decides to use the industry-leading orchestration software: Kubernetes.

Originating at Google, Kubernetes (Greek for helmsman or pilot) serves exactly that function: commanding, in this case, a fleet of containers. It makes scaling huge applications possible and saves one from the common pitfalls of providing services to the masses.

Though it is very logical at first and easy to get started with, it can be a bit tricky to enable access to the cluster without port forwarding. Personally, I’m not a big fan of port forwarding. Thus, even when creating local applications, I stand by building them so they are ready to be deployed to production without any breaking changes.

The core concepts are easy to grasp. But since running the “real”, or, more formally, the production cluster requires a range of components, most of the time one gets started with development environment alternatives like minikube, microk8s, or the built-in Docker Desktop Kubernetes engine.

Generally, there are several ways of getting traffic into a Kubernetes cluster, mostly depending on where it is to be deployed.

The manifests for each of the options can be found here.

The simplest way is to create a deployment and expose it as a service:

kubectl create deployment [deployment] --image=piotrostr/pong
# --port is required here, as the deployment above does not declare a containerPort
kubectl expose deployment [deployment] --port=[port-container]

A port can then be forwarded from the node:

kubectl port-forward service/[deployment] [port-host]:[port-container]

This could be useful for debugging a single deployment.

There’s also an option to use kubectl proxy and interact directly with the local Kubernetes API, but it’s kind of a hassle compared to the other options below.
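For completeness, a rough sketch of that approach, using the same placeholders as above; the proxy exposes the API server on localhost:8001, and the standard service-proxy path then reaches the service:

# in one terminal: expose the Kubernetes API on localhost:8001
kubectl proxy
# in another terminal: reach the service through the API's service proxy
curl http://localhost:8001/api/v1/namespaces/default/services/[deployment]:[port-container]/proxy/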

This is a good one, as docker-desktop uses vpnkit to expose any load balancers and forward traffic into the cluster.

The manifest here only includes a LoadBalancer service and the deployment itself; no ingress is included.
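As a rough sketch of what such a manifest could contain, assuming the pong container listens on port 8080 (names and ports here are illustrative, not necessarily the exact manifest from the repo):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pong
  template:
    metadata:
      labels:
        app: pong
    spec:
      containers:
        - name: pong
          image: piotrostr/pong
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: pong
spec:
  type: LoadBalancer
  selector:
    app: pong
  ports:
    - port: 80
      targetPort: 8080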

Here’s how to apply it:

kubectl apply -f manifest-docker.yaml

It makes the application ready to be curl‘ed.
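With a sketch like the one above, Docker Desktop exposes the LoadBalancer locally, so something like this should answer (port 80 is my assumption from the sketch):

curl -i http://localhost:80/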

I would say this is the go-to for debugging simple applications; it works like a charm with skaffold dev. More on Skaffold here.

Note: this requires gcloud to be configured with the right project and GKE enabled.

After including an ingress resource like the one sketched below in the manifest, it can be used to provision a cluster on GCP quite seamlessly.
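A minimal sketch of such an ingress, assuming the service is named pong and exposed on port 80 (both are illustrative, not necessarily what the repo’s manifest uses):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong-ingress
spec:
  defaultBackend:
    service:
      name: pong
      port:
        number: 80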

Configure kubectl to use the gcloud context:

gcloud container clusters create-auto [cluster-name] 
gcloud container clusters get-credentials [cluster-name]
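Then the manifest with the ingress can be applied (manifest-gke.yaml is an assumed filename, following the naming of the other manifests in the repo):

kubectl apply -f manifest-gke.yaml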

After applying the YAML, the load balancer will be provisioned by GCP and will forward any traffic into the cluster.

Install the NGINX Ingress Controller with the following command:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace

Then, include the same ingress resource as in number three, adding ingressClassName: nginx under spec (to define which controller to use).
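A sketch of the resulting resource, reusing the assumed names from the GKE example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: pong
      port:
        number: 80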

For minikube, there is an additional step of enabling an ingress, as it doesn’t support ingress out of the box. Here’s how to do that:

minikube addons enable ingress
minikube tunnel

This method gets external traffic into the cluster and its deployments without GKE or EKS (Elastic Kubernetes Service from AWS). The manifest can easily be deployed on a single-node cluster on a virtual machine.

It lets one benefit from advanced Kubernetes features like auto-scaling and auto-healing, without being forced into the AWS/GCP load-balancing services and cluster costs, which can pile up for small applications.

The NGINX ingress load-balances, meaning curl requests are spread across the five pods (change replicas in manifest-nginx.yaml to modify the pod count).
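To see the load balancing in action, a quick loop works (assuming the ingress answers on localhost, as with minikube tunnel or a single-node setup):

for i in $(seq 1 5); do curl -s http://localhost/; done

Each of the five requests may be served by a different pod.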
