Building and Deploying a Simple App to Kubernetes Using “werf”

By Konstantin Nezhbert, April 2022

Learn how to use the open source tool


This article looks at building a Docker image of a minimalistic application and deploying it to a Kubernetes cluster using the Open Source tool called werf. I will also show how to deliver subsequent changes to your app’s code and to the infrastructure where it runs.

I will use a small echo server based on a shell script as the example application. The server returns the string Hello, werfer! in response to a request to the /ping endpoint.

NB. You can find all the files of this minimalistic application and download them from this repository.

The Kubernetes cluster we will use in this article is based on minikube, so you don’t need any special hardware to follow the instructions: a regular desktop or laptop will do.

For those new to this CLI utility: werf implements the entire application delivery workflow in Kubernetes. It uses Git as the single source of application code and configuration:

  • Each commit reflects a particular application state;
  • werf synchronizes it with the container registry (by building the missing layers of the final Docker images) and with the application running in Kubernetes (by re-deploying the resources that have changed);
  • werf also cleans up obsolete artifacts in the container registry using a unique algorithm based on the Git history and user-defined policies.

The distinctive feature of werf is its integration of many well-known tools for developers and DevOps/SRE engineers, such as Git, Docker, the container registry, the CI system, Helm, and Kubernetes. These components are combined to provide an opinionated CI/CD workflow for delivering your apps to Kubernetes. Bringing them together minimizes the effort needed to implement CI/CD.

Let’s see it in action.

Before starting, install the latest stable version of werf (v1.2 from the stable release channel) on your system (refer to the official documentation).

All the commands and actions given in this article apply to Linux (tested on Ubuntu 20.04.3). While the commands are generally the same on other systems such as Windows and macOS, slight differences may exist. If you struggle with any specific instructions on your OS, please check the links at the end of this article.

First, we have to create the application itself. Let’s create a working directory (in our case, the app directory in the user’s home directory):

mkdir ~/app

Create a hello.sh script in that directory with the following contents:

hello.sh
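A minimal sketch of such an echo server (the exact listing may differ; it assumes the ncat utility, which the Dockerfile below installs):

#!/bin/sh
# Respond to every incoming HTTP request with a fixed greeting.
RESPONSE="Hello, werfer!"
while true; do
  printf "HTTP/1.1 200 OK\n\n${RESPONSE}\n" | ncat -lp 8000
done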

Initialize a new Git repository in the directory and commit the first change (the script we have just created):

cd ~/app
git init
git add .
git commit -m initial

Since our application will be built and run in Docker, let’s also create a Dockerfile with the instructions for building the application image:

Dockerfile
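A plausible minimal Dockerfile for this script (the base image version and the package name are assumptions of this sketch):

FROM alpine:3.14
WORKDIR /app
# ncat is required by the echo server script.
RUN apk add --no-cache nmap-ncat
# Add the application files (hello.sh) to the image.
COPY . .
RUN chmod +x /app/hello.sh
EXPOSE 8000
CMD ["/app/hello.sh"]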

For werf to use this Dockerfile during the build, we need to create a werf.yaml configuration file in the project root that references it:

werf.yaml
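A minimal werf.yaml for a single Dockerfile-built image could look like this (the image name app is referenced later in the Helm templates):

project: werf-first-app
configVersion: 1
---
# Build the image named "app" from the Dockerfile in the project root.
image: app
dockerfile: Dockerfile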

A repository with the files created so far is available in this directory of the werf/first-steps-example repository.

Now we are ready to build our application. Note that you need to commit all the changes (the Dockerfile, etc.) to the project repository before building, i.e. run the following commands first:

git add .
git commit -m FIRST

Start the build using the command below:

werf build

You should see the following output:

werf build output

To check that the build was successful, run the application with:

werf run app --docker-options="-ti --rm -p 8000:8000" -- /app/hello.sh

Let’s take a closer look at this command. The --docker-options option specifies a set of Docker-related parameters, while the command to execute in the container comes at the end, preceded by two hyphens.

Let’s check that everything is up and running as intended. To do that, go to http://127.0.0.1:8000/ping in your browser or make the following cURL request in another terminal:

curl http://127.0.0.1:8000/ping

You should see the “Hello, werfer!” greeting. In addition, the following message should appear in the logs of the running container:

GET /ping HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: curl/7.68.0
Accept: */*

Building the app is half the battle (or maybe even a third). After all, you still have to deploy it to production servers. To do that, let’s create a local “production” Kubernetes cluster and configure werf to use it. Here is the list of steps to take:

  • install and run minikube, a minimal Kubernetes distribution (ideal for testing purposes);
  • install the NGINX Ingress Controller, the cluster component responsible for traffic routing;
  • edit the /etc/hosts file to enable cluster access via the application’s domain name;
  • log in to Docker Hub and set up the Secret with the required credentials;
  • deploy the application to Kubernetes.

1. Installing and running minikube

First, install minikube as described in the official documentation. If you already have it installed, make sure your version is the latest one.

Let’s fire up a Kubernetes cluster using minikube:

# Delete the existing minikube cluster (if there is one).
minikube delete
# Start a new minikube cluster.
minikube start --driver=docker

Set the default Kubernetes namespace so that you don’t have to enter it every time you use kubectl (note that here we only configure the default name; we will create the namespace itself later):

kubectl config set-context minikube --namespace=werf-first-app

If you don’t have kubectl installed, there are two ways to get it:

  • Install it manually using the official documentation;
  • Use the kubectl binary that comes with minikube. To do that, run the following commands:
alias kubectl="minikube kubectl --"
echo 'alias kubectl="minikube kubectl --"' >> ~/.bash_aliases

If you choose the second option, the utility will be downloaded and installed the first time you invoke kubectl via the alias above.

Let’s check that kubectl works by listing all the Pods running in the newly created cluster:

kubectl get --all-namespaces pod

A Pod is an ephemeral Kubernetes entity that hosts one or more application containers along with resources shared between those containers.

Running this command should produce output similar to the one below:

kubectl get --all-namespaces pod output

Look closely at the READY and STATUS columns. If all Pods have the Running status and the numbers in the READY column are 1/1 (the number on the left must equal the number on the right), our cluster is ready to use. If you don’t see output similar to the one above, try waiting a little longer and rerun the command (some Pods probably haven’t had time to start yet).

2. Installing the NGINX Ingress Controller

The next step is to install and configure the NGINX Ingress Controller. It will route external HTTP requests into our cluster.

Use the following command to install it:

minikube addons enable ingress

This process can take a while, depending on your PC’s performance. For example, it took my machine about four minutes to install this add-on.

Once the process is complete, you should see the following success message:

The 'ingress' addon is enabled

Wait for the add-on to start and check that it works:

kubectl -n ingress-nginx get pod

You should see output similar to the one below:

kubectl -n ingress-nginx get pod output

The last line is what interests us: the Running status indicates that everything is okay and the controller is running.

3. Editing the hosts file

The last step in setting up the environment is to edit the hosts file so that all requests to the test domain end up in the local cluster.

In our case, we will use the werf-first-app.test address. Run the minikube ip command in the terminal and make sure it outputs a valid IP address (192.168.49.2 in my case). If it doesn’t, go back and reinstall the minikube cluster.

Next, run the following command:

echo "$(minikube ip) werf-first-app.take a look at" | sudo tee -a /and so on/hosts

You can check whether the command was successful by viewing the hosts file. It should now contain a line like this: 192.168.49.2 werf-first-app.test.

Now let’s see if everything works as expected. To do that, send a cURL request to the application endpoint:

curl http://werf-first-app.test/ping

At this point, the NGINX Ingress Controller should return a 404 page, indicating that the endpoint is not yet accessible:

404 page

4. Logging in to Docker Hub

Now we need to set up a repository for the built images. We suggest using a private Docker Hub repository. For convenience, we will use the application name (werf-first-app) as the repository name.

Log in to Docker Hub by running the following command:

docker login
Username: <DOCKER HUB USERNAME>
Password: <DOCKER HUB PASSWORD>

You should see the Login Succeeded message.

5. Creating a Secret for registry access

To use a private registry for storing images, you have to create a Secret with the registry login credentials. Note that the Secret must live in the same namespace as the application.

Therefore, you need to create the namespace for the application first:

kubectl create namespace werf-first-app

You should see a message confirming that the new namespace has been created (namespace/werf-first-app created).

Next, create a Secret named registrysecret:

kubectl create secret docker-registry registrysecret \
  --docker-server='https://index.docker.io/v1/' \
  --docker-username='<DOCKER HUB USERNAME>' \
  --docker-password='<DOCKER HUB PASSWORD>'

If successful, you should see the secret/registrysecret created message. If you made a mistake when creating the Secret, delete it with the kubectl delete secret registrysecret command and recreate it.

Note that the approach described above is the standard way of creating Secrets in Kubernetes.

This concludes the preparation of the environment for deploying the application to the cluster.

We will use the Secret created above to pull application images from the registry, specifying it in the imagePullSecrets field when setting up the Pods.

Before deploying the application, we have to create the Kubernetes manifests that define the resources we need. We will use the Helm chart format for this purpose. Helm charts (or Helm packages) contain all the resource definitions required to run an application or service in a Kubernetes cluster.

We will need three K8s resources for our application. The Deployment is responsible for running the app in containers, while the Ingress and the Service route external and internal traffic in the cluster, respectively.

We end up with the following file structure:

File structure
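Roughly, the project tree should now look like this (manifest file names as used later in the article):

.helm/
  templates/
    deployment.yaml
    ingress.yaml
    service.yaml
.dockerignore
Dockerfile
hello.sh
werf.yaml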

We will put the manifests mentioned above in the templates subdirectory of the hidden .helm directory.

Note: you should add the directory with the manifests to the .dockerignore file to exclude these files from the Docker image build context:

/.helm/

Let’s take a closer look at our resource manifests.

1. Deployment

The Deployment resource creates a set of Pods for running the application. It looks like this:

Deployment
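A sketch of .helm/templates/deployment.yaml consistent with the description below (the registrysecret Secret and port 8000 match the rest of the article; the exact listing may differ):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: werf-first-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: werf-first-app
  template:
    metadata:
      labels:
        app: werf-first-app
    spec:
      # The Secret created earlier for pulling images from the registry.
      imagePullSecrets:
        - name: registrysecret
      containers:
        - name: app
          # werf substitutes the full name of the built image here.
          image: {{ .Values.werf.image.app }}
          command: ["/app/hello.sh"]
          ports:
            - containerPort: 8000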

Here, the .Values.werf.image.app template variable is used to insert the full name of the application’s Docker image. Note that you must use the same component name that was used in werf.yaml (app in our case).

werf automatically inserts the full names of the images to be built, as well as other service values, into the Helm chart values (.Values). You can access them via the werf key.

werf only rebuilds images when the files added to the image change (those used in the Dockerfile COPY/ADD instructions) or when werf.yaml itself is modified. A rebuild causes the image tag to change, which automatically triggers a Deployment update. If these files haven’t changed, the application image and its associated Deployment remain the same, meaning the application’s state in the cluster is up to date.

2. Service

The Service resource allows other applications in the cluster to connect to our application. It looks like this:

Service
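A sketch of .helm/templates/service.yaml (it selects the Pods by the same app label used in the Deployment above):

apiVersion: v1
kind: Service
metadata:
  name: werf-first-app
spec:
  selector:
    app: werf-first-app
  ports:
    - name: http
      port: 8000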

3. Ingress

Unlike the previous resources, the Ingress opens up access to our application from outside the cluster. Its purpose is to redirect traffic for the werf-first-app.test public domain to our Kubernetes Service. It looks like this:
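A sketch of .helm/templates/ingress.yaml under the same assumptions (domain and Service names as used throughout the article):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: werf-first-app
spec:
  rules:
    - host: werf-first-app.test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: werf-first-app
                port:
                  number: 8000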

Deploying the app

Let’s commit our configuration changes (the K8s resources required to deploy the application) to Git:

git add .
git commit -m FIRST

A repository with the files created so far is available in this directory of the werf/first-steps-example repository.

Start the deployment process with the following command:

werf converge --repo <DOCKER HUB USERNAME>/werf-first-app

Let’s see if the process was successful:

werf converge output

Run the cURL request once more:

curl http://werf-first-app.test/ping

You should see the following response:

Hello, werfer!

Congratulations, you have successfully deployed the application to the Kubernetes cluster!

Let’s try modifying our application and see how werf rebuilds and re-deploys it in the cluster.

Scaling

Our web server runs as part of the werf-first-app Deployment. Let’s see how many replicas are running:

Replicas of app
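If you are following along, you can check this yourself by listing the Pods (the werf-first-app namespace was set as the default earlier):

kubectl get pod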

Currently, we have just one running replica (the Pod whose name starts with werf-first-app). Let’s increase the number to four:

kubectl edit deployment werf-first-app

A text editor will open with the contents of the manifest. Find the spec.replicas field and change the number of replicas to four (replicas: 4). Wait a bit and check the number of running app replicas:

4 replicas of app

Note that here we manually increased the number of replicas in the cluster by editing the manifest directly, bypassing Git. Now run the werf converge command:

werf converge --repo <DOCKER HUB USERNAME>/werf-first-app

Check the number of replicas once again:

Number of replicas again

As you can see, the number of running replicas corresponds to the one specified in the manifest stored in Git (we didn’t edit it there). That is because werf has reverted the cluster state back to the one described in the current Git commit. This mechanism is called Giterminism (Git + determinism).

To respect this principle and do everything properly, you need to change the number of replicas in the project files in the repository. So let’s edit the deployment.yaml file and commit the change to the repository:

Deployment
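Assuming the Deployment sketch from earlier, the change boils down to one field in .helm/templates/deployment.yaml:

spec:
  # Was 1; now request four replicas via Git.
  replicas: 4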

Commit the change and redeploy the application using the following command:

werf converge --repo <DOCKER HUB USERNAME>/werf-first-app

Now let’s check the number of replicas again:

4 replicas of app

As you can see, there are four replicas. Let’s decrease their number back to one. To do this, edit the deployment.yaml file, commit the change, and redeploy the application via the werf converge command.

Changing the code

Currently, our application responds with Hello, werfer!. Let’s change the reply and redeploy the updated application to the cluster. Open hello.sh in an editor and replace the existing string with something else (e.g., Say hello one more time!):

The updated hello.sh
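Assuming the script sketch from earlier, the edit amounts to a single line in hello.sh:

RESPONSE="Say hello one more time!"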

Now commit the changes and run werf converge. What do we end up with?

curl http://werf-first-app.test/ping
Say hello one more time!

Congratulations, everything works as expected!

In this article, we built and deployed a basic application to a Kubernetes cluster using werf. I hope it helps you get acquainted with werf and gain some experience deploying applications to K8s.

This article is based on the First steps chapter of the online self-study guide. To keep it as concise as possible, I chose not to dive into the theoretical aspects covered in the full guide, such as Kubernetes templates and manifests, the essential K8s resources for running applications (Deployment, Service, Ingress), werf’s operating modes and Giterminism, the peculiarities of using Helm in werf, etc. You can learn more about them in the aforementioned guide. More detailed instructions, including ones for other operating systems, are also available there.

Any questions and suggestions are welcome in the comments to this article or in the werf_io Telegram chat.

  • werf.io — the official website of the werf utility;
  • Giterminism — about Giterminism, the principle the utility follows;
  • GitHub — the source code repository.
