Spring Boot — Continuous Deployment on Kubernetes With ArgoCD and GitHub Actions

Deployment made simple

These days DevOps, GitOps, and Continuous Deployment are hot topics. It often looks like magic, but most parts of it are actually quite simple, and everybody should adopt it. Automated pipelines give us safety and save a lot of time. All the tools we use in this article are free.

In this article, we're going to create a very basic pipeline using GitHub Actions that is triggered by every push to the master branch. It will run the tests, bump the version, and build the project. Normally we would run the tests on the pull request open event so that no untested code can be merged into master, but for the sake of simplicity I'll omit that here.

We're going to set up ArgoCD in our Kubernetes cluster (I'll use the one provided by Docker Desktop). ArgoCD will monitor our deployment GitHub repository and deploy every change into the cluster, using DockerHub as the source of the images.

Delivery Pipeline

First, we have to create two repositories on GitHub. The names are up to you.

I'm going to call them continuous-delivery-application and continuous-delivery-manifests, respectively.

The first one is a Spring Boot project with a REST endpoint and a unit test. It can easily be generated with Spring Initializr.

After unzipping the project, open it in your favorite IDE (for example IntelliJ).

Let's create a REST endpoint that returns a list of User objects. We're going to use Kotlin coroutines for the reactive endpoint.
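
Here is a minimal sketch of what this could look like; the package name, the User fields, and the class names are placeholders of my choosing, and it assumes the spring-boot-starter-webflux and Kotlin coroutines dependencies from Spring Initializr:

package com.example.demo

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.stereotype.Service
import org.springframework.web.reactive.function.server.ServerResponse
import org.springframework.web.reactive.function.server.bodyAndAwait
import org.springframework.web.reactive.function.server.coRouter

// Simple payload returned by the endpoint (fields are placeholders)
data class User(val id: Int, val name: String)

@Service
class UserService {
    // Emits two hard-coded users as a cold Flow
    fun getUsers(): Flow<User> = flowOf(User(1, "John"), User(2, "Jane"))
}

@Configuration
class UserRouter {
    // Coroutine-based functional router for GET /api/v1/user
    @Bean
    fun routes(userService: UserService) = coRouter {
        GET("/api/v1/user") {
            ServerResponse.ok().bodyAndAwait(userService.getUsers())
        }
    }
}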

The router emits two users and serializes them into JSON format. You can test the endpoint by calling the following URL:

http://localhost:8080/api/v1/user

Then add a simple unit test for the service:
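
A sketch of such a test against the hypothetical UserService above, using JUnit 5 and kotlinx-coroutines:

package com.example.demo

import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class UserServiceTest {

    private val userService = UserService()

    @Test
    fun `getUsers emits two users`() = runBlocking {
        // Collect the Flow and verify both users are emitted
        val users = userService.getUsers().toList()
        assertEquals(2, users.size)
        assertEquals("John", users.first().name)
    }
}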

Now we can add the first CI step to our project. Let's create a new directory in the root folder of the project.

The name must be .github, and in this folder we have to create another directory called workflows.

This is the mandatory naming convention for Actions, defined by GitHub.

In the workflows folder, we create a new file called push-to-master.yml. This is our deployment description manifest.
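
Here is a minimal sketch of what push-to-master.yml could contain; the job and step names are my assumptions, reused in the later snippets:

name: push-to-master

on:
  push:
    branches:
      - master

permissions:
  contents: write

jobs:
  run_unit_test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: corretto
          java-version: '17'
      - name: Run unit tests
        run: ./gradlew test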

Every push to the master branch will trigger the workflow. The job runs the unit tests. The write permission is not necessary at this moment, but it will be in the next job.

This job runs on Ubuntu, and in the first step it checks out the project from the repository.

After this, it installs JDK 17 and finally calls Gradle's test command to execute the unit test we defined earlier.

Before pushing the project, make gradlew executable using the following command:

git update-index --chmod=+x gradlew

As we're going to set up semantic-release, which uses Conventional Commits, let's start every commit message with 'feat: ' followed by the message itself. I'll explain why later.

After the project has been pushed to GitHub, we can check the pipeline on the repository's Actions page.

Successful unit test

Next, we're going to add the semantic release plugin. This will automatically update the project version.

SemRel uses the Conventional Commits tags to determine the next version. We use feat, which bumps the minor version, and fix for a patch update.

There are many other tags; please check the documentation.

First of all, we have to create a gradle.properties file in the root folder and add the following line:

version=0.0.1

And update the version in the build.gradle.kts file to:

version = project.findProperty("version")!!

SemRel is a NodeJS plugin, so we have to create another file called package.json and add the necessary semantic release plugins to it.
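
The exact plugin set is my assumption, but one plausible way to wire it up is to let @semantic-release/exec rewrite gradle.properties with the next version and @semantic-release/git commit it back along with the changelog:

{
  "name": "continuous-delivery-application",
  "private": true,
  "devDependencies": {
    "semantic-release": "^19.0.2",
    "@semantic-release/changelog": "^6.0.1",
    "@semantic-release/exec": "^6.0.3",
    "@semantic-release/git": "^10.0.1"
  },
  "release": {
    "branches": ["master"],
    "plugins": [
      "@semantic-release/commit-analyzer",
      "@semantic-release/release-notes-generator",
      "@semantic-release/changelog",
      ["@semantic-release/exec", {
        "prepareCmd": "echo version=${nextRelease.version} > gradle.properties"
      }],
      ["@semantic-release/git", {
        "assets": ["gradle.properties", "CHANGELOG.md"]
      }]
    ]
  }
}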

The plugins will update the version in the gradle.properties file and commit it back to master, together with the automatically generated changelog.

Before we add the new job to the workflow, let's run the npm install command.

This will generate the package-lock.json file.

Extend the push-to-master.yml manifest with the new job.
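
A sketch of the new job, continuing the workflow from above (the job name is again my assumption):

  semantic_release:
    needs: run_unit_test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
      - run: npm ci
      - name: Run semantic-release
        run: npx semantic-release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}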

This job checks out the master branch and runs the semantic-release step. The git plugin needs write access to our repository.

Actions automatically injects a token into the workflow, so we can grab it and add it as an environment variable using the secrets.GITHUB_TOKEN template variable.

The needs attribute ensures that the run_unit_test job finishes before this job starts. Without it, the two would run concurrently.

Push the modified files and let the workflow run (don't forget to add the fix/feat and colon prefix to the commit message). When the pipeline has finished, we can pull the master branch and check the gradle.properties file and the changelog.md as well.

Now we have two jobs running
Automatic changelog generation

We use Jib to build a Docker container from the application.
Put the plugin and the configuration into the build.gradle.kts file.
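
A sketch of the Jib configuration; the plugin version, account name, main class, and environment variable names are placeholders to adapt:

plugins {
    // ... existing Spring Boot and Kotlin plugins ...
    id("com.google.cloud.tools.jib") version "3.2.1"
}

jib {
    from {
        image = "amazoncorretto:17"
    }
    to {
        // "myaccount" is a placeholder for your DockerHub account name
        image = "myaccount/continuous-delivery-application"
        tags = setOf(project.version.toString())
        auth {
            username = System.getenv("DOCKERHUB_USERNAME")
            password = System.getenv("DOCKERHUB_PASSWORD")
        }
    }
    container {
        // Kotlin generates the main class with a Kt suffix
        mainClass = "com.example.demo.DemoApplicationKt"
        ports = listOf("8080", "9000")
        jvmFlags = listOf("-XX:MaxRAMPercentage=75.0")
    }
}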

We use the Amazon Corretto Java 17 base image. For the image name prefix, use your own account name.

To push the image, we have to provide the DockerHub credentials. We will inject them as environment variables using GitHub Actions secrets. The image tag gets its value from the project version, which is calculated by the semantic release plugin.

The container exposes port 8080 for traffic and port 9000 as the management port for the liveness and readiness probes. The mainClass must be the fully qualified name of the class containing the main function; in the case of Kotlin, we have to append Kt to it, as Kotlin generates the main class under this name. The jvmFlags help the JVM make better use of the container's memory.

Next, we have to modify the application.yml or application.properties file to enable graceful shutdown and health probes on the management port. This will be necessary on Kubernetes.
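
The relevant settings in application.yml might look like this:

server:
  shutdown: graceful

management:
  server:
    port: 9000
  endpoint:
    health:
      probes:
        enabled: true
  endpoints:
    web:
      exposure:
        include: health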

We can put the release job into the workflow file:
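
Under the same assumptions as before, it might look like this; checking out master again picks up the version bump committed by semantic-release:

  release:
    needs: semantic_release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: master
      - uses: actions/setup-java@v3
        with:
          distribution: corretto
          java-version: '17'
      - name: Build and push image with Jib
        run: ./gradlew jib
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}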

Release step

As you can see, it's similar to the test job. Jib will create and push the image to DockerHub, but as I said, we have to provide our credentials. You can do that on the Settings tab.

Select Secrets/Actions from the left menu and use the New repository secret button.

Once that's done, we can push the changes. When the pipeline finishes, the correctly versioned image should be on DockerHub.

The second project holds the Kustomize manifests for the Kubernetes deployment. Kustomize is a native configuration management tool for Kubernetes. Its purpose is similar to Helm's, but it is template-free and kubectl contains it by default. It can group a bunch of resources and deploy them together.

Kustomize defines base templates and environment patches. Let's create two folders in this project's root. The first one is called base and the second is overlays.

We're going to create a simple deployment for the application and a NodePort service for accessibility and load balancing.

Add deployment.yml to the base directory.
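
A sketch of the manifest, assuming the image name from the Jib configuration; the replica count and resource numbers are my choice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: continuous-delivery-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: continuous-delivery-application
  template:
    metadata:
      labels:
        app: continuous-delivery-application
    spec:
      containers:
        - name: continuous-delivery-application
          image: myaccount/continuous-delivery-application:0.0.1
          ports:
            - containerPort: 8080
            - containerPort: 9000
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 9000
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 9000
          resources:
            requests:
              memory: 256Mi
              cpu: 250m
            limits:
              memory: 512Mi
              cpu: 500m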

Deployment

This will create a Pod from the image. We also set the health probes and the resource limits.

We create node-port.yaml with the following content, so the application will be reachable by other Pods within the cluster at this address; it also binds it to localhost on port 30001. In a real application you should create a ClusterIP and an Ingress controller instead.
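
A sketch under the same naming assumptions:

apiVersion: v1
kind: Service
metadata:
  name: continuous-delivery-application
spec:
  type: NodePort
  selector:
    app: continuous-delivery-application
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001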

NodePort

Kustomize uses kustomization.yaml for its operations. We add the resources and also the base image. We're going to modify the newTag parameter from our pipeline; this way we can update the image version on every master push.
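
Something like this, with the image name matching the deployment:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - node-port.yaml
images:
  - name: myaccount/continuous-delivery-application
    newTag: 0.0.1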

base/kustomization.yaml

Next, create a new directory called production under the overlays folder and add another kustomization.yaml file inside it. This will be the environment.
We won't use any patches right now, but we add a common label for both manifests and link the base kustomization in this file.
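
For example (the label key and value are my choice):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app.kubernetes.io/part-of: continuous-delivery
resources:
  - ../../base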

overlays/production/kustomization.yaml

Now we can push it to the second repository.

After the repository has been pushed, go back to the pipeline file and define the last job.
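
A sketch of the job; the repository owner, the PAT secret name, and the commit identity are placeholders (kustomize is preinstalled on the GitHub-hosted Ubuntu runners):

  update_manifests:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: master
      - name: Read the released version
        run: echo "VERSION=$(grep '^version=' gradle.properties | cut -d'=' -f2)" >> $GITHUB_ENV
      - uses: actions/checkout@v3
        with:
          # placeholder owner; PAT is the repository secret created in the next section
          repository: myaccount/continuous-delivery-manifests
          token: ${{ secrets.PAT }}
          path: manifests
      - name: Update image tag and push
        run: |
          cd manifests/base
          kustomize edit set image myaccount/continuous-delivery-application:${VERSION}
          cd ..
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "chore: update image to ${VERSION}"
          git push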

First, it checks out the application project's master branch and reads the exact version from the gradle.properties file. We set it as an environment variable; GitHub Actions picks up variables appended to the file referenced by $GITHUB_ENV.

In the following step, it checks out the second project. The pipeline needs our personal access token, because by default Actions only has permission for the current repository. I'll show how to create one in the next section.

In the last step, we use the kustomize command to update the image version, then commit and push the change back to the repository.

After a push (feat/fix), once the pipeline has finished, the project version should be the same in both projects.

All steps are done

We can generate a PAT under the profile settings (not the project settings). Select Developer Settings from the menu and click on Personal Access Tokens. You can generate a new token using the button.

As we need to push to this repository from the pipeline, select the repo checkbox and add a note.

Click on the generate button at the bottom of the page.
Copy and save the token somewhere, as you cannot retrieve it again. Then create a new repository secret as we did before, using this value; the name must be the same one we used in the pipeline step (PAT).

Setup ArgoCD

I assume you have a Kubernetes cluster installed (Docker Desktop with Kubernetes enabled is the easiest way to achieve this).

To install ArgoCD, run the following commands. The first one creates a new namespace and the second deploys Argo.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

I'm not going to create an Ingress controller now, so just use the following command to forward the Argo UI from the cluster to your local machine.

kubectl port-forward svc/argocd-server -n argocd 8011:443

Go to https://localhost:8011 and log in. The default username is admin.
To retrieve the default password, use the following command in the terminal:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Before we create the application, we have to add our repository to Argo. Let's click on the settings (cog) button and choose the repositories menu item.
I'm going to choose the connect repo using SSH option, but you can also connect to the repo via HTTPS or an app.

Add the name, the repository URL, and your private SSH key, then click on the connect button.

Connect Repository

Go back to the home page and click on the new app button.

Fill in the application name field; you can use any string there. The project is the default one.

Select automatic as the sync policy and add the git repository's URL. The path is overlays/production.

Next, we have to set the cluster URL, which is the local cluster's address. In the case of Docker Desktop, it's https://kubernetes.default.svc, and we're going to deploy the application into the default namespace.

Click on the create button.

On the home page, you will see the application. Wait until it becomes healthy.

When the application is up and running, open http://localhost:30001/api/v1/user and check the result.

By clicking on the application's title, we can check the state of the deployment.

Application's state

Now let's change the names in the application project and push them back to the master branch. After the build has finished, ArgoCD will pick up the changes from the repository and deploy the new application version.

Sometimes it takes up to 5 minutes depending on the build time, so be patient (the default sync interval is 3 minutes).

Deploy the new version automatically

There are many more steps we could add to this pipeline, like static code analysis, uploading test coverage to Codecov, and so on.

The source code is available on GitHub:
