In this demo, we set up a Node.js Express server in an Amazon EKS Kubernetes cluster and serve it via CloudFront.
Kubernetes is a well-established, open-source container orchestration framework for deploying, scaling, and administering containerized applications. While it may be overkill for the simplest of applications, Kubernetes provides an impressive standard of uptime and reliability, e.g. by enabling smooth, rolling updates to applications.
Elastic Kubernetes Service (EKS) is a managed AWS service which takes some of the notorious complexity out of running a Kubernetes deployment. CloudFront, in turn, can be used to cache the responses coming out of the deployment, keeping computation costs at bay.
In this demo, we'll set up a simple Express server in a Kubernetes cluster and serve it via CloudFront. Before we start, our local machine should have the following tooling installed: Docker, the AWS CLI, eksctl, and kubectl.
We also assume we have control over a domain name from which the application can be served. The demo application and the associated configuration files can be found here: https://github.com/alexcolb/eks-nodejs-demo
Preparing the Docker Image
Kubernetes is a container orchestration framework, which means that we need to create and host a Docker image for our app. We'll start by creating a public repository on Docker Hub (e.g. my-docker-username/my-app), then building and pushing the image:
$ docker login
$ docker build -t my-docker-username/my-app .
$ docker push my-docker-username/my-app
The Docker image we created is now publicly available on Docker Hub: https://hub.docker.com/repository/docker/my-docker-username/my-app
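For reference, an image like this can be produced from a minimal Dockerfile along the following lines. This is a sketch, assuming the server's entry point is server.js and that it listens on port 8080; the repository's actual Dockerfile may differ.

```dockerfile
# Minimal image for a Node.js Express app (illustrative sketch).
# Assumptions: entry point is server.js, app listens on port 8080.
FROM node:18-alpine
WORKDIR /usr/src/app

# Install production dependencies first to benefit from layer caching.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and document the listening port.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```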
Creating the Kubernetes ConfigMap YAML configurations
We next create, familiarise ourselves with, and modify at least the TODO-annotated values in the following three YAML files, which tell Kubernetes what we want our cluster to look like.
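To give an idea of what two of these files contain, here is a hedged sketch of api.deployment.yaml and api.service.yaml. The file names come from the repository, but the image name, replica count, labels, and ports below are assumptions; consult the repository for the authoritative versions.

```yaml
# Sketch of api.deployment.yaml (values are illustrative assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                      # two pods, as observed later with kubectl
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-docker-username/my-app   # the image pushed to Docker Hub
          ports:
            - containerPort: 8080
---
# Sketch of api.service.yaml: exposes the pods behind a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
```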
This is a longer, less standard kind of ConfigMap, so just copy it from the provided git repository. However, make sure to edit the domain in --domain-filter to match your application's domain. The significance of this file will become clearer later on.
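For orientation, the part that needs editing typically sits in the container arguments of the external-dns manifest, along these lines (a fragment sketch; --source, --provider and --domain-filter are standard external-dns flags, but the surrounding file in the repository is longer):

```yaml
# Fragment of the external-dns container spec (sketch).
# Only --domain-filter should need changing for this demo.
args:
  - --source=service                            # watch Kubernetes Services
  - --provider=aws                              # publish records to Route 53
  - --domain-filter=external-dns.my-domain.com  # TODO: edit to your domain
```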
Creating our EKS cluster
Before continuing, we need our AWS credentials configured. Then, to create the cluster, we run the following command and wait a while for it to complete:
$ eksctl create cluster --name my-cluster --region eu-west-1 --nodegroup-name linux-nodes --node-type t2.small --nodes 1
We can then switch to our namespace and apply two of our YAML configurations as follows. Among other things, this will deploy our Docker image onto the cluster.
$ kubectl config set-context --current --namespace=kube-system
$ kubectl apply -f api.deployment.yaml
$ kubectl apply -f api.service.yaml
$ kubectl get pods --watch
The last command lets us watch our two pods being created and, hopefully, ending up in the Running state. Should you need to debug a failing pod, these commands will prove useful:
$ kubectl describe pod/my-pod-name
$ kubectl logs pod/my-pod-name
If you observe an “exec format error” and your machine runs on Apple Silicon, you may have to build your Docker images elsewhere, or build them for the cluster's architecture (e.g. with docker build --platform linux/amd64).
To check our progress so far, we can visit the ephemeral URI that our cluster has opened up to the world. In other words, once DNS records have had some time to propagate, we can observe our deployed API in the browser! To that end, let's use this command to get our EXTERNAL-IP and PORT:
$ kubectl get service
Configuring DNS access to the cluster
The problem we now face is that this URI will change every time our service is updated, so we can't use it as-is for inbound traffic. Instead, we'll leverage external-dns to make our service discoverable via public DNS.
First, we create a new hosted zone in AWS Route 53 named external-dns.my-domain.com, taking note of its ID.
We then set up a service account authorising our cluster to publish the ephemeral URI of our application to DNS. First, we create the following JSON policy in AWS IAM, taking note of its ARN:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/MY-HOSTED-ZONE-ID"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
We're careful not to give Kubernetes access to the production-facing hosted zone. Even if we decide to use a subdomain of our actual domain, we should delegate the external-dns traffic to a standalone hosted zone.
We can then create the service account and verify its attachment to our cluster:
$ eksctl utils associate-iam-oidc-provider --region=eu-west-1 --cluster=my-cluster --approve
$ eksctl create iamserviceaccount --name external-dns --namespace kube-system --cluster my-cluster --attach-policy-arn <our-policy-arn> --approve
$ kubectl describe sa external-dns
Finally, we deploy the external-dns Kubernetes pod, which in turn will dynamically update the Route 53 records to point at our ephemeral URL:
$ kubectl apply -f api.external-dns.yaml
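For external-dns to know which hostname to publish, the Service is typically annotated with external-dns's hostname annotation, along these lines (a sketch; the hostname value is an assumption matching the record used elsewhere in this demo):

```yaml
# Fragment of the Service metadata (sketch): tells external-dns which
# DNS record to create for the load balancer's ephemeral address.
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-service.external-dns.my-domain.com
```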
We can verify this by visiting our new static address, e.g. my-service.external-dns.my-domain.com.
Pointing CloudFront to EKS
We now want to put our cluster behind a cache, so that API responses won't have to be re-computed every time they're requested. In AWS CloudFront, we create a new distribution. The Origin domain should be whatever record external-dns saved in Route 53, e.g. my-service.external-dns.my-domain.com, and the value of HTTP port should be whatever was configured in the ConfigMaps, e.g. 8080. We'll also need to create a custom Cache Policy with Query Strings set to All, which tells CloudFront to include HTTP query strings when deciding how to cache our endpoints.
Once deployed, we can access the application via the URI listed under Distribution domain name in the CloudFront distribution!