
Just Enough Kubernetes for JavaScript Developers

Kubernetes (k8s) is a platform for deploying, scaling, and managing containers. Kubernetes may be daunting to pick up in the beginning, and you have probably already found that out the hard way. I learn the most by doing, and in this tutorial, I set up a Kubernetes cluster that is very close to something you would run in production.

The project consists of a frontend application and a backend that connects to Redis, which is accessible only from within the cluster. Both the frontend and the backend enforce HTTPS connections and have SSL certificates issued by Let's Encrypt that renew automatically. Additionally, the backend connects to a managed Postgres database. I use a private (and free) GitLab container registry.

I try to keep the costs of running personal projects relatively low, and I often use DigitalOcean. In this tutorial, I use DigitalOcean for its managed Kubernetes service and Postgres database. Using this referral link to sign up gets you $50 in credit, which is enough to run your cluster, load balancer, and database for about a month without paying anything. Full disclosure: I get $25 if you spend $25 on top of the free credits.

If you want to try another platform, then consider signing up for Google Cloud Platform. GCP has a generous $300 free trial and solid support for Kubernetes with a comprehensive dashboard.

Note on software: I use macOS, but the majority of commands in this tutorial should work the same on Linux. Software changes, and the process of installing it differs between platforms. Always try to use the most up-to-date version, and in case of any problems take a look at the appropriate documentation: kubectl, gcloud, doctl, helm.

Table of Contents

Install kubectl
Install doctl
Cluster Setup
Backend Service
Nginx Ingress
Enable HTTPS
Frontend Service
Redis
Postgres
Liveness and Readiness Probes
Wrap Up

Install kubectl

The prerequisite is to have kubectl installed locally. Despite some backward compatibility, it is best to use a version that corresponds to the version of Kubernetes running in your cluster.

At the time of writing, the latest Kubernetes version available on DigitalOcean is v1.14.4, but before you install one on your machine, make sure to update the version in the following code snippet.

Example of how to install kubectl v1.14.4:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
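
You can verify the installation and the client version:

kubectl version --client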

Install doctl

Along with kubectl for managing the Kubernetes cluster, you use additional, vendor-specific tools that give you easy access to platform features and help configure a connection to the cluster. For example, Google Cloud Platform has a command-line tool called gcloud. DigitalOcean also has one; it's called doctl.

Get doctl using brew:

brew install doctl
doctl auth init

Cluster Setup

You can create a new cluster from DigitalOcean Dashboard. Go to Kubernetes and then click Create Kubernetes cluster. Select the latest version of Kubernetes. Choose the region closest to your users or just the closest to you.

Nodes are the machines on which Kubernetes allocates instances of your applications (pods with running containers). Node selection is essential not only from the performance perspective but also as a fail-safe mechanism, since the failure of one node won't take your application down. My personal preference is to go with flexible nodes with 2 GB of memory and 2 vCPUs. In general, having 3 nodes is recommended, but you can also go with just 2 nodes to keep your cost low, as this is only a tutorial and not a critical application.
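
If you are not sure which version, size, and region slugs to use, doctl can list the available options (check doctl kubernetes options --help if these subcommands differ in your doctl version):

doctl kubernetes options versions
doctl kubernetes options regions
doctl kubernetes options sizes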

Create a new cluster using doctl:

doctl kubernetes cluster create <CLUSTER_NAME> --count <NODES_COUNT> --size <DROPLET_TYPE> --region <REGION>
doctl kubernetes cluster create k8s-tutorial --count 2 --size s-2vcpu-2gb --region fra1

If you used doctl to create the cluster, it has already downloaded the cluster configuration and set the kubectl context, so you can skip the next step.

Use doctl to download and merge the cluster configuration:

doctl kubernetes cluster kubeconfig save <CLUSTER_NAME>
doctl kubernetes cluster kubeconfig save k8s-tutorial

This command also changes the current kubectl context.

List available kubectl contexts:

kubectl config get-contexts

There's an asterisk next to the current context. You can make sure you are connected by checking the nodes available in the cluster:

kubectl get nodes

You should now see the nodes you specified while creating the cluster.
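
The output should look roughly like this (node names are generated when the node pool is created, so yours will differ):

NAME                  STATUS   ROLES    AGE   VERSION
k8s-tutorial-node-1   Ready    <none>   2m    v1.14.4
k8s-tutorial-node-2   Ready    <none>   2m    v1.14.4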

Backend Service

I start by setting up the API part of the project. The code is ready on GitHub. You can clone the repository and check out a particular commit, or try to go on your own and only review the changes as the tutorial progresses.

git clone https://github.com/MichalZalecki/kubernetes-for-developers k8s-tutorial
cd k8s-tutorial
git checkout ecced3a

Our API is a very typical example of an Express application. The API service exposes a /ping endpoint that responds with "pong". By default, it starts on port 3000. The provided Dockerfile is also nothing fancy. It installs the required dependencies and starts the server.
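
For reference, the core of such a server might look like this (a minimal sketch, not necessarily the exact code from the repository):

const express = require("express");

const app = express();

// Health-check style endpoint used throughout this tutorial
app.get("/ping", (req, res) => res.send("pong"));

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Server started on port ${port}`));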

Now try building and running the API application locally. Check whether http://localhost:3000/ping responds with "pong".

cd apps/api
docker build -t k8s-tutorial/api:v1 .
docker run --rm -ti -p 3000:3000 --name k8s-tutorial-api k8s-tutorial/api:v1
curl http://localhost:3000/ping

Stop the running container (Ctrl+C) before moving on.

Now we have to think about storing the API service image in a registry. Go to GitLab and sign in. Create a new project. I called mine k8s-tutorial. Make sure your project is private, as in a real-world project you wouldn't let strangers pull your images.

Login to GitLab registry and push the previously created image:

docker login registry.gitlab.com
docker tag k8s-tutorial/api:v1 registry.gitlab.com/michalzalecki/k8s-tutorial/api:v1
docker push registry.gitlab.com/michalzalecki/k8s-tutorial/api:v1

Now we are set to start working on deploying the application.

git checkout 09509af

Under deploy/api.yaml you can find the configuration of a Kubernetes Deployment object.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.gitlab.com/michalzalecki/k8s-tutorial/api:v1
        ports:
        - containerPort: 3000

The configuration specifies the name of the deployment, the number of instances we would like to run at the same time, the container name, the image path, and the container port.

Apply the configuration:

kubectl apply -f deploy

This will create a new Deployment. Run this command each time I mention applying changes to selected resources.

Check whether the deployment was created and inspect the pods:

kubectl get deployments
kubectl get pods
kubectl describe pods

I can see that my pods are failing to pull the image. It's because the registry is private. I have to authorize my cluster to pull images from a private repository on GitLab.

Create a regcred secret:

kubectl create secret docker-registry regcred --docker-server="registry.gitlab.com" --docker-username="MichalZalecki" --docker-password="MY_PASSWORD" --docker-email="MY_EMAIL"
kubectl get secrets

There's a safer way to create the secret that doesn't require typing the password into the command line, but I couldn't get it to work with GitLab. Maybe you will have more luck. Anyway, let's move on.
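
For reference, the approach from the Kubernetes docs reuses the credentials saved by docker login in ~/.docker/config.json (note that on macOS Docker often stores credentials in the keychain instead of that file, which may be why it fails):

kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson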

git checkout 2f2f2de

Now that the regcred secret is set, I have to instruct Kubernetes to use it to pull images.

spec:
  containers:
  - name: api
    image: registry.gitlab.com/michalzalecki/kubernetes-for-developers/api:v1
    ports:
    - containerPort: 3000
  imagePullSecrets:
  - name: regcred

Apply the changes to the API deployment.

After Kubernetes applies the changes, you can check the log of a Pod, and you should see that the server has started on port 3000 (the name of your pod will have a different suffix).

kubectl logs POD_NAME
kubectl logs api-deployment-c76d7c7d5-fv4sf

kubectl exec POD_NAME curl http://localhost:3000/ping
kubectl exec api-deployment-c76d7c7d5-fv4sf curl http://localhost:3000/ping

Pods are running, but I cannot reach them over the internet just yet. A Service in Kubernetes is an object that exposes a set of Pods as a single network interface. This way, all instances of the API application are visible as a single service to the outside world.

There are different types of Services available for us to use. In my setups, I use an Ingress that acts as an Nginx reverse proxy, manages external access to the services in the cluster, and routes the traffic by domain name. The Service that sits between a set of Pods and the Ingress in my setup is the default ClusterIP. ClusterIP makes the application addressable by a private IP that is reachable only by other objects in the cluster.

git checkout 99a42e5

At the beginning of deploy/api.yaml I define the Service, which takes the container's port 3000 and exposes it on port 80. The file in which the Service is defined doesn't matter. The Service knows which Pods it should redirect the traffic to by the selector. Keeping the Deployment and Service definitions in a single file is just a convention that I like, as it makes it easier for me to keep labels in sync.

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: api

Kubernetes DNS is an internal DNS that resolves a service name to a service IP, so you can refer to a service over HTTP using its name as the domain name. You can check that Kubernetes DNS is running:

kubectl get services kube-dns --namespace=kube-system

Create a new pod to check the resolution of service names. You can do it using nslookup or by calling curl from the newly created pod.

kubectl run curl --image=radial/busyboxplus:curl -i --tty

nslookup api-service
curl http://api-service/ping

kubectl delete deployment curl

Remove the curl deployment after you make sure that the service name was properly resolved. An alternative to Kubernetes DNS would be to use environment variables (try printenv in the curl Pod), but this is error-prone as it depends on the order in which Services/Pods were created.
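
For illustration, printenv inside the curl pod shows entries following the Kubernetes SERVICE_HOST/SERVICE_PORT naming convention, roughly like this (the IP is made up):

API_SERVICE_SERVICE_HOST=10.245.12.34
API_SERVICE_SERVICE_PORT=80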

Nginx Ingress

Start by installing Nginx Ingress in your cluster. The following commands also create a new Load Balancer, which you can find in the DigitalOcean dashboard under Networking > Load Balancers. A Load Balancer costs extra, but you need only one; right now it's $10/mo on DigitalOcean.

Installing Nginx Ingress in the cluster:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/provider/cloud-generic.yaml
kubectl get svc --namespace=ingress-nginx

Wait for Load Balancer to obtain an external IP address. You can now point your domain and subdomains to the new IP address. You can check the propagation on whatsmydns.net.

In the meantime, take a look at the configuration file: apps-ingress.yaml.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - host: api.mydevops.ovh
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 80

You can instruct the Ingress on how to resolve domains to particular services. For now, the only one we have is the API service. Apply the changes.

After the DNS records have propagated, execute the following command on your local machine to see whether everything works.

curl http://api.mydevops.ovh/ping

Congratulations! You made your first service available to the outside world!

Enable HTTPS

It's time to secure the connection with SSL. I'm going to use Let's Encrypt via cert-manager to obtain a free SSL certificate. cert-manager automatically renews certificates when they are about to expire.

git checkout 569e9c

First, install Helm on your machine. It's needed to install cert-manager.

brew install kubernetes-helm

If you need a specific helm version (here 2.10.0 for macOS) you can download it directly using curl.

curl -O https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-darwin-amd64.tar.gz
tar -zxvf helm-v2.10.0-darwin-amd64.tar.gz
mv darwin-amd64/helm /usr/local/bin/helm

Create a service account bound to the cluster-admin role for Tiller, which is Helm's server-side component.

kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Install Tiller:

helm init --service-account tiller

After you install Tiller, verify that it's running.

kubectl get pods --namespace kube-system | grep tiller-deploy-

Now, follow the instructions from the cert-manager docs.

# Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install --name cert-manager --namespace cert-manager --version v0.8.1 jetstack/cert-manager

There are a few moving parts here, and you can test them using a staging certificate issuer. Check out the new deploy/letsencrypt-issuer.yaml configuration file, which defines the staging issuer.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: <YOUR_EMAIL>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

Before applying changes, I have to add annotations for Ingress to use a staging issuer.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - api.mydevops.ovh
    secretName: letsencrypt-staging
  rules:
  - host: api.mydevops.ovh
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 80

This time, the order in which you apply changes is important due to the Ingress dependency on letsencrypt-staging secret.

kubectl apply -f letsencrypt-issuer.yaml
kubectl apply -f apps-ingress.yaml

Check the details about Ingress and the certificate:

kubectl describe ingress
kubectl describe certificate letsencrypt-staging
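
Keep in mind that the staging issuer hands out certificates from an untrusted test CA, so browsers and curl will reject them by default. To check the staging endpoint anyway, you can skip certificate verification:

curl -k https://api.mydevops.ovh/ping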

Let's configure the production issuer.

git checkout 59c1baf

The production issuer configuration differs only in the name and the server URL.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <YOUR_EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}

Reference the production issuer in the Ingress configuration.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - api.mydevops.ovh
    secretName: letsencrypt-prod
  rules:
  - host: api.mydevops.ovh
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 80

Apply changes in the right order:

kubectl apply -f letsencrypt-issuer.yaml
kubectl apply -f apps-ingress.yaml

Check the result:

kubectl describe ingress
kubectl describe certificate letsencrypt-prod

curl https://api.mydevops.ovh/ping

Congratulations! You can now serve content to your users securely!

Frontend Service

With the current setup, adding more applications to the cluster is very straightforward.

git checkout 701e2ce

Start with building and pushing the image.

docker build -t registry.gitlab.com/michalzalecki/k8s-tutorial/app:v1 .
docker push registry.gitlab.com/michalzalecki/k8s-tutorial/app:v1

The Service and Deployment configuration is analogous to the API application's.

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: app

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: registry.gitlab.com/michalzalecki/k8s-tutorial/app:v1
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred

I have to add the new service to the Ingress configuration.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - mydevops.ovh
    - api.mydevops.ovh
    secretName: letsencrypt-prod
  rules:
  - host: mydevops.ovh
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 80
  - host: api.mydevops.ovh
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 80

Apply changes and check the result.

curl https://mydevops.ovh

Redis

Spinning up a database (even an in-memory store) in a container might not be the best idea. First of all, it's too easy to lose data due to silly mistakes. Moreover, it's easier to go about things when your cluster is stateless. That said, if your use case isn't critical, you might want to save some bucks and pass on managed solutions. The cool thing is that Redis makes for a valid example of a service available only from within the cluster.

git checkout cd3f724

The API service has been changed to use Redis to implement rate limiting. Build and push a new image to the registry.

docker build -t registry.gitlab.com/michalzalecki/k8s-tutorial/api:v2 .
docker push registry.gitlab.com/michalzalecki/k8s-tutorial/api:v2
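
For context, a rate-limiting middleware backed by Redis might look roughly like this (a minimal sketch using the node redis client; the repository's actual implementation and limits may differ):

const redis = require("redis");

const client = redis.createClient(process.env.REDIS_URL);

// Allow at most 10 requests per token within a 60 second window
function rateLimit(req, res, next) {
  const key = `rate:${req.headers.authorization}`;
  client.incr(key, (err, count) => {
    if (err) return next(err);
    // Start the 60 second window on the first request for this key
    if (count === 1) client.expire(key, 60);
    if (count > 10) {
      return res.status(429).json({ error: "API limit reached (10req/60sec)" });
    }
    next();
  });
}

// Attach it to the Express app before the route handlers:
// app.use(rateLimit);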

The Redis Service configuration is similar to the previously defined frontend and backend services. The Redis Deployment uses the Redis image available on Docker Hub.

apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  ports:
  - port: 6379
  selector:
    app: redis

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.5
        ports:
        - containerPort: 6379

Set the URL that the API service can use to connect to Redis. The URL takes advantage of Kubernetes DNS service name resolution.

spec:
  containers:
  - name: api
    image: registry.gitlab.com/michalzalecki/kubernetes-for-developers/api:v2
    ports:
    - containerPort: 3000
    env:
    - name: REDIS_URL
      value: redis://redis-service:6379
  imagePullSecrets:
  - name: regcred

Apply changes and check the result.

curl https://api.mydevops.ovh/countries -H "authorization:Bearer my_token123"
curl https://api.mydevops.ovh/countries -H "authorization:Bearer my_token123"
{"error":"API limit reached (10req/60sec)"}

Rate limiting should now work.

Postgres

I use a managed Postgres solution from DigitalOcean. In the DigitalOcean dashboard, go to Databases > PostgreSQL and create a cluster. Select the newest PostgreSQL version and a region matching your Kubernetes cluster. Restrict inbound connections to your Kubernetes cluster; you can select it from the list while configuring your database cluster. Optionally, allow connections from your local machine to make setup and testing easier.

git checkout 0732c86

To connect to the database, I use the private network option and the connection string I can copy from the DigitalOcean dashboard. Create a pg_uri text file and paste the connection string there. The connection string ends with ?sslmode=require; to use it with the node-postgres package, change it to ?ssl=true. Now, create a secret named pgcred from the file.

kubectl create secret generic pgcred --from-file=uri=./pg_uri
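
On the application side, node-postgres can consume this connection string directly (a minimal sketch; the table and query are made up for illustration):

const { Pool } = require("pg");

// PG_URI is injected from the pgcred secret (see the deployment spec below)
const pool = new Pool({ connectionString: process.env.PG_URI });

// Hypothetical query; the actual schema comes from the repository's migration
async function listCountries() {
  const { rows } = await pool.query("SELECT name FROM countries");
  return rows;
}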

Before you move forward, build and push a new image for the API service.

docker build -t registry.gitlab.com/michalzalecki/k8s-tutorial/api:v3 .
docker push registry.gitlab.com/michalzalecki/k8s-tutorial/api:v3

You can use the newly created pgcred secret to create an environment variable and pass it to the API service container.

spec:
  containers:
  - name: api
    image: registry.gitlab.com/michalzalecki/kubernetes-for-developers/api:v3
    ports:
    - containerPort: 3000
    env:
    - name: REDIS_URI
      value: redis://redis-service:6379
    - name: PG_URI
      valueFrom:
        secretKeyRef:
          name: pgcred
          key: uri

Before you deploy, run a migration.

cd apps/api
PG_URI=<PASTE_PUBLIC_PG_URI> node migrate.js

At this point, you can remove your local machine from the trusted sources list for connecting to the database. Apply changes.

Test whether countries are still listed (they come from the database).

curl https://api.mydevops.ovh/countries -H "authorization:Bearer my_token123"

Awesome! Now, let's get the most out of Kubernetes and make sure your cluster can perform zero-downtime deployments and heal itself after the application crashes.

Liveness and Readiness Probes

Using a liveness probe, the kubelet (node agent) knows when to restart a container. Depending on your application, you may want to restart the container for different reasons, like fatal errors, deadlocks, or connection issues.

Using a readiness probe, the kubelet knows when the container is ready to start handling incoming requests. Some of the reasons why your server might not be able to start instantly are connecting to the database, running migrations, or establishing connections to third-party services.

git checkout 37d78c5

I have changed the code of the API server to emphasise the problem and make it easy to spot an improperly configured container. Now, the API server starts with a whopping 60-second delay, and calling the /terminate endpoint kills the server but not the Node.js process. Build a new image and push it to the repository.

docker build -t registry.gitlab.com/michalzalecki/k8s-tutorial/api:v4 .
docker push registry.gitlab.com/michalzalecki/k8s-tutorial/api:v4
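
For context, the behaviour described above might be implemented roughly like this (a sketch, not necessarily the repository's exact code):

const express = require("express");

const app = express();

app.get("/ping", (req, res) => res.send("pong"));

let server;

// Closing the HTTP server without exiting the process leaves the pod "running"
// from Kubernetes' point of view, even though it no longer serves traffic.
app.get("/terminate", (req, res) => {
  res.send("terminating");
  server.close();
});

// Artificial 60 second delay before the server starts accepting connections
setTimeout(() => {
  server = app.listen(3000, () => console.log("Server started on port 3000"));
}, 60 * 1000);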

In a separate tab, run watch curl; it allows you to see when the server is unable to accept traffic.

watch -n 1 curl https://api.mydevops.ovh/ping

Now, update the image version in api.yaml and apply changes.

Within a few seconds of the deployment, you will notice that the server starts to respond with 502 Bad Gateway. The new pods receive traffic before they are ready to handle it. Once the API server starts, call the /terminate endpoint twice, still watching the /ping responses in a separate tab.

curl https://api.mydevops.ovh/terminate

Now you should be getting 502 Bad Gateway errors again. The API server is unable to handle incoming traffic and hangs in this invalid state. To fix this issue, and the one during the deployment, you have to configure liveness and readiness probes. There are different types of probes, but an HTTP probe works well with web servers.

git checkout 63cc46d

Both probes can use the /ping endpoint. In a more complex backend service, you can incorporate additional health-check metrics.

- name: api
  image: registry.gitlab.com/michalzalecki/kubernetes-for-developers/api:v4
  readinessProbe:
    initialDelaySeconds: 40
    periodSeconds: 10
    failureThreshold: 5
    httpGet:
      path: /ping
      port: 3000
  livenessProbe:
    initialDelaySeconds: 80
    httpGet:
      path: /ping
      port: 3000

Apply changes and keep watching /ping responses in a separate tab.

watch kubectl get pods

By listing pods, you can observe how Kubernetes terminates old pods once a new instance is ready to accept traffic. Calling /terminate also won't take down the API service, that is, as long as there is any other instance able to respond to /ping.

Wrap Up

I hope this set of experiments with Kubernetes gave you a better understanding of how different components can work together to create a more or less complete setup for your web project. The snippets in this tutorial act as a cheat sheet for myself, and maybe you will also find them useful.

Feel free to comment if some steps didn't work for you. I'm always happy to make improvements and update the article.

Photo by Maximilian Weisbecker on Unsplash.