Set Up Your Own Kubernetes Cluster with K3s

Photo by Christina @ wocintechchat.com on Unsplash

This tutorial uses two virtual machines running Ubuntu 20.04.1 LTS.

If you need an on-premises Kubernetes cluster, K3s seems to be a nice option because there is just one small binary to install per node.

Please note that I have blanked out all domain names, IP addresses, and so forth for privacy reasons.

On the machine that is going to be the main node, install the K3s binary:

curl -sfL https://get.k3s.io | sh -
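
The installer also registers K3s as a systemd service. As a quick sanity check (assuming a systemd-based distribution such as the Ubuntu machines used here), verify that it is running:

sudo systemctl status k3s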

Get the node token, which is needed in the next step:

cat /var/lib/rancher/k3s/server/node-token

On each machine that is going to be a worker node, run:

curl -sfL https://get.k3s.io | K3S_URL=https://kubernetes01.domain.de:6443 K3S_TOKEN=<the-token-from-the-step-before> sh -
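
On the worker nodes the installer creates a k3s-agent service instead of k3s, so the corresponding check there is:

sudo systemctl status k3s-agent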

Back on the main node, check if all nodes are there:

$ sudo k3s kubectl get node
NAME                     STATUS   ROLES                  AGE   VERSION
kubernetes01.domain.de   Ready    control-plane,master   18m   v1.20.0+k3s2
kubernetes02.domain.de   Ready    <none>                 93s   v1.20.0+k3s2

For a first test, deploy some NGINX instances to the cluster with the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Save this to a file named nginx-deployment.yml and apply it:

kubectl apply -f nginx-deployment.yml

There should now be 10 pods in total, each running an instance of NGINX. Why 10? Just for the fun of it :-) Adapt to your liking.
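
The replica count can also be changed later without editing the manifest; kubectl scale does this directly on the running deployment:

kubectl scale deployment nginx-deployment --replicas=5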

Check where those pods are running with this command:

kubectl get pods -l app=nginx --output=wide

The next step is the first of two to make the “internal” deployment accessible from outside the cluster: a Service that bundles the pods under a single cluster-internal address.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

Save the above manifest to a file, e.g. nginx-service.yml, and apply it:

kubectl apply -f nginx-service.yml

Check if it worked:

$ sudo kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   00.0.0.0        <none>        443/TCP   4h9m
nginx        ClusterIP   00.00.000.000   <none>        80/TCP    13
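
A ClusterIP is only reachable from inside the cluster. As a quick test, the service can be curled by its DNS name from a throwaway pod (a sketch; the curlimages/curl image and the default namespace are assumptions):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl -s http://nginx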

But it needs to be load balanced, right?

Right!

After creating the service, the NGINX instances can be made accessible from outside by creating an ingress based on HAProxy.

The original manifest provided by HAProxy is rather long, so it is not repeated here.

Anyway, it can be installed directly from its source:

$ kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/v1.4/deploy/haproxy-ingress.yaml
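
The controller alone does not route any traffic; an Ingress resource is still needed to map a path to the nginx service. The manifest used here is not shown in the post, but a minimal sketch matching the output below (the name nginx-ingress and the /nginx path are inferred from it; depending on the controller setup, an ingress class annotation may also be needed) could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Save it, e.g. as nginx-ingress.yml, and apply it with kubectl apply -f nginx-ingress.yml.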

Again, check if everything went fine:

$ sudo kubectl get ingress
NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE
nginx-ingress   <none>   *       00.000.0.000   80      2m38s

Additionally, let’s “talk” to one of the pods:

$ curl 10.199.7.120/nginx
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>

The 404 is just fine. All NGINX instances are "empty".

EDIT 2021–02–15

The 404 is NOT fine. Something went wrong here: an empty NGINX should show a welcome page. But this post is outdated anyway.

It is a lot more convenient to control the cluster from one's local terminal without having to SSH into the main node.

Copy the contents of /etc/rancher/k3s/k3s.yaml from the main node and add them to the local ~/.kube/config.

Replace “localhost” with the IP or name of the main K3s node.
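
For example, the file can be fetched over SSH (a sketch; the user name and whether sudo is required depend on your setup, and this overwrites any existing local config, so merge manually if you already have one):

ssh user@kubernetes01.domain.de sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config

Afterwards the server entry should read server: https://kubernetes01.domain.de:6443 instead of pointing at localhost.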

On the local machine, switch to the right context with kubectl config use-context <yourcontext>
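
If the context name is unknown, list the available ones first (K3s names its context default, but the name may differ after merging configs):

kubectl config get-contexts
kubectl config use-context default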

Check that it works, for example by retrieving all pods:

kubectl get pods -o wide
