CCP1 Day5

LAB

The goal of this lab is to deploy a small Todo app on microk8s. The app consists of three components:

  • Redis DB
  • Node.js REST API
  • Node.js-based frontend

All images can be found on Docker Hub. The first step is a deployment on ZHAW's Cloud Lab. For how to run your own microk8s instance, check out the end of this post.
Run this shell script to fetch and apply all needed YAML files (run at your own risk):

wget "https://blog.kitetrail.net/CCP1_Day5/launch_todo_app.sh"

wget "

Start Kubernetes

user@host:~$ microk8s.start

Check status:

ubuntu@kube:~$ microk8s.kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
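The pods below reach Redis and the API by service name, so the DNS addon must be enabled (see also the command list at the end of this post):

microk8s enable dns
microk8s status    #dns should be listed under "enabled"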

Create Services

api-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    component: api
  name: api-svc
spec:
  ports:
  - port: 8081
    targetPort: 8081
    name: api
  selector:
    app: todo
    component: api
  type: ClusterIP

redis-svc.yaml looks the same as api-svc.yaml, just with a different name, port number and selector.
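It is not reproduced in full here; a minimal sketch, assuming Redis's default port 6379 and the labels from redis-pod.yaml below:

apiVersion: v1
kind: Service
metadata:
  labels:
    component: redis
  name: redis-svc
spec:
  ports:
  - port: 6379
    targetPort: 6379
    name: redis
  selector:
    app: todo
    component: redis
  type: ClusterIP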

frontend-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    component: frontend
  name: frontend-svc
spec:
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
    name: frontend
  selector:
    app: todo
    component: frontend
  type: NodePort

The nodePort field exposes the service on port 30080 of every node, making it reachable from outside the cluster.
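Once the frontend pod from the next step is running, the app can be checked from outside the cluster via any node IP (replace 192.168.1.145 with your node's address):

curl http://192.168.1.145:30080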

Start services:

user@host:~$ microk8s.kubectl create -f redis-svc.yaml
user@host:~$ microk8s.kubectl create -f api-svc.yaml
user@host:~$ microk8s.kubectl create -f frontend-svc.yaml
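Check that the services exist and that frontend-svc got its NodePort:

user@host:~$ microk8s.kubectl get services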

Create Pods

redis-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    component: redis
    app: todo
spec:
  containers:
  - name: redis
    image: redis:3.2.10-alpine
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: 100m
    args:
    - redis-server
    - --requirepass ccp2
    - --appendonly yes

api-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: api
  labels:
    component: api
    app: todo
spec:
  containers:
  - name: api
    image: icclabcna/ccp2-k8s-todo-api
    ports:
    - containerPort: 8081
    resources:
      limits:
        cpu: 100m
    env:
    - name: REDIS_ENDPOINT
      value: redis-svc
    - name: REDIS_PWD
      value: ccp2

frontend-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    component: frontend
    app: todo
spec:
  containers:
  - name: frontend
    image: icclabcna/ccp2-k8s-todo-frontend
    ports:
    - containerPort: 8080
    resources:
      limits:
        cpu: 100m
    env:
    - name: API_ENDPOINT_URL
      value: http://api-svc.default.svc.cluster.local:8081

Start pods:

user@host:~$ microk8s.kubectl create -f redis-pod.yaml
user@host:~$ microk8s.kubectl create -f api-pod.yaml
user@host:~$ microk8s.kubectl create -f frontend-pod.yaml
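Verify that all three pods reach the Running state:

user@host:~$ microk8s.kubectl get pods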

Deployments

Use Deployments instead of bare Pods to run multiple replicas of a Pod; the Deployment controller also recreates pods that die. The example below is the frontend deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: todo
    component: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: todo
      component: frontend
  template:
    metadata:
      labels:
        app: todo
        component: frontend
    spec:
      containers:
      - name: frontend
        image: icclabcna/ccp2-k8s-todo-frontend
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
        env:
        - name: API_ENDPOINT_URL
          value: http://api-svc.default.svc.cluster.local:8081
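Apply it like the other manifests (the filename frontend-deploy.yaml is an assumption; the API and Redis deployments seen later in the kubectl top output follow the same pattern with their own image, port and environment):

user@host:~$ microk8s.kubectl create -f frontend-deploy.yaml
user@host:~$ microk8s.kubectl get deployments
user@host:~$ microk8s.kubectl get replicasets    #the Deployment manages a ReplicaSet with a pod-template hash suffix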

Scaling

Manual

kubectl scale --replicas=3 -f api-deploy.yaml
kubectl scale --replicas=4 deployment/api-deployment    #scale the Deployment, not the ReplicaSet: a direct rs/ scale gets reverted by the Deployment controller, and the real ReplicaSet name carries a hash suffix

Auto Scaling

kubectl autoscale deployment api-deployment --cpu-percent=90 --min=2 --max=10
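The autoscaler relies on the Metrics Server (installed in the setup section further down). To see what it is doing:

kubectl get hpa
kubectl describe hpa api-deployment    #the HPA created by kubectl autoscale is named after the deployment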

Create Load

kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://192.168.1.145:30080/all; done"

Check performance

tom@microk8s:~$ kubectl top pod
NAME                                  CPU(cores)   MEMORY(bytes)
api-deployment-7bc7c4df6b-bp8fw       162m         94Mi
api-deployment-7bc7c4df6b-dch2z       174m         96Mi
api-deployment-7bc7c4df6b-dj569       135m         96Mi
api-deployment-7bc7c4df6b-v86sz       156m         95Mi
frontend-deployment-c7f47897c-2txdt   95m          96Mi
frontend-deployment-c7f47897c-6ww29   87m          94Mi
frontend-deployment-c7f47897c-z7z8t   87m          95Mi
load-generator                        32m          1Mi
load-generator2                       246m         3Mi
redis-deployment-5cc67dfdf4-f4hlw     161m         3Mi
tom@microk8s:~$ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
microk8s   2223m        55%    3168Mi          26%

Good to know

Run a command inside a container:

kubectl exec -it pod_name <command>    #old form without the -- separator; deprecated
kubectl exec --stdin --tty api-deployment-7bc7c4df6b-bp8fw -- /bin/bash
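For example, to poke at Redis directly, exec into the Redis container and use redis-cli (password ccp2 from redis-pod.yaml; substitute your Redis pod's name if it runs via a deployment):

kubectl exec --stdin --tty redis -- redis-cli -a ccp2 ping
kubectl exec --stdin --tty redis -- redis-cli -a ccp2 keys '*'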

Debugging

ubuntu@kube2:~/kube$ kubectl describe service frontend-svc
Name:                     frontend-svc
Namespace:                default
Labels:                   component=frontend
Annotations:              <none>
Selector:                 app=todo,component=frontend
Type:                     NodePort
IP Families:              <none>
IP:                       10.152.183.121
IPs:                      10.152.183.121
Port:                     frontend  8080/TCP
TargetPort:               8080/TCP
NodePort:                 frontend  30080/TCP
Endpoints:                10.1.2.70:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Check logs of container:

ubuntu@k8s:~$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
redis-deployment-5cc67dfdf4-bg7jt      1/1     Running   0          11m
api-deployment-cd78847f5-8kgxs         1/1     Running   0          10m
api-deployment-cd78847f5-s6b5p         1/1     Running   0          10m
frontend-deployment-589cffbfbf-hzh45   1/1     Running   0          10m
frontend-deployment-589cffbfbf-5wq8s   1/1     Running   0          10m
ubuntu@k8s:~$ kubectl logs api-deployment-cd78847f5-8kgxs    #add -p/--previous to see the logs of a previously crashed container instance

Command list

Command                               Description
microk8s.start | microk8s.stop        start and stop microk8s
microk8s status                       show enabled/disabled addons and node status
microk8s enable dns                   enable the integrated DNS server of microk8s (needed to address services by name)
microk8s.kubectl cluster-info         display cluster information
kubectl describe service [name]       shows name, namespace, labels, IP and ports of the service
kubectl describe pod [name]           shows name, namespace, labels, IP, ports, all containers in the pod, events, …
kubectl describe deployment [name]    shows name, namespace, labels, pod template, events, …
kubectl top node|pod                  shows resource consumption of nodes and pods

Setup your own Microk8s instance

I’m running Ubuntu 20.04.3 LTS on ESXi 7.0. The second node runs on a Hetzner.com VM and connects to my server through a WireGuard tunnel.

snap install microk8s --classic
microk8s enable dns
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml   #Metrics Server is needed for autoscaler
sudo usermod -a -G microk8s tom    #to be able to run kubectl without sudo
sudo chown -f -R tom ~/.kube       #to be able to run kubectl without sudo
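The new group membership only takes effect in a new session; log out and back in, or start a shell with the group active:

newgrp microk8s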

Cluster over Wireguard Tunnel

The WireGuard endpoint is an OpenBSD machine, so we need some rules for pf.

/etc/pf.conf:

vpn = "wg0"
table <netvpn> { 10.0.0.0/24 10.0.1.0/24 }
table <vxlantep> { 10.0.1.3 }
microk8s = "{4789,16443,19001,25000}"
pass in quick on { $vpn } proto tcp from <netvpn> to any port $microk8s
pass in quick on { $vpn } proto udp from <vxlantep> to port vxlan
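Check the syntax and reload the ruleset after editing:

pfctl -nf /etc/pf.conf    #parse only, don't load
pfctl -f /etc/pf.conf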

Add Node to Cluster

# On Master
microk8s add-node

# On new Node
microk8s join 192.168.1.145:25000/de4eec4f0d0ad82dec2eba6811c8a93a/de2e14f9ea71
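Back on the master, check that the new node shows up:

microk8s kubectl get nodes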