K3s - Kubernetes


Let’s give the Kubernetes mini distribution, K3s, a try.


Overview

Test Setup

Booting a few VMs on my ESXi host. Each of them has 4 CPUs, 32 GB RAM, and a 100 GB disk.

1 Master Node

  • Master

3 Worker Nodes

  • Worker01
  • Worker02
  • Worker03

All machines are running the latest Debian release, which is version 11.6 at the moment.

Setup Master

curl -sfL https://get.k3s.io | sh -
root@master:~/bin/test_kubernetes# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.25.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
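
The installer follows the stable channel by default. To reproduce exactly this setup later, the release can be pinned through the script's documented environment variables:

# pin the release instead of following the stable channel
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.25.4+k3s1" sh -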

Show Nodes

kubectl get nodes
root@master:~/bin/test_kubernetes# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   70s   v1.25.4+k3s1
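
Adding -o wide shows the node IPs, OS image, and container runtime as well:

kubectl get nodes -o wide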

Show Token

cat /var/lib/rancher/k3s/server/node-token 
K10032f55153f52072a1e41f80f06551078dece476a44217e5a06facdfa6fd0f985::server:a70b4452634b7d2c4f9d33ab8808eb19

On all Worker Nodes

Update /etc/hosts as root

cat << EOF >> /etc/hosts
192.168.100.249 kub11 master-node master
192.168.100.246 kub12 worker1
192.168.100.247 kub13 worker2
192.168.100.248 kub14 worker3
EOF
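
A quick sanity check that the names actually resolve (getent consults /etc/hosts):

getent hosts master worker1 worker2 worker3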

Install the Agent and Attach to the Server

TOKEN="K10032f55153f52072a1e41f80f06551078dece476a44217e5a06facdfa6fd0f985::server:a70b4452634b7d2c4f9d33ab8808eb19"
SERVER="master"
curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER}:6443 K3S_TOKEN=${TOKEN} sh -
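
With K3S_URL set, the installer configures the machine as an agent, so it creates a k3s-agent service instead of k3s. A quick check on each worker:

systemctl status k3s-agent --no-pager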

Check on the Master

kubectl get nodes
root@master:/var/log# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master     Ready    control-plane,master   19m     v1.25.4+k3s1
worker12   Ready    <none>                 3m23s   v1.25.4+k3s1
worker13   Ready    <none>                 41s     v1.25.4+k3s1
worker14   Ready    <none>                 41s     v1.25.4+k3s1
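
The workers report ROLES <none>, which is just a missing label, not a problem. Purely cosmetic, but if you want the column filled in (the label key below is the usual convention, the value is arbitrary):

kubectl label node worker12 worker13 worker14 node-role.kubernetes.io/worker=worker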

Deploying Kubernetes Dashboard

GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml
root@master:~# GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
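
Before requesting a token, it's worth waiting until the deployment has actually come up:

kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard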

Create Admin User

cat << EOF > dashboard.admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create Admin Role Binding

cat << EOF > dashboard.admin-user-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Deploy Admin User

k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Get Bearer Token

k3s kubectl -n kubernetes-dashboard create token admin-user
root@master:~/dashboard# k3s kubectl -n kubernetes-dashboard create token admin-user
eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx ...
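
These tokens expire after an hour by default; for a lab, a longer TTL saves re-issuing them constantly:

k3s kubectl -n kubernetes-dashboard create token admin-user --duration=24h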

Start Dashboard

k3s kubectl proxy
k3s kubectl proxy --address='0.0.0.0'

http://ip-of-master:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

-> not working … kubectl proxy only accepts requests addressed as localhost (even with --address='0.0.0.0', unless --accept-hosts is widened), and the dashboard itself refuses token logins arriving over plain HTTP from anything but localhost.
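
One workaround that needs nothing extra on the master is an SSH tunnel from the workstation, so the browser reaches the proxy as localhost (assuming SSH access to the master). I went the TinyProxy route below instead:

# forward local port 8001 to kubectl proxy on the master
ssh -L 8001:localhost:8001 root@master
# then browse http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/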

Install TinyProxy

apt-get install tinyproxy
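
Debian's default TinyProxy config only allows clients from 127.0.0.1, so the workstation's subnet has to be allowed in /etc/tinyproxy/tinyproxy.conf first (the subnet below is this lab's, adjust to taste):

# /etc/tinyproxy/tinyproxy.conf
Port 8888
Allow 192.168.100.0/24

Then restart it:

systemctl restart tinyproxy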

Run Again

k3s kubectl proxy

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

-> still not working

Access via ClusterIP and TinyProxy

kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-64bcc67c9c-6wbsd   1/1     Running   0          31m
pod/kubernetes-dashboard-66c887f759-dfqv8        1/1     Running   0          15m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes-dashboard        ClusterIP   10.43.4.227    <none>        443/TCP    31m
service/dashboard-metrics-scraper   ClusterIP   10.43.31.184   <none>        8000/TCP   31m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           31m
deployment.apps/kubernetes-dashboard        1/1     1            1           31m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-64bcc67c9c   1         1         1       31m
replicaset.apps/kubernetes-dashboard-66c887f759        1         1         1       15m
replicaset.apps/kubernetes-dashboard-5c8bd6b59         0         0         0       31m

With the browser's HTTP proxy pointed at TinyProxy on the master (default port 8888), open https://10.43.4.227/#/login .. enter the token .. in!
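
The same path can be verified from the shell first; -x sends curl through TinyProxy, and -k accepts the dashboard's self-signed certificate:

curl -k -x http://master:8888 https://10.43.4.227/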

Access Cluster via API

curl http://localhost:8001/api/
root@master:~# curl http://localhost:8001/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.100.249:6443"
    }
  ]
}
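
Everything else in the API is reachable the same way, e.g. the dashboard's pods:

curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/pods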

Install k3d

On macOS, Docker must be running, since k3d runs the K3s nodes as containers.

brew install k3d
k3d cluster create mycluster
user@macos:~> k3d cluster create mycluster
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-mycluster'              
INFO[0000] Created image volume k3d-mycluster-images    
INFO[0000] Starting new tools node...                   
INFO[0001] Creating node 'k3d-mycluster-server-0'       
INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6' 
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.25.3-k3s1' 
INFO[0002] Starting Node 'k3d-mycluster-tools'          
INFO[0007] Creating LoadBalancer 'k3d-mycluster-serverlb' 
INFO[0008] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' 
INFO[0015] Using the k3d-tools node to gather environment information 
INFO[0015] Starting new tools node...                   
INFO[0015] Starting Node 'k3d-mycluster-tools'          
INFO[0016] Starting cluster 'mycluster'                 
INFO[0016] Starting servers...                          
INFO[0016] Starting Node 'k3d-mycluster-server-0'       
INFO[0021] All agents already running.                  
INFO[0021] Starting helpers...                          
INFO[0021] Starting Node 'k3d-mycluster-serverlb'       
INFO[0027] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap... 
INFO[0029] Cluster 'mycluster' created successfully!    
INFO[0029] You can now use it like this:                
kubectl cluster-info

Cluster Info

user@macos:~> kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:63966
CoreDNS is running at https://0.0.0.0:63966/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:63966/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
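
k3d can also mirror the VM layout from above, one server plus three agents, each as a container (the cluster name lab is just an example, so it doesn't clash with mycluster):

k3d cluster create lab --servers 1 --agents 3
# and tear it down again when done
k3d cluster delete lab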
