(Note: All information is from the internet)
A personal note about installing k8s on Raspberry Pi 4
(Photo: a messy setup before getting the cluster case.)
In my setup, the cluster runs on an isolated LAN connected via a fast switch. The cluster can be reached from outside via the Wi-Fi connection.
A) On Master Node
1. Install Docker and change the dockerd cgroup driver to systemd
# curl -sSL get.docker.com | sh && \
sudo usermod -aG docker pi
After installing Docker, update the ExecStart line in /lib/systemd/system/docker.service:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
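After editing the unit file, reload systemd and restart Docker so the new cgroup driver takes effect; docker info should then report "Cgroup Driver: systemd" (standard systemctl/docker commands):
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker info | grep -i cgroup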
2. Add a default route so that kubeadm doesn't complain, because:
- I use an isolated network 10.10.10.0/16 for inter-node communication.
- The connection to the Internet uses the Wi-Fi interface wlan0 on the 192.168.1.0/24 network.
- This is to avoid the certificates being generated with the 192.168.1.x IP and services being bound to wlan0.
# sudo ip route add default via 10.10.10.10
Further connections have to jump through one of the other 10.10.10.x nodes (e.g. via 192.168.1.11 / 10.10.10.11 to reach 10.10.10.10).
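For example, an SSH session from the Wi-Fi side can hop through a worker acting as a jump host (a sketch assuming the default pi user and OpenSSH's -J/ProxyJump option):
$ ssh -J pi@192.168.1.11 pi@10.10.10.10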
3. Run kubeadm reset
kubeadm reset is not necessary if you are doing a fresh installation.
4. Aim to use Flannel as the CNI, so start with:
$ sudo kubeadm init --control-plane-endpoint 10.10.10.10 --pod-network-cidr=10.244.0.0/16
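As a side note, if kubeadm were still to pick up the wlan0 address, the advertised API server IP can be pinned explicitly; --apiserver-advertise-address is a standard kubeadm init flag, shown here only as an alternative to the default-route trick in step (2):
$ sudo kubeadm init --control-plane-endpoint 10.10.10.10 --apiserver-advertise-address 10.10.10.10 --pod-network-cidr=10.244.0.0/16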
5. Remove and recreate the .kube directory:
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
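As a quick sanity check that kubectl can now reach the new control plane (cluster-info is a standard kubectl subcommand):
$ kubectl cluster-info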
6. Next, keep the message printed at the end of the installation, shown below:
---
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 10.10.10.10:6443 --token gavux7.hgj1dk2xpfuulggg \
    --discovery-token-ca-cert-hash sha256:248dea9ecc6691b8d2738fd3571a560e468caadb3ab19a56b494652912373b9e \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.10:6443 --token gavux7.hgj1dk2xpfuulggg \
    --discovery-token-ca-cert-hash sha256:248dea9ecc6691b8d2738fd3571a560e468caadb3ab19a56b494652912373b9e
---
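The bootstrap token in this message expires (24 hours by default). If it does, a fresh worker join command can be regenerated on the master with kubeadm's standard token subcommand:
$ sudo kubeadm token create --print-join-command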
7. Install Flannel
Since we need to connect to the Internet, we need to remove the default route set in step (2). One of the easier ways is to reboot, which restores the normal routing state, before continuing.
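Alternatively, the default route added in step (2) can be deleted directly without a reboot (the inverse of the earlier ip route add):
$ sudo ip route del default via 10.10.10.10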
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check that a new network interface has been created.
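One way to look for it (with the default VXLAN backend, Flannel typically creates a flannel.1 interface; ip -br link is standard iproute2):
$ ip -br link | grep -i flannel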
8. Check the installation
pi@rbpi4-n0:~/system_setup $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rbpi4-n0 Ready master 8m34s v1.16.2
pi@rbpi4-n0:~/system_setup $
9. Set up the worker nodes as described in section (B).
10. Set up the dashboard
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
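Note that the dashboard is not exposed outside the cluster by default. One simple way to reach it is kubectl proxy on the master:
$ kubectl proxy
The dashboard is then available (on the master itself) at the standard proxy path for the kubernetes-dashboard service, http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/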
11. Check status
pi@rbpi4-n0:~/system_setup $ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5644d7b6d9-rbdqr 1/1 Running 0 22m 10.244.0.2 rbpi4-n0 <none> <none>
kube-system coredns-5644d7b6d9-xpftj 1/1 Running 0 22m 10.244.0.3 rbpi4-n0 <none> <none>
kube-system etcd-rbpi4-n0 1/1 Running 1 21m 192.168.1.10 rbpi4-n0 <none> <none>
kube-system kube-apiserver-rbpi4-n0 1/1 Running 1 21m 192.168.1.10 rbpi4-n0 <none> <none>
kube-system kube-controller-manager-rbpi4-n0 1/1 Running 1 21m 192.168.1.10 rbpi4-n0 <none> <none>
kube-system kube-flannel-ds-arm-4cmf2 1/1 Running 0 14m 192.168.1.10 rbpi4-n0 <none> <none>
kube-system kube-flannel-ds-arm-9vgkb 1/1 Running 0 6m20s 192.168.1.11 rbpi4-n1 <none> <none>
kube-system kube-flannel-ds-arm-znm4h 1/1 Running 0 2m27s 192.168.1.12 rbpi4-n2 <none> <none>
kube-system kube-proxy-kffjb 1/1 Running 0 2m27s 192.168.1.12 rbpi4-n2 <none> <none>
kube-system kube-proxy-qxt92 1/1 Running 1 22m 192.168.1.10 rbpi4-n0 <none> <none>
kube-system kube-proxy-rm97m 1/1 Running 0 6m20s 192.168.1.11 rbpi4-n1 <none> <none>
kube-system kube-scheduler-rbpi4-n0 1/1 Running 1 21m 192.168.1.10 rbpi4-n0 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-566cddb686-x5k9c 0/1 ContainerCreating 0 94s <none> rbpi4-n1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7b5bf5d559-vkknw 0/1 ContainerCreating 0 95s <none> rbpi4-n1 <none> <none>
pi@rbpi4-n0:~/system_setup $
B) On Worker Node
1) Reset and start again
pi@rbpi4-n1:~ $ sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1028 18:51:58.108270 24830 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
To avoid complications, I advise rebooting the node and starting again. Clean the files under /etc/kubernetes except the manifests directory.
2) Join the cluster as worker node (not control plane node)
$ sudo -i
root@rbpi4-n1:~# kubeadm join 10.10.10.10:6443 --token gavux7.hgj1dk2xpfuulggg \
> --discovery-token-ca-cert-hash sha256:248dea9ecc6691b8d2738fd3571a560e468caadb3ab19a56b494652912373b9e
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to
apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
(Repeat the same for all worker nodes.)