3. Configure Cluster
3.1 Disable swap (this step needs to be done on all your nodes)
$ sudo swapoff -a

Note: If there is not enough space on the system, try disabling swap and removing the swap file.

This command only disables swap temporarily; run it again each time the machine is rebooted. A sketch for making the change persistent is shown below.
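If you want swap to stay off across reboots, one common approach (a sketch; back up /etc/fstab before editing it) is to comment out the swap entry in /etc/fstab:

$ sudo sed -i '/ swap / s/^/#/' /etc/fstab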

3.2 Build Master Node

Step 1: Initialize the master node

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: If you see an error like “The connection to the server localhost:8080 was refused - did you specify the right host or port?”, try the following.

Check port status:

$ netstat -nltp | grep apiserver

Add the following environment variable to ~/.bash_profile, then reload it:

export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bash_profile
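Once KUBECONFIG is set, a quick way to confirm that kubectl can reach the API server is:

$ kubectl cluster-info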

Step 2: Configure flannel

Install flannel (for Kubernetes version 1.7+):

$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
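The sysctl setting above does not survive a reboot. One way to make it persistent (a sketch, assuming a distribution that reads /etc/sysctl.d; the file name k8s.conf is arbitrary) is:

$ echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system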

For other versions, refer to https://github.com/coreos/flannel for the configuration.

Step 3: Check the pods

$ kubectl get pod -n kube-system -o wide
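If you prefer a command that blocks until the control-plane pods are up, one option (the 300s timeout is an arbitrary choice) is:

$ kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=300s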

3.3 Adding worker node

If there are multiple server machines or AWS instances that you want to add to the cluster as worker nodes, follow the previous steps to install matching versions of kubectl, kubeadm, and kubelet on each of them.

Log in to your master node and use the following command to get the join command for the cluster:

$ kubeadm token create --print-join-command

You will get an output command like:

$ kubeadm join 192.168.54.128:6443 --token [token generated by Kubernetes]  --discovery-token-ca-cert-hash [sha256 token generated by Kubernetes]

Run the output command (as root or with sudo) on the worker nodes you want to add to the cluster. Then check the result on the master node with:

$ kubectl get node
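Newly joined workers may show NotReady for a short while until flannel starts on them; one way to watch the nodes until they settle is:

$ kubectl get node -o wide -w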

3.4 Control plane node isolation (optional)

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to schedule Pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

For Kubernetes version 1.25+, the taint name has changed; use the following command instead to allow scheduling on the control-plane node:

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
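If you later want to restore the default isolation, the taint can be added back; a sketch for Kubernetes 1.25+ (replace <your-node-name> with your control-plane node's name) is:

$ kubectl taint nodes <your-node-name> node-role.kubernetes.io/control-plane=:NoSchedule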

3.5 Labeling your node (Optional)

If you want a pod to be scheduled onto a node of your choosing, you need to label that node first and set nodeSelector in your pod YAML file.

Choose one of your nodes and add a label to it:

$ kubectl label nodes <your-node-name> disktype=ssd

Verify that your chosen node has the disktype=ssd label:

$ kubectl get nodes --show-labels

The output is similar to this:

NAME      STATUS    ROLES    AGE     VERSION        LABELS
Master    Ready     master   1d      v1.22.2        ...,kubernetes.io/hostname=master
worker0   Ready     <none>   1d      v1.22.2        ...,disktype=ssd,kubernetes.io/hostname=worker0
worker1   Ready     <none>   1d      v1.22.2        ...,kubernetes.io/hostname=worker1
worker2   Ready     <none>   1d      v1.22.2        ...,kubernetes.io/hostname=worker2
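You can also filter the node list by label to confirm that only the intended node matches:

$ kubectl get nodes -l disktype=ssd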

This example pod configuration YAML file describes a pod that has a node selector, disktype: ssd. This means that the pod will be scheduled onto a node that has the disktype=ssd label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
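A minimal usage sketch, assuming the manifest above is saved as nginx-pod.yaml (a hypothetical file name), is to apply it and then check which node the pod landed on:

$ kubectl apply -f nginx-pod.yaml
$ kubectl get pod nginx -o wide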