K8s Installation
Revision as of 09:47, 26 March 2024 (Tue)
Environment preparation
- Disable SELinux and firewalld
- Disable swap
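The two preparation steps above can be sketched as shell commands (a sketch, assuming a CentOS/RHEL 7 host to match the el7 yum repo used later):

```shell
# Disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Stop and disable firewalld
systemctl disable --now firewalld

# Disable swap now and across reboots (kubelet refuses to run with swap on)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
```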
hosts
192.168.0.158 np0
192.168.0.229 np1
192.168.0.249 np2
192.168.0.148 np3
Configure bridge parameters
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
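Note that the net.bridge.* sysctls only exist once the br_netfilter module is loaded (done in a later step). After that, the values can be verified with a quick check (a sketch):

```shell
# Both keys should report a value of 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```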
Configure IPVS support
Load the ip_vs kernel modules. kube-proxy uses iptables + ipset + ipvs to provide load balancing for eligible Pods; if these modules are missing, kube-proxy falls back to pure iptables mode.
cat > /etc/modules-load.d/ip_vs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
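Once the cluster is up, it is worth confirming kube-proxy actually chose IPVS rather than silently falling back to iptables. A sketch (ipvsadm is not installed by the steps above; it would come from something like `yum install ipvsadm`):

```shell
# Non-empty virtual-server output means IPVS rules are in place
ipvsadm -Ln

# Or ask kube-proxy directly which mode it is running in
curl -s localhost:10249/proxyMode
```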
Load modules
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
lsmod | grep overlay
lsmod | grep br_netfilter
Deploy Containerd
runc (container creation tool)
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
CNI plugins (inter-container networking)
wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
Containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.14-linux-amd64.tar.gz
# wget uses -O (capital) to write the download to a file; lowercase -o is the log option
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /usr/lib/systemd/system/containerd.service
systemctl daemon-reload && systemctl enable containerd
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
cd /etc/containerd/
cp config.toml config.toml.orig
vi config.toml

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true   # changed from false to true

[plugins."io.containerd.grpc.v1.cri"]
  # sandbox_image = "registry.k8s.io/pause:3.8"
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["http://mirrors.ustc.edu.cn"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
    endpoint = ["http://hub-mirror.c.163.com"]
systemctl restart containerd
# containerd listens on a unix socket rather than a TCP port, so netstat will not show it;
# check the socket and the service status instead:
ls -l /run/containerd/containerd.sock
systemctl status containerd
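Once containerd is restarted, the effective (merged) configuration can be dumped to confirm the edits above took hold (a sketch):

```shell
# Should show SystemdCgroup = true and the aliyuncs pause image
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
```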
kubernetes
repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
kubelet kubeadm kubectl
# yum list kubelet --showduplicates
yum install kubelet kubeadm kubectl
systemctl enable kubelet
systemctl status kubelet
kubelet is not healthy at this point; it becomes healthy after kubeadm init or join.
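Since the init step below pins --kubernetes-version=1.28.2, it can help to pin the package versions to match instead of taking whatever the mirror serves as latest (a sketch; exact release suffixes vary by mirror and can be found with the yum list command above):

```shell
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
```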
master
INIT
Installing the calico network plugin requires setting pod-network-cidr.
kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=1.28.2 \
  --apiserver-advertise-address=192.168.0.249 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.2.0.0/16
- apiserver-advertise-address: IP address of the master host
- service-cidr: IP range for internal Services; must not overlap with the pod or master ranges
- pod-network-cidr: IP range used for pod-to-pod communication between k8s nodes
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.249:6443 --token z1q425.gy4kgpp491c8nkq2 \
  --discovery-token-ca-cert-hash sha256:0b02fa4069856afb9d17dba76527b7e7c630d799cc3c00c3cc36c8beaec0128c
calico
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
kubectl apply -f calico.yaml
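After applying the CNI manifest, the master should move from NotReady to Ready once the network pods come up; a quick check (a sketch):

```shell
# calico-node / calico-kube-controllers should reach Running
kubectl get pods -n kube-system -o wide

# Node STATUS should become Ready
kubectl get nodes
```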
kube-flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
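The join token printed by kubeadm init expires after 24 hours by default. If a node is added later, a fresh join command can be generated on the master at any time:

```shell
kubeadm token create --print-join-command
```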