
Deploying Kubernetes 1.31 (containerd)




1. Preparing the Base Environment

Docker Hub is currently unreachable from mainland China, so prepare an accessible image registry (or mirror) in advance.

1. Host IP address plan

Name      IP Address         OS
Master    192.168.110.133    CentOS Stream 8
Slave01   192.168.110.134    CentOS Stream 8
Slave02   192.168.110.135    CentOS Stream 8

2. Operating system requirements

# 1. Disable the firewall
ufw status
ufw disable

# 2. Disable SELinux
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# 3. Disable swap. Kubernetes accounts for physical CPU and memory only,
#    and the cgroup drivers cannot manage swap effectively.
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a          # then confirm swap is off

# 4. Set the host names
vim /etc/hosts
192.168.110.133 Master
192.168.110.134 Slave01
192.168.110.135 Slave02

# 5. Synchronize time
date                                      # check the current time zone and time
timedatectl set-timezone Asia/Shanghai    # set the Shanghai time zone if it is wrong
apt install chrony -y && systemctl enable --now chronyd   # install chrony to sync online

# 6. Pass bridged IPv4 traffic to the iptables chains
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system

# 7. Set up passwordless SSH between the servers
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.110.134
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.110.135
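
A quick way to confirm the steps above took effect; this is a minimal, read-only check sketch, safe to run on every host:

swapon --show                    # no output means swap is off
getenforce                       # Permissive now, Disabled after a reboot
timedatectl | grep "Time zone"   # should show Asia/Shanghai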

2. Installing Kubernetes with kubeadm (all hosts)

1. Configure kernel forwarding and bridge filtering

# Declare the kernel modules to load (loaded automatically at boot)
cat << EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules by hand for the current boot
modprobe overlay
modprobe br_netfilter

# Confirm the modules are loaded
lsmod | egrep "overlay"
lsmod | egrep "br_netfilter"

# Add the bridge-filtering and kernel-forwarding configuration
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

# Apply the kernel parameters
sysctl --system
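
To confirm the parameters are live, query them back; the three net.* values should be 1 and vm.swappiness 0:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward vm.swappiness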

2. Install ipset and ipvsadm

apt install ipset ipvsadm -y

# Declare the IPVS modules to load at boot
cat << EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Create a module-loading script
cat << EOF | tee ipvs.sh
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# Run the script to load the modules now
sh ipvs.sh
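
A short verification that the modules loaded and IPVS is usable:

lsmod | grep -e ip_vs -e nf_conntrack   # the modules declared above should appear
ipvsadm -Ln                             # an empty virtual-server table means IPVS works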

3. Container Runtime: containerd (all hosts)

1. Install containerd (binary install)

# 1. Install containerd from the release tarball
wget https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz
tar xvf containerd-1.7.14-linux-amd64.tar.gz
# The archive unpacks into a bin directory holding the containerd binaries
mv bin/* /usr/local/bin/

# Alternatively, install the cri-containerd bundle, which also ships runc and the systemd unit
wget https://github.com/containerd/containerd/releases/download/v1.7.22/cri-containerd-1.7.22-linux-amd64.tar.gz
tar xf cri-containerd-1.7.22-linux-amd64.tar.gz -C /
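
Note that the plain containerd tarball ships neither runc nor a systemd unit, while the cri-containerd bundle includes both. If you take the binary-only route, a sketch like the following fills the gap (the runc version here is an assumption; pick whatever release is current):

# runc is required by containerd but not included in its tarball
wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
# fetch the stock systemd unit from the containerd repository
wget -O /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl daemon-reload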

2. Modify the containerd configuration and start containerd

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

# Edit the config file: set the pause image version (3.9 for Kubernetes 1.29 and later)
vim /etc/containerd/config.toml
sandbox_image = "registry.k8s.io/pause:3.9"
# Or point the sandbox image at a reachable mirror
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

# Set the cgroup driver to systemd in the runc options (around line 139)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# Start containerd
systemctl enable --now containerd
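
If you prefer a non-interactive edit, the same two changes can be made with sed; this sketch assumes the generated template contains the default sandbox_image (pause:3.8 in containerd 1.7) and SystemdCgroup = false:

sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -e sandbox_image -e SystemdCgroup /etc/containerd/config.toml   # confirm both edits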

4. Deploying the Kubernetes Cluster (all hosts)

1. Download and install the packages

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip installing it
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Or use the Aliyun mirror of the repository
apt-get update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -y kubelet kubeadm kubectl

# List the available versions
apt-cache madison kubeadm
# Install a specific version
apt install -y kubelet=1.31.0-1.1 kubeadm=1.31.0-1.1 kubectl=1.31.0-1.1

# Or install the latest version and pin it
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Release the version pin
sudo apt-mark unhold kubelet kubeadm kubectl
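
A quick check that the three tools landed and match the intended release:

kubeadm version -o short   # expect v1.31.x
kubelet --version
kubectl version --client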

2. Configure kubelet

# Keep the kubelet cgroup driver consistent with the container runtime's
vim /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

# Enable kubelet at boot; no config file exists yet, so it only starts after cluster initialization
systemctl enable kubelet

3. Prepare the initialization config on the Master

1. Online initialization with a kubeadm-config file

# Generate a configuration template
kubeadm config print init-defaults > kubeadm-config.yaml

# Edit the YAML file
vim kubeadm-config.yaml
advertiseAddress: 192.168.110.133
name: Master
serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16
# Use the Aliyun registry if the default images cannot be pulled directly
# imageRepository: registry.aliyuncs.com/google_containers

# List the images for a given version
kubeadm config images list --kubernetes-version=v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.15-0

# The same list against the Aliyun registry
kubeadm config images list --kubernetes-version=v1.31.1 --image-repository=registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.1
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.1
registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.1
registry.aliyuncs.com/google_containers/kube-proxy:v1.31.1
registry.aliyuncs.com/google_containers/coredns:v1.11.3
registry.aliyuncs.com/google_containers/pause:3.10
registry.aliyuncs.com/google_containers/etcd:3.5.15-0

# Pull the images
kubeadm config images pull --kubernetes-version=v1.31.1
# Pull the images from the Aliyun registry
kubeadm config images pull --kubernetes-version=v1.31.1 --image-repository=registry.aliyuncs.com/google_containers

# View the downloaded images with ctr (or crictl images)
ctr -n=k8s.io images list

# Legacy alternative (v1.23.8 era): pull from a domestic mirror with Docker
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull coredns/coredns:1.8.6

# Retag the pulled images to the names kubeadm config expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 k8s.gcr.io/kube-apiserver:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8 k8s.gcr.io/kube-controller-manager:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8 k8s.gcr.io/kube-scheduler:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8 k8s.gcr.io/kube-proxy:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
docker tag coredns/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6

# Initialize the cluster from the config file
kubeadm init --config kubeadm-config.yaml --upload-certs --v=9

# Or initialize from the command line (simplified)
kubeadm init --apiserver-advertise-address=192.168.110.133 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5

# Complete variant
kubeadm init --apiserver-advertise-address=192.168.110.133 --control-plane-endpoint=control-plane-endpoint.k8s.local --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --service-dns-domain=k8s.local --upload-certs --v=5

# Then follow the printed instructions to finish deploying the cluster
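
After a successful init, kubeadm prints the follow-up steps; they amount to the commands below, plus a join command you can regenerate on the master at any time:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# reprint the worker join command if the init output was lost
kubeadm token create --print-join-command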

2. Offline initialization with kubeadm init

# Use this method when the local hosts cannot reach the internet.

# 1. On a connected remote host, list the components for the target version
kubeadm config images list --kubernetes-version=v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.15-0

# 2. Pull the images (legacy v1.23.8 example)
docker pull kube-apiserver:v1.23.8
docker pull kube-controller-manager:v1.23.8
docker pull kube-scheduler:v1.23.8
docker pull kube-proxy:v1.23.8
docker pull pause:3.6
docker pull etcd:3.5.1-0
docker pull coredns/coredns:1.8.6
# Or pull from a domestic registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.6

# 3. Save the images on the remote host
docker save -o kube-apiserver-v1.31.1.tar k8s.gcr.io/kube-apiserver:v1.31.1
docker save -o kube-controller-manager-v1.31.1.tar k8s.gcr.io/kube-controller-manager:v1.31.1
docker save -o kube-scheduler-v1.31.1.tar k8s.gcr.io/kube-scheduler:v1.31.1
docker save -o kube-proxy-v1.31.1.tar k8s.gcr.io/kube-proxy:v1.31.1
docker save -o pause-3.6.tar k8s.gcr.io/pause:3.6
docker save -o etcd-3.5.4-0.tar k8s.gcr.io/etcd:3.5.4-0
docker save -o coredns-v1.9.3.tar k8s.gcr.io/coredns/coredns:v1.9.3

# 4. Copy the tar files to the offline host with scp or any other means
scp kube-apiserver-v1.31.1.tar            root@192.168.110.138:/root/
scp kube-controller-manager-v1.31.1.tar   root@192.168.110.138:/root/
scp kube-scheduler-v1.31.1.tar            root@192.168.110.138:/root/
scp kube-proxy-v1.31.1.tar                root@192.168.110.138:/root/
scp pause-3.6.tar                         root@192.168.110.138:/root/
scp etcd-3.5.4-0.tar                      root@192.168.110.138:/root/
scp coredns-v1.9.3.tar                    root@192.168.110.138:/root/

# 5. Import the images on the offline host
ctr -n=k8s.io images import /path/to/save/kube-apiserver-v1.31.1.tar
ctr -n=k8s.io images import /path/to/save/kube-controller-manager-v1.31.1.tar
ctr -n=k8s.io images import /path/to/save/kube-scheduler-v1.31.1.tar
ctr -n=k8s.io images import /path/to/save/kube-proxy-v1.31.1.tar
ctr -n=k8s.io images import /path/to/save/pause-3.6.tar
ctr -n=k8s.io images import /path/to/save/etcd-3.5.4-0.tar
ctr -n=k8s.io images import /path/to/save/coredns-v1.9.3.tar

# 6. Confirm the images were imported
ctr -n=k8s.io images list

# 7. Initialize with a config file
kubeadm config print init-defaults > kubeadm-config.yaml
# Edit the YAML file
vim kubeadm-config.yaml
advertiseAddress: 192.168.110.133
name: Master
serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"  # optional: replace with the load balancer's DNS name or IP, and LOAD_BALANCER_PORT with its listening port (usually 6443)
imageRepository: localhost:5000   # point at the local registry

kubeadm init --config kubeadm-config.yaml --upload-certs --v=9

# If pulling directly from registry.k8s.io, initialize with:
kubeadm init --apiserver-advertise-address=192.168.110.133 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5
# If using the Aliyun registry registry.aliyuncs.com/google_containers:
kubeadm init --apiserver-advertise-address=192.168.110.133 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5
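
Step 5 can be collapsed into a loop once all the tarballs sit in one directory; a small sketch, assuming they were copied to /root:

for t in /root/*.tar; do
  ctr -n=k8s.io images import "$t"
done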

5. Deploying the Network Plugin

1. Online Calico deployment

# 1. Deploy from the YAML manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f /root/calico.yaml

# 2. Check the status
[root@Master01-Centos8 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-b8d8894fb-nmznx    1/1     Running   0          106m
calico-node-fn5q8                          1/1     Running   0          106m
calico-node-fn7rl                          1/1     Running   0          106m
calico-node-tkngk                          1/1     Running   0          106m
coredns-855c4dd65d-66vmm                   1/1     Running   0          42h
coredns-855c4dd65d-9779h                   1/1     Running   0          42h
etcd-master01-centos8                      1/1     Running   0          42h
kube-apiserver-master01-centos8            1/1     Running   0          42h
kube-controller-manager-master01-centos8   1/1     Running   0          42h
kube-proxy-5bprr                           1/1     Running   0          42h
kube-proxy-6dnm2                           1/1     Running   0          42h
kube-proxy-9d8gc                           1/1     Running   0          42h
kube-scheduler-master01-centos8            1/1     Running   0          42h
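
Rather than polling by hand, you can block until the Calico pods report Ready; the k8s-app=calico-node label below is the one used by the upstream manifest:

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s
kubectl get nodes   # all nodes should now be Ready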

2. Offline Calico deployment

# Use this method when the local hosts cannot reach the internet.

# 1. Download the Calico images on a remote host (assuming it has Docker)
docker pull calico/cni:v3.28.2
docker pull calico/pod2daemon-flexvol:v3.28.2
docker pull calico/node:v3.28.2
docker pull calico/kube-controllers:v3.28.2
docker pull calico/typha:v3.28.2

# 2. Save the images to tar files
docker save -o calico-cni-v3.28.2.tar calico/cni:v3.28.2
docker save -o calico-pod2daemon-flexvol-v3.28.2.tar calico/pod2daemon-flexvol:v3.28.2
docker save -o calico-node-v3.28.2.tar calico/node:v3.28.2
docker save -o calico-kube-controllers-v3.28.2.tar calico/kube-controllers:v3.28.2
docker save -o calico-typha-v3.28.2.tar calico/typha:v3.28.2

# 3. Copy the tar files over (repeat for every host in the cluster)
scp calico-cni-v3.28.2.tar                   root@192.168.110.138:/root/
scp calico-pod2daemon-flexvol-v3.28.2.tar    root@192.168.110.138:/root/
scp calico-node-v3.28.2.tar                  root@192.168.110.138:/root/
scp calico-kube-controllers-v3.28.2.tar      root@192.168.110.138:/root/
scp calico-typha-v3.28.2.tar                 root@192.168.110.138:/root/

# 4. Import the images (repeat on every host in the cluster; a loop version follows below)
ctr -n=k8s.io image import /root/calico-cni-v3.28.2.tar
ctr -n=k8s.io image import /root/calico-pod2daemon-flexvol-v3.28.2.tar
ctr -n=k8s.io image import /root/calico-node-v3.28.2.tar
ctr -n=k8s.io image import /root/calico-kube-controllers-v3.28.2.tar
ctr -n=k8s.io image import /root/calico-typha-v3.28.2.tar

# Confirm the import (run on all hosts)
ctr -n=k8s.io images list | grep calico

# 5. Install Calico (run on the master)
kubectl apply -f /root/calico.yaml

# 6. Check the status
[root@Master01-Centos8 ~]# kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
master01-centos8   Ready    control-plane   42h   v1.31.1
slave01-centos8    Ready    <none>          42h   v1.31.1
slave02-centos8    Ready    <none>          42h   v1.31.1
[root@Master01-Centos8 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-b8d8894fb-nmznx    1/1     Running   0          114m
calico-node-fn5q8                          1/1     Running   0          114m
calico-node-fn7rl                          1/1     Running   0          114m
calico-node-tkngk                          1/1     Running   0          114m
coredns-855c4dd65d-66vmm                   1/1     Running   0          42h
coredns-855c4dd65d-9779h                   1/1     Running   0          42h
etcd-master01-centos8                      1/1     Running   0          42h
kube-apiserver-master01-centos8            1/1     Running   0          42h
kube-controller-manager-master01-centos8   1/1     Running   0          42h
kube-proxy-5bprr                           1/1     Running   0          42h
kube-proxy-6dnm2                           1/1     Running   0          42h
kube-proxy-9d8gc                           1/1     Running   0          42h
kube-scheduler-master01-centos8            1/1     Running   0          42h
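
The copy-and-import loop referenced in step 4 above; a sketch assuming the passwordless SSH from section 1 and the worker IPs from the address plan:

for h in 192.168.110.134 192.168.110.135; do
  for t in calico-*.tar; do
    scp "$t" root@$h:/root/                             # copy the tarball to the worker
    ssh root@$h "ctr -n=k8s.io images import /root/$t"  # import it remotely
  done
done
for t in calico-*.tar; do ctr -n=k8s.io images import "$t"; done   # import locally on the master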

Running kubectl from any node

1. Copy /etc/kubernetes/admin.conf from the master into /etc/kubernetes/ on each host that should run kubectl.
2. Configure the environment variable on those hosts, as sketched below:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
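
Put together, enabling kubectl on a worker looks roughly like this (run on the worker, assuming root SSH access to the master at 192.168.110.133):

scp root@192.168.110.133:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
kubectl get nodes   # should list the whole cluster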

6. Installing the Dashboard

1. Install Helm

1. Binary install

wget https://get.helm.sh/helm-v3.16.0-linux-amd64.tar.gz
tar -zxvf helm-v3.16.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm repo add bitnami https://charts.bitnami.com/bitnami

2. Install from the deb repository

# Make sure the keyring directory exists
sudo mkdir -p /usr/share/keyrings
# Download, convert, and store the GPG signing key
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
# Verify the GPG key
gpg --list-keys --keyring /usr/share/keyrings/helm.gpg
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

3. Install via script

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
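
Whichever of the three routes you take, a quick check confirms Helm is on the PATH and can see its repositories:

helm version    # prints the client version, e.g. v3.16.x
helm repo list  # lists the repositories added above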

2. Install the dashboard

# Add the kubernetes-dashboard repository (required by the install command below)
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# Deploy a Helm release named "kubernetes-dashboard" from the kubernetes-dashboard chart
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

# Uninstall
helm delete kubernetes-dashboard --namespace kubernetes-dashboard
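
The chart does not expose the UI outside the cluster by default. One way in is a port-forward to the Kong proxy service the release creates; the service name below follows the chart's default naming, so adjust it if your release differs:

kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# then open https://localhost:8443 and log in with the token from the next section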

3. Use the dashboard

# Make sure the kubernetes-dashboard namespace and the cluster-admin ClusterRole exist beforehand
# kubectl create namespace kubernetes-dashboard

# 1. Create a ServiceAccount and a ClusterRoleBinding
vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# Apply it
kubectl apply -f dashboard-adminuser.yaml
# Verify
kubectl get serviceaccount admin-user -n kubernetes-dashboard
kubectl get clusterrolebinding admin-user

# 2. Get a token for the ServiceAccount
# Short-lived token:
kubectl -n kubernetes-dashboard create token admin-user

# Long-lived token: append this Secret to dashboard-adminuser.yaml and re-apply
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
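
Once the Secret exists, the long-lived token can be read back and decoded (per the upstream dashboard docs):

kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 -d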

7. Installing KubeSphere

helm upgrade --install -n kubesphere-system --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.1.tgz --debug --wait

# If the default images can be pulled directly:
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.1.tgz --debug --wait

# After the install completes, open the console at:
http://192.168.110.133:30880
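
A sanity check while the chart settles; the default console credentials below come from the KubeSphere documentation (you are prompted to change the password on first login):

kubectl get pods -n kubesphere-system   # wait until all pods are Running
# default console login: admin / P@88w0rd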


