
Binary Installation of Kubernetes (k8s) v1.25.0 with IPv4/IPv6 Dual Stack

Keeping this Kubernetes guide open source takes real effort; please give it a star on GitHub, thank you.

Introduction

A highly available binary installation and deployment of Kubernetes (k8s), with IPv4+IPv6 dual-stack support.

My goal in using IPv6 is to access the cluster over the public Internet, so I configured static IPv6 addresses.

If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.

Not configuring IPv6 does not affect the later steps; the cluster will still support IPv6, which leaves room for future expansion.

If you do not want IPv6, just leave IPv6 unconfigured on the NIC; do not delete or otherwise touch the IPv6-related configuration below, or problems will occur.
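
To decide quickly whether a host already has a usable public IPv6 address, here is a small check of my own (not part of the original steps):

# Lists global-scope IPv6 addresses; empty output means the host has no public IPv6
ip -6 addr show scope global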

Release downloads: https://github.com/cby-chen/Kubernetes/releases

Manual project repository: https://github.com/cby-chen/Kubernetes

Script project repository: https://github.com/cby-chen/Binary_installation_of_Kubernetes

I strongly recommend reading the documentation on GitHub. If problems are found, the GitHub docs will be corrected, and docs for new versions will be published there as soon as possible.

Documentation and installation packages have been generated for 1.21.13, 1.22.10, 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0, 1.24.1, 1.24.2, 1.24.3, and 1.25.0.

1. Environment

Hostname    IP address      Role           Software
Master01    192.168.1.61    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02    192.168.1.62    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03    192.168.1.63    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01      192.168.1.64    node           kubelet, kube-proxy, nfs-client
Node02      192.168.1.65    node           kubelet, kube-proxy, nfs-client
Node03      192.168.1.66    node           kubelet, kube-proxy, nfs-client
Node04      192.168.1.67    node           kubelet, kube-proxy, nfs-client
Node05      192.168.1.68    node           kubelet, kube-proxy, nfs-client
Lb01        192.168.1.70    Lb01 node      haproxy, keepalived
Lb02        192.168.1.75    Lb02 node      haproxy, keepalived
VIP         192.168.1.69

Software                                                                       Version
kernel                                                                         5.18.0-1.el8
CentOS 8                                                                       v8 or v7
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy   v1.25.0
etcd                                                                           v3.5.4
containerd                                                                     v1.6.8
cfssl                                                                          v1.6.1
cni                                                                            v1.1.1
crictl                                                                         v1.24.2
haproxy                                                                        v1.8.27
keepalived                                                                     v2.1.5
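
Before starting, it is worth confirming that every host in the table is reachable. A minimal sketch of my own (not one of the original steps; adjust the address list to your inventory):

# Ping each master, node, and lb host once and report which are down
for ip in 192.168.1.6{1..8} 192.168.1.70 192.168.1.75; do
  ping -c 1 -W 1 "$ip" >/dev/null 2>&1 && echo "$ip up" || echo "$ip DOWN"
done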

Network segments

Physical hosts: 192.168.1.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12
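
The first usable address of the service segment (10.96.0.1) is needed later as a SAN in the apiserver certificate, and the CoreDNS clusterIP (10.96.0.10) must also fall inside this segment. If you change the segment, you can recompute the first address, for example with this helper of my own (assuming python3 is available):

# Prints 10.96.0.1 for the segment above
python3 -c "import ipaddress; print(next(ipaddress.ip_network('10.96.0.0/12').hosts()))"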

The installation packages have already been bundled: https://github.com/cby-chen/Kubernetes/releases/download/v1.25.0/kubernetes-v1.25.0.tar
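
After downloading, you can list the archive contents before unpacking (a routine sanity check of my own, not one of the original steps):

tar -tf kubernetes-v1.25.0.tar | head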

1.1. Basic k8s system environment configuration

1.2. Configure IP addresses

ssh root@192.168.1.100 "nmcli con mod ens18 ipv4.addresses 192.168.1.61/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.106 "nmcli con mod ens18 ipv4.addresses 192.168.1.62/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.110 "nmcli con mod ens18 ipv4.addresses 192.168.1.63/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.114 "nmcli con mod ens18 ipv4.addresses 192.168.1.64/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.115 "nmcli con mod ens18 ipv4.addresses 192.168.1.65/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.116 "nmcli con mod ens18 ipv4.addresses 192.168.1.66/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.117 "nmcli con mod ens18 ipv4.addresses 192.168.1.67/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.118 "nmcli con mod ens18 ipv4.addresses 192.168.1.68/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.156 "nmcli con mod ens18 ipv4.addresses 192.168.1.70/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.160 "nmcli con mod ens18 ipv4.addresses 192.168.1.75/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"

# If you have no IPv6, simply skip this block
ssh root@192.168.1.61 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::10; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.62 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::20; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.63 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::30; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.64 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::40; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.65 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::50; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.66 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::60; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.67 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::70; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.68 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::80; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.70 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::90; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"
ssh root@192.168.1.75 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::100; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"

1.3. Set the hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02

1.4. Configure the yum repositories

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For a private repository
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo

1.5. Install some essential tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Optionally download the required tools

1. Download the kubernetes 1.25.+ binary package
GitHub release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md
wget https://dl.k8s.io/v1.25.0/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
GitHub releases: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

3. Download the containerd binary
GitHub releases: https://github.com/containerd/containerd/releases

4. When downloading containerd, pick the package that bundles the cni plugins.
wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz

5. Download the cfssl binaries
GitHub releases: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. Download the cni plugins
GitHub releases: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

7. Download the crictl client binary
GitHub releases: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

1.10. Network configuration (choose one of the two approaches)

# Approach 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Approach 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

1.11. Configure time synchronization (except lb nodes)

# Server side
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Client side
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.61 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH login

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65 192.168.1.66 192.168.1.67 192.168.1.68 192.168.1.70 192.168.1.75"
export SSHPASS=123123
for HOST in $IP; do
  sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

1.14. Add the ELRepo source (except lb nodes)

# Configure the source for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# Install ELRepo for RHEL-7, SL-7, or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer (except lb nodes)

# Install the latest kernel
# I chose the stable kernel-ml here; install kernel-lt if you want the long-term maintenance branch
yum --enablerepo=elrepo-kernel install kernel-ml

# Check which kernels are installed
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the newest one, set it with:
grubby --set-default /boot/vmlinuz-<your kernel version>.x86_64

# Reboot to take effect
reboot

# Combined command for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot

# Combined command for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot

1.16. Install ipvsadm (except lb nodes)

yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters (except lb nodes)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system

1.18. Configure local hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# If you have no IPv6, simply omit these entries
2408:8207:78cc:5cc1:181c::10 k8s-master01
2408:8207:78cc:5cc1:181c::20 k8s-master02
2408:8207:78cc:5cc1:181c::30 k8s-master03
2408:8207:78cc:5cc1:181c::40 k8s-node01
2408:8207:78cc:5cc1:181c::50 k8s-node02
2408:8207:78cc:5cc1:181c::60 k8s-node03
2408:8207:78cc:5cc1:181c::70 k8s-node04
2408:8207:78cc:5cc1:181c::80 k8s-node05
2408:8207:78cc:5cc1:181c::90 lb01
2408:8207:78cc:5cc1:181c::100 lb02

192.168.1.61 k8s-master01
192.168.1.62 k8s-master02
192.168.1.63 k8s-master03
192.168.1.64 k8s-node01
192.168.1.65 k8s-node02
192.168.1.66 k8s-node03
192.168.1.67 k8s-node04
192.168.1.68 k8s-node05
192.168.1.70 lb01
192.168.1.75 lb02
192.168.1.69 lb-vip
EOF

2. Basic k8s component installation

2.1. Install Containerd as the runtime on all k8s nodes

# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
# Create the directories needed by the cni plugins
mkdir -p /etc/cni/net.d /opt/cni/bin
# Unpack the cni binary package
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/

# wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
# Unpack
tar -xzf cri-containerd-cni-1.6.8-linux-amd64.tar.gz -C /

# Create the service unit
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1. Configure the kernel modules required by Containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2. Load the modules

systemctl restart systemd-modules-load.service

2.1.3. Configure the kernel parameters required by Containerd

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Load the parameters
sysctl --system

2.1.4. Create the Containerd configuration file

# Generate the default configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the Containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

2.1.5. Start it and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6. Configure the runtime endpoint for the crictl client

# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
# Unpack
tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/
# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Test
systemctl restart containerd
crictl info

2.2. Download and install k8s and etcd (master01 only)

2.2.1. Unpack the k8s package

# Download the packages
# wget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

# Unpack the k8s installation files
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd installation files
tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# Inspect the contents of /usr/local/bin
ls /usr/local/bin/
containerd containerd-shim-runc-v1 containerd-stress critest ctr etcdctl kube-controller-manager kubelet kube-scheduler
containerd-shim containerd-shim-runc-v2 crictl ctd-decoder etcd kube-apiserver kubectl kube-proxy

2.2.2. Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.25.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-master01 ~]#

2.2.3. Push the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done

mkdir -p /opt/cni/bin

2.3. Create the certificate-related files

mkdir pki
cd pki
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" }
  ]
}
EOF
cat > ca-config.json << EOF
{
  "signing": {
    "default": { "expiry": "876000h" },
    "profiles": {
      "kubernetes": {
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ],
  "ca": { "expiry": "876000h" }
}
EOF
cat > front-proxy-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "ca": { "expiry": "876000h" }
}
EOF
cat > kubelet-csr.json << EOF
{
  "CN": "system:node:\$NODE",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" }
  ]
}
EOF
cat > manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" }
  ]
}
EOF
cat > apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" }
  ]
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" }
  ],
  "ca": { "expiry": "876000h" }
}
EOF
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ]
}
EOF
cat > front-proxy-client-csr.json << EOF
{
  "CN": "front-proxy-client",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" }
  ]
}
EOF
cat > scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" }
  ]
}
EOF
cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by kubelet."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF
cd ..
mkdir coredns
cd coredns
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
cd ..
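
The clusterIP of the kube-dns Service above must match the clusterDNS value configured for the kubelet later (10.96.0.10). A quick consistency check of my own, not one of the original steps:

grep -n 'clusterIP: ' coredns/coredns.yaml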
mkdir metrics-server
cd metrics-server
cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

3. Generating the certificates

# Download the certificate generation tools on master01
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
# They are also included in the bundled package
cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless otherwise noted, perform the following operations on all master nodes.

3.1.1. Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2. Generate the etcd certificates on master01

cd pki
# Generate the etcd certificate and its key (if you think you may scale out later, you can list a few extra reserved IPs here)
# If you have no IPv6, the IPv6 addresses may be removed or kept
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.61,192.168.1.62,192.168.1.63,2408:8207:78cc:5cc1:181c::10,2408:8207:78cc:5cc1:181c::20,2408:8207:78cc:5cc1:181c::30 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3. Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

3.2. Generate the k8s certificates

Unless otherwise noted, perform the following operations on all master nodes.

3.2.1. Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2. Generate the k8s certificates on master01

cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# Generate a root certificate; some extra IPs are listed as reserved SANs for adding nodes later
# 10.96.0.1 is the first address of the service segment (it has to be computed); 192.168.1.69 is the high-availability VIP
# If you have no IPv6, the IPv6 addresses may be removed or kept
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,192.168.1.69,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.1.66,192.168.1.67,192.168.1.68,192.168.1.70,192.168.1.75,10.0.0.40,10.0.0.41,2408:8207:78cc:5cc1:181c::10,2408:8207:78cc:5cc1:181c::20,2408:8207:78cc:5cc1:181c::30,2408:8207:78cc:5cc1:181c::40,2408:8207:78cc:5cc1:181c::50,2408:8207:78cc:5cc1:181c::60,2408:8207:78cc:5cc1:181c::70,2408:8207:78cc:5cc1:181c::80,2408:8207:78cc:5cc1:181c::90,2408:8207:78cc:5cc1:181c::100 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3. Generate the apiserver aggregation certificate

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# This prints a warning that can be ignored
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4. Generate the controller-manager certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set an environment entry, i.e. a context
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5. Create the kube-proxy certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.6. Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.7. Send the certificates to the other master nodes

# Create the directory on the other nodes first
# mkdir /etc/kubernetes/pki/ -p
for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done

3.2.8. Check the certificates

ls /etc/kubernetes/pki/
admin.csr          ca.csr                      front-proxy-ca.csr          kube-proxy.csr      scheduler-key.pem
admin-key.pem      ca-key.pem                  front-proxy-ca-key.pem      kube-proxy-key.pem  scheduler.pem
admin.pem          ca.pem                      front-proxy-ca.pem          kube-proxy.pem
apiserver.csr      controller-manager.csr      front-proxy-client.csr      sa.key
apiserver-key.pem  controller-manager-key.pem  front-proxy-client-key.pem  sa.pub
apiserver.pem      controller-manager.pem      front-proxy-client.pem      scheduler.csr

# There should be exactly 26 files
ls /etc/kubernetes/pki/ | wc -l
26

4. Configuring the k8s system components

4.1. etcd configuration

4.1.1. master01 configuration

# To use IPv6, change the IPv4 addresses to the IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: k8s-master01
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.1.61:2380
listen-client-urls: https://192.168.1.61:2379,http://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.1.61:2380
advertise-client-urls: https://192.168.1.61:2379
discovery:
discovery-fallback: proxy
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380
initial-cluster-token: etcd-k8s-cluster
initial-cluster-state: new
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: off
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
peer-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  peer-client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2. master02 configuration

# To use IPv6, change the IPv4 addresses to the IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: k8s-master02
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.1.62:2380
listen-client-urls: https://192.168.1.62:2379,http://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.1.62:2380
advertise-client-urls: https://192.168.1.62:2379
discovery:
discovery-fallback: proxy
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380
initial-cluster-token: etcd-k8s-cluster
initial-cluster-state: new
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: off
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
peer-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  peer-client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3. master03 configuration

# To use IPv6, change the IPv4 addresses to the IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: k8s-master03
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.1.63:2380
listen-client-urls: https://192.168.1.63:2379,http://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.1.63:2380
advertise-client-urls: https://192.168.1.63:2379
discovery:
discovery-fallback: proxy
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380
initial-cluster-token: etcd-k8s-cluster
initial-cluster-state: new
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: off
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
peer-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-key.pem
  peer-client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the services (all master nodes)

4.2.1. Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2. Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3. Check the etcd status

# To use IPv6, change the IPv4 addresses to the IPv6 ones
export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.63:2379,192.168.1.62:2379,192.168.1.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.63:2379 | c0c8142615b9523f |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.62:2379 | de8396604d2c160d |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.61:2379 | 33c9d6df0037ab97 |   3.5.4 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#

5. High availability configuration

5.1. Perform these steps on the lb01 and lb02 servers

5.1.1. Install the keepalived and haproxy services

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
yum -y install keepalived haproxy

5.1.2. Edit the haproxy configuration file (identical on both hosts)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.1.61:6443 check
  server k8s-master02 192.168.1.62:6443 check
  server k8s-master03 192.168.1.63:6443 check
EOF

5.1.3. Configure keepalived on lb01 (MASTER node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    # Mind the NIC name
    interface ens18
    mcast_src_ip 192.168.1.70
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.69
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.1.4. Configure keepalived on lb02 (BACKUP node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    # Mind the NIC name
    interface ens18
    mcast_src_ip 192.168.1.75
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.69
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.1.5. Configure the health check script (both lb hosts)

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.6. Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.1.7. Test the high availability

# The VIP should answer ping
[root@k8s-node02 ~]# ping 192.168.1.69
# telnet should connect
[root@k8s-node02 ~]# telnet 192.168.1.69 8443
# Shut down the active node and check whether the VIP fails over to the backup

6. Configuring the k8s components (distinct from section 4)

Create the following directories on all k8s nodes:

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes 6.1.创建apiserver(所有master节点) 6.1.1master01节点设置装备摆设 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.61 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF 6.1.2master02节点设置装备摆设 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.62 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF 6.1.3master03节点设置装备摆设 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.63 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF 6.1.4启动apiserver(所有master节点) systemctl daemon-reload && systemctl enable --now kube-apiserver # 留意查看形态能否启动一般 # systemctl status kube-apiserver 6.2.设置装备摆设kube-controller-manager service # 所有master节点设置装备摆设,且设置装备摆设不异 # 172.16.0.0/12为pod网段,按需求设置你本身的网段 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ 
--root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --pod-eviction-timeout=2m0s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --cluster-cidr=172.16.0.0/12,fc00::/48 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=64 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # --feature-gates=IPv6DualStack=true Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF 6.2.1启动kube-controller-manager,并查看形态 systemctl daemon-reload systemctl enable --now kube-controller-manager # systemctl status kube-controller-manager 6.3.设置装备摆设kube-scheduler service 6.3.1所有master节点设置装备摆设,且设置装备摆设不异 cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF 6.3.2启动并查看办事形态 systemctl daemon-reload systemctl enable --now kube-scheduler # systemctl status kube-scheduler 7.TLS Bootstrapping设置装备摆设 7.1在master01上设置装备摆设 cd bootstrap kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true --server=https://192.168.1.69:8443 \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-credentials tls-bootstrap-token-user \ --token=c8ad9c.2e4d610cf3e7426e \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-context tls-bootstrap-token-user@kubernetes \ --cluster=kubernetes \ --user=tls-bootstrap-token-user \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config use-context tls-bootstrap-token-user@kubernetes \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig # token的位置在bootstrap.secret.yaml,若是修改的话到那个文件修改 mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config 7.2查看集群形态,没问题的话继续后续操做 kubectl get cs Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true","reason":""} etcd-2 Healthy {"health":"true","reason":""} etcd-1 Healthy {"health":"true","reason":""} # 切记施行,别忘记!!! 
8. Node configuration

8.1 Copy the certificates from master01 to the other nodes

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2 kubelet configuration

8.2.1 Create the required directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--node-labels=node.kubernetes.io/node=
# --feature-gates=IPv6DualStack=true
# --container-runtime=remote
# --runtime-request-timeout=15m
# --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target

EOF

8.2.2 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   18s   v1.24.3
k8s-master02   Ready    <none>   16s   v1.24.3
k8s-master03   Ready    <none>   16s   v1.24.3
k8s-node01     Ready    <none>   14s   v1.24.3
k8s-node02     Ready    <none>   14s   v1.24.3
k8s-node03     Ready    <none>   14s   v1.24.3
k8s-node04     Ready    <none>   14s   v1.24.3
k8s-node05     Ready    <none>   14s   v1.24.3
[root@k8s-master01 ~]#
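If a node fails to appear here or stays NotReady, a useful first check is the certificate signing requests; a quick troubleshooting sketch (the <csr-name> placeholder is whatever kubectl get csr reports as Pending):

# With the bootstrap RBAC in place, kubelet client CSRs should be approved automatically
kubectl get csr
# Manually approve anything stuck in Pending
kubectl certificate approve <csr-name>
# And inspect the kubelet log on the affected node
journalctl -u kubelet -f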
8.3 kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.2 Add the kube-proxy service file on all k8s nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

8.3.3 Add the kube-proxy configuration on all k8s nodes

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy

9. Install Calico

9.1 The following steps are performed only on master01

9.1.1 Change the Calico CIDRs

# vim calico.yaml
vim calico-ipv6.yaml

# In the calico-config ConfigMap:
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
},

- name: IP
  value: "autodetect"
- name: IP6
  value: "autodetect"
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/12"
- name: CALICO_IPV6POOL_CIDR
  value: "fc00::/48"
- name: FELIX_IPV6SUPPORT
  value: "true"

# If you have no local IPv6, use calico.yaml
# kubectl apply -f calico.yaml

# If you have local IPv6, use calico-ipv6.yaml
kubectl apply -f calico-ipv6.yaml

9.1.2 Check container status

# Calico initialization is slow; be patient, it takes roughly ten minutes
[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56cdb7c587-bq6rn   1/1     Running   0          10m
kube-system   calico-node-2vt27                          1/1     Running   0          10m
kube-system   calico-node-7bg82                          1/1     Running   0          10m
kube-system   calico-node-gg9tv                          1/1     Running   0          10m
kube-system   calico-node-lkdhr                          1/1     Running   0          10m
kube-system   calico-node-msl6j                          1/1     Running   0          10m
kube-system   calico-node-qqrx9                          1/1     Running   0          10m
kube-system   calico-node-tgzxk                          1/1     Running   0          10m
kube-system   calico-node-z59jx                          1/1     Running   0          10m
kube-system   calico-typha-6775694657-xzmcs              1/1     Running   0          10m
[root@k8s-master01 ~]#

10. Install CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the file

cd coredns/
cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.1.2 Install

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
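Optionally, before moving on, confirm the rollout completed; the deployment and service names below are taken from the creation output above:

kubectl -n kube-system rollout status deployment/coredns
kubectl -n kube-system get svc kube-dns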

11. Install Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install Metrics Server

In recent Kubernetes versions, cluster resource metrics are collected by Metrics Server, which exposes node and Pod resource usage such as CPU and memory (this is the data behind kubectl top).

# Install metrics-server
cd metrics-server/
kubectl apply -f metrics-server.yaml

11.1.2 Wait a moment, then check the status

kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
k8s-node03     96m          1%     769Mi           9%
k8s-node04     68m          0%     673Mi           8%
k8s-node05     82m          1%     679Mi           8%
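kubectl top is served by the Metrics API through the aggregation layer; if the table above comes back empty, you can query the API directly to see whether metrics-server is responding (these are the standard Metrics API paths, not specific to this deployment):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
# Per-pod metrics live under the namespaced path, e.g.:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods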
12. Cluster verification

12.1 Deploy a pod resource

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the pod to resolve the kubernetes service in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.61     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.63     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.62     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.61     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Exec into busybox and ping another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.64
PING 192.168.1.64 (192.168.1.64): 56 data bytes
64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms

# Connectivity here shows the pod can communicate across namespaces and across hosts

12.6 Create three replicas and verify they are spread across different nodes (delete when done)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml

kubectl apply -f dashboard.yaml
kubectl apply -f dashboard-user.yaml

13.1 Change the dashboard svc to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

13.2 Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3 Create a token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Inlkd0RKV2lQeUpvNmRxb2hENDlRM3llWU55T2I4dC0wVW5KOU5PZGRSdWsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU1NzA2MTQwLCJpYXQiOjE2NTU3MDI1NDAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZGVhYjdiY2MtNDczZS00N2E0LThlYTUtZmE4Yjc2NGY2NGJjIn19LCJuYmYiOjE2NTU3MDI1NDAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.YzVrnSq3IuWn3qqY_td7SPqHisT40Gk1neMx7Ok9PsTxd6RASWxv9Y_1-T4wpE3ljaCiXxMBETzvYDgf-y9FOxm6drkQWWLk9UUuvOdjexxkdTXztB5X_0BiUGcMlvD3CA0qFbnzcg1cLpokypkuOnlSB8GBTleNyhQvHQnoXU3fSUCNRR_zHu2bRNgJZwABPMdj2D42EQndD56ZDP4g4IK8iMVJbaM-6DdNjdpfQx2358n8syPDjznu_1W1fUvwxY3eoEyeuIEjDbEeYEwh5uW2k4NOjW8m54W2YgmipDuqpvIB_-cnAo_KzF2q1Qb4WpIAItGkkpgwQFMFagKRTg

13.4 Log in to the dashboard

https://192.168.1.61:30034/
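A quick reachability check from the command line, assuming the NodePort 30034 shown in 13.2 (the dashboard serves a self-signed certificate, hence -k); then paste the token from 13.3 into the login page:

curl -k -I https://192.168.1.61:30034/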

14. Install ingress

14.1 Run the deployment

cd ingress/
kubectl apply -f deploy.yaml
kubectl apply -f backend.yaml

# After the above finishes creating, run:
kubectl apply -f ingress-demo-app.yaml

kubectl get ingress
NAME               CLASS   HOSTS                            ADDRESS        PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.62   80      7s

14.2 Filter for the ingress ports

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
ingress-nginx   ingress-nginx-controller             NodePort    10.104.231.36   <none>   80:32636/TCP,443:30579/TCP   104s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.101.85.88    <none>   443/TCP                      105s
[root@hello ~/yaml]#

15. IPv6 test

# Deploy an application
[root@k8s-master01 ~]# vim cby.yaml
[root@k8s-master01 ~]# cat cby.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80

[root@k8s-master01 ~]# kubectl apply -f cby.yaml

# Check the port
[root@k8s-master01 ~]# kubectl get svc
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chenby   NodePort   fd00::a29c   <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

# Access over the internal network
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://192.168.1.61:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

# Access over the public network
[root@localhost yaml]# curl -I http://[2408:8207:78cc:5cc1:181c::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

16. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
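If you work in zsh rather than bash, kubectl can generate the matching completion script; an optional addition:

source <(kubectl completion zsh)
echo "source <(kubectl completion zsh)" >> ~/.zshrc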

About

https://www.oiox.cn/
https://www.oiox.cn/index.php/start-page.html

You can also find these articles on CSDN, GitHub, 知乎, 开源中国, 思否, 掘金, 简书, 华为云, 阿里云, 腾讯云, 哔哩哔哩, 今日头条, 新浪微博, and my personal blog.

Search for 《小陈运维》 anywhere on the web.

Articles are published mainly on the WeChat official account 《Linux运维交流社区》.
