Installing a Kubernetes (k8s) Cluster on CentOS with kubeadm
The authoritative version of this article is maintained on Yuque.
Documentation
- Wait, is Docker deprecated in Kubernetes?
- Container runtimes
- Ports and protocols
- kubeadm init
- kubeadm config
- Installing a network policy provider
- Creating a cluster with kubeadm
- Control plane node isolation
- Persistent Volumes
- Define environment variables for a container
- Install Docker Engine on CentOS
- Troubleshooting an unreachable Pod network
Notes
- This article uses CentOS 7.9 with k8s 1.25.3 (the latest version when the article was first published, on 2022-10-30) as the example
- The k8s version is pinned to avoid differences between versions; once you understand installing and using one version, you can try other versions on your own
- 2022-11-18: tested; the latest version at that time, 1.25.4, also works with this article
- Since k8s 1.24 and later use containerd while earlier versions used Docker, this article installs and configures both; you can change the k8s version number for learning and testing.
| | Control plane | Worker node |
| --- | --- | --- |
| Hostname | k8s | node-1 |
| IP | 192.168.80.60 | 192.168.80.16 |
Installation
- Install the required tools
sudo yum -y install vim
sudo yum -y install wget
- Point the hostname at the local IP. A hostname may only contain letters, digits, - (hyphens), and . (dots)
- Get the current hostname
hostname
- Set the hostname temporarily
hostname <hostname>
- Set the hostname permanently
echo '<hostname>' | sudo tee /etc/hostname
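A hostname that violates the character rule above causes trouble later (kubeadm and DNS both rely on it), so it is worth sanity-checking the name before writing /etc/hostname. A small sketch; the helper name and sample names are illustrative:

```shell
# Validate a candidate hostname: letters, digits, '-' and '.' only,
# and it must not begin or end with '-' or '.'
is_valid_hostname() {
  echo "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$'
}

is_valid_hostname "node-1" && echo "node-1: ok"
is_valid_hostname "bad_name" || echo "bad_name: invalid"
```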
- Edit hosts
sudo vim /etc/hosts
Control plane: add its IP
192.168.80.60 k8s
Worker node: add its IP
192.168.80.16 node-1
- Install and configure ntpdate to synchronize the system clock
sudo yum -y install ntpdate
sudo ntpdate ntp1.aliyun.com
sudo systemctl status ntpdate
sudo systemctl start ntpdate
sudo systemctl status ntpdate
sudo systemctl enable ntpdate
- Install and configure bash-completion for command auto-completion
sudo yum -y install bash-completion
source /etc/profile
- Disable the firewall, or open the required ports
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
# Control plane
firewall-cmd --zone=public --add-port=6443/tcp --permanent # Kubernetes API server (used by: all)
firewall-cmd --zone=public --add-port=2379/tcp --permanent # etcd server client API (used by: kube-apiserver, etcd)
firewall-cmd --zone=public --add-port=2380/tcp --permanent # etcd server client API (used by: kube-apiserver, etcd)
firewall-cmd --zone=public --add-port=10250/tcp --permanent # Kubelet API (used by: self, control plane)
firewall-cmd --zone=public --add-port=10259/tcp --permanent # kube-scheduler (used by: self)
firewall-cmd --zone=public --add-port=10257/tcp --permanent # kube-controller-manager (used by: self)
firewall-cmd --zone=trusted --add-source=192.168.80.60 --permanent # trust the IP of each cluster node
firewall-cmd --zone=trusted --add-source=192.168.80.16 --permanent # trust the IP of each cluster node
firewall-cmd --add-masquerade --permanent # enable masquerading (port forwarding)
firewall-cmd --reload
firewall-cmd --list-all
firewall-cmd --list-all --zone=trusted

# Worker node
firewall-cmd --zone=public --add-port=10250/tcp --permanent # Kubelet API (used by: self, control plane)
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent # NodePort Services (used by: all)
firewall-cmd --zone=trusted --add-source=192.168.80.60 --permanent # trust the IP of each cluster node
firewall-cmd --zone=trusted --add-source=192.168.80.16 --permanent # trust the IP of each cluster node
firewall-cmd --add-masquerade --permanent # enable masquerading (port forwarding)
firewall-cmd --reload
firewall-cmd --list-all
firewall-cmd --list-all --zone=trusted
- Disable swap
sudo swapoff -a
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
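The sed above comments out every /etc/fstab line that mentions swap. Its effect can be previewed on a scratch copy first; the sample fstab content below is made up:

```shell
# Preview the swap-commenting sed on a throwaway copy of fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# The swap line is now prefixed with '#'; the root line is untouched
```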
- Disable SELinux
getenforce
cat /etc/selinux/config
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat /etc/selinux/config
- Install containerd and Docker
Docker is not strictly required: starting with k8s 1.24.0, containerd replaces Docker as the container runtime. Installing Docker is still recommended, because it is used when building Docker images inside k8s, which requires the following GitLab Runner configuration; for details see: https://www.yuque.com/xuxiaowei-com-cn/gitlab-k8s/gitlab-runner-k8s

[[runners]]
  ...
  [runners.kubernetes]
    ...
    [runners.kubernetes.volumes]
      [[runners.kubernetes.volumes.host_path]]
        name = "docker"
        mount_path = "/var/run/docker.sock"
        host_path = "/var/run/docker.sock"

Note: SystemdCgroup = true in /etc/containerd/config.toml takes precedence over cgroupdriver in /etc/docker/daemon.json
# https://docs.docker.com/engine/install/centos/
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum --showduplicates list docker-ce
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo yum install -y containerd.io

# Starting docker also starts containerd
# sudo systemctl status containerd.service
sudo systemctl stop containerd.service

sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
sudo containerd config default > $HOME/config.toml
sudo cp $HOME/config.toml /etc/containerd/config.toml
# After modifying /etc/containerd/config.toml, stop docker and containerd before starting them again
sudo sed -i "s#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
# https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd-systemd
# Make sure cri is not listed in disabled_plugins in /etc/containerd/config.toml
sudo sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml

sudo systemctl enable --now containerd.service
# sudo systemctl status containerd.service
# sudo systemctl status docker.service
sudo systemctl start docker.service
# sudo systemctl status docker.service
sudo systemctl enable docker.service
sudo systemctl enable docker.socket
sudo systemctl list-unit-files | grep docker

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hnkfbj7x.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info
sudo systemctl status docker.service
sudo systemctl status containerd.service
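The two sed edits applied to /etc/containerd/config.toml above (mirroring the pause image and enabling SystemdCgroup) can be previewed on a scratch file before touching the real config. A sketch against a minimal, made-up fragment of the default config:

```shell
# A made-up fragment containing the two lines the sed commands target
cat > /tmp/config.toml.demo <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.8"
SystemdCgroup = false
EOF

sed -i "s#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g" /tmp/config.toml.demo
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /tmp/config.toml.demo
cat /tmp/config.toml.demo
```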
- Add the Alibaba Cloud k8s package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# Enable this repository
enabled=1
# Check GPG signatures of packages
gpgcheck=0
# Check the GPG signature of the repository metadata
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install the dependencies required by k8s 1.25.3
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# yum --showduplicates list kubelet --nogpgcheck
# yum --showduplicates list kubeadm --nogpgcheck
# yum --showduplicates list kubectl --nogpgcheck

# 2023-02-07: tested; version 1.24.0 also works with this article
# sudo yum install -y kubelet-1.24.0-0 kubeadm-1.24.0-0 kubectl-1.24.0-0 --disableexcludes=kubernetes --nogpgcheck

# If you see claims that worker nodes do not need kubectl, they are wrong: kubectl is pulled in as a dependency anyway,
# and if its version is not pinned during installation the latest kubectl is installed, which can cause problems
sudo yum install -y kubelet-1.25.3-0 kubeadm-1.25.3-0 kubectl-1.25.3-0 --disableexcludes=kubernetes --nogpgcheck

# 2022-11-18: tested; version 1.25.4 also works with this article
# sudo yum install -y kubelet-1.25.4-0 kubeadm-1.25.4-0 kubectl-1.25.4-0 --disableexcludes=kubernetes --nogpgcheck

# 2023-02-07: tested; version 1.25.5 also works with this article
# sudo yum install -y kubelet-1.25.5-0 kubeadm-1.25.5-0 kubectl-1.25.5-0 --disableexcludes=kubernetes --nogpgcheck

# 2023-02-07: tested; version 1.25.6 also works with this article
# sudo yum install -y kubelet-1.25.6-0 kubeadm-1.25.6-0 kubectl-1.25.6-0 --disableexcludes=kubernetes --nogpgcheck

# 2023-02-07: tested; version 1.26.0 also works with this article
# sudo yum install -y kubelet-1.26.0-0 kubeadm-1.26.0-0 kubectl-1.26.0-0 --disableexcludes=kubernetes --nogpgcheck

# 2023-02-07: tested; version 1.26.1 also works with this article
# sudo yum install -y kubelet-1.26.1-0 kubeadm-1.26.1-0 kubectl-1.26.1-0 --disableexcludes=kubernetes --nogpgcheck

# Install the latest version (not recommended for production)
# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --nogpgcheck

sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl enable kubelet
- View the kubelet logs
# Before k8s is initialized, kubelet may fail to start
journalctl -xefu kubelet
- Check the kubelet status
# Before k8s is initialized, kubelet may fail to start
sudo systemctl status kubelet
- **All of the commands above must be run on both the control plane and the worker node; make sure they finish without errors or warnings.**
- Control plane: initialize
kubeadm init --image-repository=registry.aliyuncs.com/google_containers
# To pin the API server advertise address:
# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.80.60

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl cluster-info

# If initialization fails, it can be reset with: kubeadm reset

# On success, the output ends with something like:
# kubeadm join 192.168.80.60:6443 --token f9lvrz.59mykzssqw6vjh32 \
#     --discovery-token-ca-cert-hash sha256:4e23156e2f71c5df52dfd2b9b198cce5db27c47707564684ea74986836900107
#
# To print the join command again later:
# kubeadm token create --print-join-command
- Worker node: join the cluster
# The command below comes from the output of kubeadm init above
kubeadm join 192.168.80.60:6443 --token f9lvrz.59mykzssqw6vjh32 \
    --discovery-token-ca-cert-hash sha256:4e23156e2f71c5df52dfd2b9b198cce5db27c47707564684ea74986836900107

# To print the join command again: kubeadm token create --print-join-command

# To join without verifying the CA (not recommended):
# kubeadm join 192.168.80.60:6443 --token f9lvrz.59mykzssqw6vjh32 \
# --discovery-token-unsafe-skip-ca-verification
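The --discovery-token-ca-cert-hash value is simply the SHA-256 digest of the cluster CA's public key, so it can be recomputed at any time from /etc/kubernetes/pki/ca.crt on the control plane. A sketch, run here against a throwaway self-signed RSA certificate instead of the real CA:

```shell
# On a real control plane, use /etc/kubernetes/pki/ca.crt instead of the demo cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Compute sha256:<hash of the CA public key>, the value kubeadm join expects
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

On a real control plane, point `-in` at /etc/kubernetes/pki/ca.crt and compare the result with the hash printed by kubeadm init.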
- Control plane:
kubectl get pods --all-namespaces -o wide
The coredns-* pods are Pending and the nodes are NotReady because the network has not been configured yet
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-c676cc86f-4lncg 0/1 Pending 0 3m19s <none> <none> <none> <none>
kube-system coredns-c676cc86f-7n9wv 0/1 Pending 0 3m19s <none> <none> <none> <none>
kube-system etcd-k8s 1/1 Running 0 3m26s 192.168.80.60 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 0 3m23s 192.168.80.60 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 0 3m23s 192.168.80.60 k8s <none> <none>
kube-system kube-proxy-87lx5 1/1 Running 0 81s 192.168.0.18 centos-7-9-16 <none> <none>
kube-system kube-proxy-rctn6 1/1 Running 0 3m19s 192.168.80.60 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 0 3m23s 192.168.80.60 k8s <none> <none>
[root@k8s ~]#
kubectl get nodes -o wide
[root@k8s ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
centos-7-9-16 NotReady <none> 7m58s v1.25.3 192.168.0.18 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
k8s NotReady control-plane 10m v1.25.3 192.168.80.60 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
[root@k8s ~]#
- Control plane: configure the network, using Calico here
| Kubernetes version | Calico version | Calico requirements | Manifest |
| --- | --- | --- | --- |
| 1.18、1.19、1.20 | 3.18 | https://projectcalico.docs.tigera.io/archive/v3.18/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.18/manifests/calico.yaml |
| 1.19、1.20、1.21 | 3.19 | https://projectcalico.docs.tigera.io/archive/v3.19/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.19/manifests/calico.yaml |
| 1.19、1.20、1.21 | 3.20 | https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml |
| 1.20、1.21、1.22 | 3.21 | https://projectcalico.docs.tigera.io/archive/v3.21/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml |
| 1.21、1.22、1.23 | 3.22 | https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml |
| 1.21、1.22、1.23 | 3.23 | https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml |
| 1.22、1.23、1.24 | 3.24 | https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml |
| 1.22、1.23、1.24 | 3.25 | https://projectcalico.docs.tigera.io/archive/v3.25/getting-started/kubernetes/requirements | https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml |
# Download
wget --no-check-certificate https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
# Edit the calico.yaml file
vim calico.yaml
# Add the following below - name: CLUSTER_TYPE
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# The two lines below are the new content
- name: IP_AUTODETECTION_METHOD
  value: "interface=<NIC name>"
# Apply the network configuration
kubectl apply -f calico.yaml
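The NIC name needed for IP_AUTODETECTION_METHOD can be read from `ip -o -4 addr show`; a sketch that extracts the interface carrying a given IPv4 address, demonstrated on canned output since interface names vary by machine:

```shell
# Print the interface whose IPv4 address matches $1.
# On a real node, pipe `ip -o -4 addr show` into this function.
nic_for_ip() {
  awk -v ip="$1" '$4 ~ "^"ip"/" { print $2 }'
}

# Canned `ip -o -4 addr show` output for illustration
sample='1: lo    inet 127.0.0.1/8 scope host lo
2: ens33    inet 192.168.80.60/24 brd 192.168.80.255 scope global ens33'

echo "$sample" | nic_for_ip 192.168.80.60
# → ens33
```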
- Control plane: check the pods and nodes
kubectl get nodes -o wide
[root@k8s ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
centos-7-9-16 NotReady <none> 7m58s v1.25.3 192.168.0.18 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
k8s NotReady control-plane 10m v1.25.3 192.168.80.60 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
[root@k8s ~]#
kubectl get pods --all-namespaces -o wide
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-f79f7749d-rkqgw 0/1 Pending 0 11s <none> <none> <none> <none>
kube-system calico-node-7698p 0/1 Init:0/3 0 11s 192.168.80.60 k8s <none> <none>
kube-system calico-node-tvhnb 0/1 Init:0/3 0 11s 192.168.0.18 centos-7-9-16 <none> <none>
kube-system coredns-c676cc86f-4lncg 0/1 Pending 0 8m14s <none> <none> <none> <none>
kube-system coredns-c676cc86f-7n9wv 0/1 Pending 0 8m14s <none> <none> <none> <none>
kube-system etcd-k8s 1/1 Running 0 8m21s 192.168.80.60 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 0 8m18s 192.168.80.60 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 0 8m18s 192.168.80.60 k8s <none> <none>
kube-system kube-proxy-87lx5 1/1 Running 0 6m16s 192.168.0.18 centos-7-9-16 <none> <none>
kube-system kube-proxy-rctn6 1/1 Running 0 8m14s 192.168.80.60 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 0 8m18s 192.168.80.60 k8s <none> <none>
[root@k8s ~]#
Control plane: after waiting a few minutes, check the pods and nodes again
kubectl get nodes -o wide
[root@k8s ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
centos-7-9-16 Ready <none> 23m v1.25.3 192.168.80.16 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
k8s Ready control-plane 25m v1.25.3 192.168.80.60 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 containerd://1.6.9
[root@k8s ~]#
kubectl get pods --all-namespaces -o wide
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-f79f7749d-rkqgw 1/1 Running 2 (52s ago) 17m 172.16.77.9 k8s <none> <none>
kube-system calico-node-7698p 0/1 Running 2 (52s ago) 17m 192.168.80.60 k8s <none> <none>
kube-system calico-node-tvhnb 0/1 Running 0 17m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system coredns-c676cc86f-4lncg 1/1 Running 2 (52s ago) 25m 172.16.77.8 k8s <none> <none>
kube-system coredns-c676cc86f-7n9wv 1/1 Running 2 (52s ago) 25m 172.16.77.7 k8s <none> <none>
kube-system etcd-k8s 1/1 Running 2 (52s ago) 25m 192.168.80.60 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 2 (52s ago) 25m 192.168.80.60 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 2 (52s ago) 25m 192.168.80.60 k8s <none> <none>
kube-system kube-proxy-87lx5 1/1 Running 1 (<invalid> ago) 23m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system kube-proxy-rctn6 1/1 Running 2 (52s ago) 25m 192.168.80.60 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 2 (52s ago) 25m 192.168.80.60 k8s <none> <none>
[root@k8s ~]#
- At this point the k8s installation and configuration are complete; what follows is a test.
- Control plane: create an nginx service
vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.23.2
          ports:
            - containerPort: 80
kubectl apply -f nginx.yaml

# To edit the deployment:
# kubectl edit deployment nginx-deployment
kubectl get pods --all-namespaces -o wide
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-86956f97b8-nfv2l 0/1 ContainerCreating 0 15s <none> centos-7-9-16 <none> <none>
default nginx-deployment-86956f97b8-x26kx 0/1 ContainerCreating 0 15s <none> centos-7-9-16 <none> <none>
kube-system calico-kube-controllers-f79f7749d-rkqgw 1/1 Running 2 (6m22s ago) 23m 172.16.77.9 k8s <none> <none>
kube-system calico-node-7698p 0/1 Running 2 (6m22s ago) 23m 192.168.80.60 k8s <none> <none>
kube-system calico-node-tvhnb 0/1 Running 0 23m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system coredns-c676cc86f-4lncg 1/1 Running 2 (6m22s ago) 31m 172.16.77.8 k8s <none> <none>
kube-system coredns-c676cc86f-7n9wv 1/1 Running 2 (6m22s ago) 31m 172.16.77.7 k8s <none> <none>
kube-system etcd-k8s 1/1 Running 2 (6m22s ago) 31m 192.168.80.60 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 2 (6m22s ago) 31m 192.168.80.60 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 2 (6m22s ago) 31m 192.168.80.60 k8s <none> <none>
kube-system kube-proxy-87lx5 1/1 Running 1 (<invalid> ago) 29m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system kube-proxy-rctn6 1/1 Running 2 (6m22s ago) 31m 192.168.80.60 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 2 (6m22s ago) 31m 192.168.80.60 k8s <none> <none>
[root@k8s ~]#
kubectl get pods -o wide
[root@k8s ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-86956f97b8-nfv2l 0/1 ContainerCreating 0 35s <none> centos-7-9-16 <none> <none>
nginx-deployment-86956f97b8-x26kx 0/1 ContainerCreating 0 35s <none> centos-7-9-16 <none> <none>
[root@k8s ~]#
Control plane: **check again after a few minutes**
kubectl get pods --all-namespaces -o wide
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-86956f97b8-nfv2l 1/1 Running 0 3m30s 172.16.132.193 centos-7-9-16 <none> <none>
default nginx-deployment-86956f97b8-x26kx 1/1 Running 0 3m30s 172.16.132.194 centos-7-9-16 <none> <none>
kube-system calico-kube-controllers-f79f7749d-rkqgw 1/1 Running 2 (9m37s ago) 26m 172.16.77.9 k8s <none> <none>
kube-system calico-node-7698p 0/1 Running 2 (9m37s ago) 26m 192.168.80.60 k8s <none> <none>
kube-system calico-node-tvhnb 0/1 Running 0 26m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system coredns-c676cc86f-4lncg 1/1 Running 2 (9m37s ago) 34m 172.16.77.8 k8s <none> <none>
kube-system coredns-c676cc86f-7n9wv 1/1 Running 2 (9m37s ago) 34m 172.16.77.7 k8s <none> <none>
kube-system etcd-k8s 1/1 Running 2 (9m37s ago) 34m 192.168.80.60 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 2 (9m37s ago) 34m 192.168.80.60 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 2 (9m37s ago) 34m 192.168.80.60 k8s <none> <none>
kube-system kube-proxy-87lx5 1/1 Running 1 (<invalid> ago) 32m 192.168.80.16 centos-7-9-16 <none> <none>
kube-system kube-proxy-rctn6 1/1 Running 2 (9m37s ago) 34m 192.168.80.60 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 2 (9m37s ago) 34m 192.168.80.60 k8s <none> <none>
[root@k8s ~]#
kubectl get pods -o wide
[root@k8s ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-86956f97b8-nfv2l 1/1 Running 0 4m31s 172.16.132.193 centos-7-9-16 <none> <none>
nginx-deployment-86956f97b8-x26kx 1/1 Running 0 4m31s 172.16.132.194 centos-7-9-16 <none> <none>
[root@k8s ~]#
# Control plane: view pods and services
kubectl get pod,svc -o wide
[root@k8s ~]# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-86956f97b8-nfv2l 1/1 Running 0 4m59s 172.16.132.193 centos-7-9-16 <none> <none>
pod/nginx-deployment-86956f97b8-x26kx   1/1     Running   0          4m59s   172.16.132.194   centos-7-9-16   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36m <none>
[root@k8s ~]#
# Control plane: expose the service
kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-service
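The kubectl expose command above is equivalent to applying a Service manifest by hand; a sketch of the equivalent object (nodePort is left unset so k8s picks one from the 30000-32767 range, as it did above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx        # matches the pod label from nginx.yaml
  ports:
    - port: 80        # service port
      targetPort: 80  # containerPort of the nginx pods
```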
# Control plane: view pods and services
kubectl get pod,svc -o wide
[root@k8s ~]# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-86956f97b8-nfv2l 1/1 Running 0 7m58s 172.16.132.193 centos-7-9-16 <none> <none>
pod/nginx-deployment-86956f97b8-x26kx   1/1     Running   0          7m58s   172.16.132.194   centos-7-9-16   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39m <none>
service/nginx-service NodePort 10.109.120.77 <none> 80:30593/TCP 55s app=nginx
[root@k8s ~]#
# Reboot the control plane and the worker node
# Control plane: view pods and services
kubectl get pod,svc -o wide
[root@k8s ~]# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-86956f97b8-nfv2l 1/1 Running 1 (<invalid> ago) 11m 172.16.132.196 centos-7-9-16 <none> <none>
pod/nginx-deployment-86956f97b8-x26kx   1/1     Running   1 (<invalid> ago)   11m   172.16.132.195   centos-7-9-16   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 42m <none>
service/nginx-service NodePort 10.109.120.77 <none> 80:30593/TCP 4m8s app=nginx
[root@k8s ~]#
Note that the pod/nginx-deployment-* IPs changed across the reboot, while the IP and port of service/nginx-service did not; from here on, the port of service/nginx-service can be relied upon.
Token-related commands
- Run the following on a control plane node to list tokens
kubeadm token list
- By default a token expires after 24 hours; create a new one on a control plane node with
kubeadm token create
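A kubeadm bootstrap token always has the form `<6 chars>.<16 chars>` drawn from [a-z0-9], so a mangled token in a copy-pasted join command can be caught early. A sketch; the helper name is illustrative and the token is the example one from above:

```shell
# Check the kubeadm bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}
is_kubeadm_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

is_kubeadm_token "f9lvrz.59mykzssqw6vjh32" && echo "token format ok"
```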
Related commands
- Show more detail
-o wide
- All namespaces
--all-namespaces
- A specific namespace
-n <namespace>
- List all pods
kubectl get pods --all-namespaces -o wide
- Describe a pod
kubectl -n <namespace> describe pod <name>
- Delete a pod
kubectl -n <namespace> delete pod <name>
- Open a shell in a pod
kubectl exec -it <pod-name> -- bash
- List service accounts
kubectl get sa --all-namespaces
kubectl -n <namespace> get sa
- List PersistentVolumes
kubectl get pv
- List PersistentVolumeClaims
kubectl get pvc
- List role bindings
kubectl get rolebinding --all-namespaces -o wide
Errors and fixes
- Error: /proc/sys/net/bridge/bridge-nf-call-iptables
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
# Fix:
# If this reports "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory", run modprobe br_netfilter first
sysctl -w net.bridge.bridge-nf-call-iptables=1
- Error: /proc/sys/net/ipv4/ip_forward
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
# Fix:
sysctl -w net.ipv4.ip_forward=1
- Using the control plane (master) as a worker node (removing the taint)
Note: this command may differ from untainting commands you find elsewhere, because the taint key depends on the k8s version
# https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#control-plane-node-isolation
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

# Version 1.24.0 needs the following command instead:
# kubectl taint nodes --all node-role.kubernetes.io/master-
**The taint key used by your cluster version can be checked with:**
kubectl get no -o yaml | grep taint -A 10