Installing a Specific Version of Kubernetes on CentOS 7
2021/5/12 7:27:15
kubernetes.io documents how to install Kubernetes, for example with kubeadm, but following that guide installs the latest stable version by default, which at the time of writing (2021-05-11) is v1.21.0.
If you are a student or researcher just exploring, the latest stable release is the obvious choice. For engineers building a product, things are rarely that simple: the version a product ships with usually lags the upstream release by half a year or more, so product developers often need to deploy a specific Kubernetes version to verify certain features. That is what this article covers.
Install kubeadm, kubelet, and kubectl
Follow the kubeadm guide until you reach the step "Installing kubeadm, kubelet and kubectl". That step creates a repo file, /etc/yum.repos.d/kubernetes.repo, and then runs yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes, which installs the latest stable version. This is where our procedure diverges; the concrete steps are:
- Start from the "Installing kubeadm, kubelet and kubectl" step.
- The repo file contains a baseurl=xxx line. Open that URL in a browser, search the page (Ctrl+F) for kubernetes, and pick the repo matching your architecture; mine is aarch64, so I choose kubernetes-el7-aarch64 (a typical repo file is sketched right after these steps).
- Click into it, then click repodata, and then primary.xml.
- In primary.xml, find the version you need; I need 1.16.0:
- kubectl:http://yum.kubernetes.io/pool/b89f9c89cf0163bfd4f3d1e3a747856fa77b0c8b0bdec747acab95789103560a-kubectl-1.16.0-0.aarch64.rpm
- kubelet:http://yum.kubernetes.io/pool/392b7313850b2cf63cd68d7a5ee6505d9d9e05c7e398d41c93b1e60bc9214310-kubelet-1.16.0-0.aarch64.rpm
- kubeadm:http://yum.kubernetes.io/pool/227fa407cae2ba5e79b3643f007bce0f7982a5c02a22ab2480e921304c85b355-kubeadm-1.16.0-0.aarch64.rpm
- Install the rpms you found with yum:
```
yum install -y http://yum.kubernetes.io/pool/b89f9c89cf0163bfd4f3d1e3a747856fa77b0c8b0bdec747acab95789103560a-kubectl-1.16.0-0.aarch64.rpm
yum install -y http://yum.kubernetes.io/pool/392b7313850b2cf63cd68d7a5ee6505d9d9e05c7e398d41c93b1e60bc9214310-kubelet-1.16.0-0.aarch64.rpm
yum install -y http://yum.kubernetes.io/pool/227fa407cae2ba5e79b3643f007bce0f7982a5c02a22ab2480e921304c85b355-kubeadm-1.16.0-0.aarch64.rpm
```
- Run kubeadm version to confirm the installation:
```
[root@ecs-57f4 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
```
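For reference, the repo file created in the second step typically looked roughly like the following in the kubeadm docs of that era. This is reproduced from memory rather than from the original post, so treat the exact baseurl and gpgkey values as assumptions and compare against your own /etc/yum.repos.d/kubernetes.repo. The exclude= line is also why the yum commands above pass --disableexcludes=kubernetes.

```
# Sketch of /etc/yum.repos.d/kubernetes.repo as created per the kubeadm guide (values may differ on your system)
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-gpg-key.gpg
exclude=kubelet kubeadm kubectl
```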
Note: apart from how kubeadm, kubectl, and kubelet are installed, follow the official guide for all other steps.
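As a side note (not part of the original walkthrough), the same repo also lets yum resolve a pinned version directly, which avoids hunting for rpm URLs in primary.xml. A minimal sketch, assuming the repo sketched above is configured; the release suffix (-0 here) may differ in your repo:

```
# List the versions the repo offers (--disableexcludes is needed because of the exclude= line)
yum list --showduplicates kubeadm --disableexcludes=kubernetes

# Install a pinned version without downloading individual rpm URLs
yum install -y kubelet-1.16.0-0 kubeadm-1.16.0-0 kubectl-1.16.0-0 --disableexcludes=kubernetes
```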
Install a Kubernetes cluster with kubeadm
The key point is to pass the desired Kubernetes version via --kubernetes-version; the command is:
```
kubeadm init --kubernetes-version 1.16.0 --pod-network-cidr 10.0.0.0/16
```
I use cri-o v1.21.0 as the container runtime and hit a cgroup driver mismatch between kubelet and cri-o. The fix: running systemctl status kubelet shows Drop-In: /usr/lib/systemd/system/kubelet.service.d, under which kubeadm has added a config file, 10-kubeadm.conf. Edit that file and add KUBELET_CGROUP_ARGS (note: two lines change, a new Environment line plus the variable appended to the ExecStart line):
```
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS
```
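One step the drop-in edit leaves implicit: systemd only picks up changes to a unit drop-in after a reload, so after editing 10-kubeadm.conf the kubelet has to be restarted. A minimal sketch:

```
# Reload unit files so the modified drop-in is read, then restart kubelet
systemctl daemon-reload
systemctl restart kubelet
```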
Then wait for kubeadm to finish:
```
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 213.501765 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ecs-57f4 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ecs-57f4 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7524n3.ctfh768j7xi6d4qy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.45:6443 --token 7524n3.ctfh768j7xi6d4qy \
    --discovery-token-ca-cert-hash sha256:8c244d0a30da0d0d950609ee8fddff5f28b1bd76de21ecfe682923830450655d
[root@ecs-57f4 ~]#
```
At this point kubectl get node reports the node as NotReady. Don't panic; it only needs a network plugin.
Install a network plugin
I chose flannel. Per its documentation, a single command is enough on Kubernetes 1.17+, but I am installing 1.16.0, so I followed the link and kept digging, and found:
"For Kubernetes v1.16: kube-flannel.yaml uses ClusterRole & ClusterRoleBinding of rbac.authorization.k8s.io/v1. When you use Kubernetes v1.16, you should replace rbac.authorization.k8s.io/v1 to rbac.authorization.k8s.io/v1beta1 because rbac.authorization.k8s.io/v1 had become GA from Kubernetes v1.17."
So the install script becomes:
```
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#rbac.authorization.k8s.io/v1#rbac.authorization.k8s.io/v1beta1#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
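One caveat worth flagging (my own assumption, not an issue the original run reports): the stock kube-flannel.yml configures the pod network as 10.244.0.0/16 in its net-conf.json, while kubeadm init above was run with --pod-network-cidr 10.0.0.0/16. If pods fail to get addresses, aligning the two is worth trying, for example:

```
# Hypothetical adjustment: make flannel's Network match the --pod-network-cidr used at kubeadm init
sed -i 's#10.244.0.0/16#10.0.0.0/16#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```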
Then run kubectl get po -A and wait until all pods are Running.
Verify
```
[root@ecs-57f4 ~]# kubectl get node -owide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION              CONTAINER-RUNTIME
ecs-57f4   Ready    master   62m   v1.16.0   192.168.0.45   <none>        CentOS Linux 7 (AltArch)   4.18.0-80.7.2.el7.aarch64   cri-o://1.21.0
[root@ecs-57f4 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
[root@ecs-57f4 ~]# kubectl get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-5jqvz           1/1     Running   0          66m
kube-system   coredns-5644d7b6d9-zzn9c           1/1     Running   0          66m
kube-system   etcd-ecs-57f4                      1/1     Running   0          65m
kube-system   kube-apiserver-ecs-57f4            1/1     Running   0          65m
kube-system   kube-controller-manager-ecs-57f4   1/1     Running   0          65m
kube-system   kube-flannel-ds-7vk9n              1/1     Running   0          62m
kube-system   kube-proxy-c5m8c                   1/1     Running   0          66m
kube-system   kube-scheduler-ecs-57f4            1/1     Running   0          65m
```
Finally, deploy an application as a simple test:
```
[root@ecs-57f4 ~]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14.2
        ports:
        - containerPort: 80
[root@ecs-57f4 ~]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created
[root@ecs-57f4 ~]# kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
nginx-685d968c9-xzl6f   1/1     Running   0          5s
[root@ecs-57f4 ~]# kubectl exec -it nginx-685d968c9-xzl6f -- ls -lh
total 64K
drwxr-xr-x   1 root root 4.0K Mar 27  2019 bin
drwxr-xr-x   2 root root 4.0K Feb  3  2019 boot
drwxr-xr-x   5 root root  360 May 11 09:13 dev
drwxr-xr-x   1 root root 4.0K May 11 09:13 etc
drwxr-xr-x   2 root root 4.0K Feb  3  2019 home
drwxr-xr-x   1 root root 4.0K Mar 27  2019 lib
drwxr-xr-x   2 root root 4.0K Mar 26  2019 media
drwxr-xr-x   2 root root 4.0K Mar 26  2019 mnt
drwxr-xr-x   2 root root 4.0K Mar 26  2019 opt
dr-xr-xr-x 185 root root    0 May 11 09:13 proc
drwx------   2 root root 4.0K Mar 26  2019 root
drwxr-xr-x   1 root root 4.0K May 11 09:13 run
drwxr-xr-x   2 root root 4.0K Mar 26  2019 sbin
drwxr-xr-x   2 root root 4.0K Mar 26  2019 srv
dr-xr-xr-x  12 root root    0 May 11 09:13 sys
drwxrwxrwt   1 root root 4.0K Mar 27  2019 tmp
drwxr-xr-x   1 root root 4.0K Mar 26  2019 usr
drwxr-xr-x   1 root root 4.0K Mar 26  2019 var
[root@ecs-57f4 ~]#
```
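As an extra smoke test beyond the original walkthrough (the Service created here is my own addition, not from the post), the deployment can be exposed inside the cluster and probed from the node:

```
# Create a ClusterIP Service in front of the nginx deployment and probe it from the node
kubectl expose deployment nginx --port=80 --type=ClusterIP
kubectl get svc nginx              # note the CLUSTER-IP column
# then: curl http://<CLUSTER-IP>   # should return the nginx welcome page
```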
References
- Installing kubeadm
- Creating a cluster with kubeadm
- flannel
- Follow-up: troubleshooting kubelet and kubeadm init issues