Using a CNI Network Plugin (Calico) to Connect Docker Containers Across Hosts
This article explains how to use the Calico CNI network plugin to enable cross-host communication between Docker containers.
- 1. System environment
- 2. Preface
- 3. Introduction to CNI network plugins
- 4. Comparison of common CNI network plugins
- 5. How Calico networks communicate
- 6. Configuring Calico so container c1 on host A can reach container c2 on host B
- 6.1 Installing and deploying the etcd cluster
- 6.2 Installing and deploying Docker
- 6.3 Configuring Calico
- 6.4 Using Calico for cross-host Docker container communication
- 7. Calico in a Kubernetes (k8s) environment
- 8. Summary
1. System environment
This article is based on Docker 20.10.12 and CentOS 7.4.
Server OS version | Calico version | Docker version | Kubernetes (k8s) version | CPU architecture |
---|---|---|---|---|
CentOS Linux release 7.4.1708 (Core) | v2.6.12 | Docker version 20.10.12 | v1.21.9 | x86_64 |
etcd cluster architecture: etcd1 is the leader; etcd2 and etcd3 are followers.
Server | OS version | CPU architecture | Processes | Role |
---|---|---|---|---|
etcd1/192.168.110.133 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | leader |
etcd2/192.168.110.131 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | follower |
etcd3/192.168.110.132 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | follower |
Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.
Server | OS version | CPU architecture | Processes | Role |
---|---|---|---|---|
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node |
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
2. Preface
Communication between containers is a crucial part of a Kubernetes cluster. To connect containers across hosts, a CNI network plugin is needed. This article introduces the concept of CNI network plugins, compares several common ones, and then walks through using Calico to connect Docker containers across hosts.
Calico stores its state in etcd, so an etcd cluster is required. For installing and deploying an etcd cluster, see the post "Kubernetes后台数据库etcd:安装部署etcd集群,数据备份与恢复".
Examining Calico in a Kubernetes (k8s) environment assumes a working Kubernetes cluster; for installing and deploying one, see the post "Centos7 安装部署Kubernetes(k8s)集群" at https://www.cnblogs.com/renshengdezheli/p/16686769.html.
3. Introduction to CNI network plugins
CNI (Container Network Interface) is an open-source specification and set of libraries, maintained as a CNCF (Cloud Native Computing Foundation) project, for configuring network interfaces in Linux containers. In Kubernetes, CNI network plugins are what provide Pods with network connectivity.
The mainstream CNI network plugins currently include the following (a minimal example of a CNI network configuration is sketched after the list):
- Flannel: builds a flat overlay network for containers, typically using VXLAN encapsulation;
- Calico: uses BGP routing to interconnect container networks efficiently;
- Weave Net: creates an overlay network between containers, using VXLAN encapsulation in its fast datapath;
- Canal: combines Flannel and Calico, typically Flannel for networking together with Calico for network policy.
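For orientation, a CNI plugin is driven by a JSON network configuration placed under /etc/cni/net.d/ on each node. The sketch below shows a minimal config for the reference bridge plugin, just to illustrate the format; the file name, network name, bridge device, and subnet are made-up example values, and Calico installs its own, more elaborate config file.

```bash
# Illustrative only: a minimal CNI network config using the reference "bridge" plugin.
# File name, network name, bridge device and subnet are example values, not from this setup.
cat > /etc/cni/net.d/10-demo.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```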
4. Comparison of common CNI network plugins
The table below compares these CNI network plugins.
CNI plugin | Strengths | Weaknesses | Network policy support |
---|---|---|---|
Flannel | Simple to deploy and operate | Overlay encapsulation adds network latency | No |
Calico | Best performance (pure L3 routing via BGP), supports network policy | More complex to configure | Yes |
Weave Net | Feature-rich, cross-platform support | Lower performance, prone to network lock-ups | Yes |
Canal | Combines the strengths of Flannel and Calico and supports multiple network modes | Deployment and configuration are more involved | Yes |
In short, each CNI network plugin has its own strengths and limitations, and the choice should be made according to your actual requirements.
5. How Calico networks communicate
Calico is a CNI network plugin built on IP routing; it uses the BGP protocol to distribute routes and achieve efficient container-to-container networking. In Calico, every container gets its own IP address that is reachable at the network layer, and packets are routed directly to the destination container.
Calico manages container networking through the host routing tables. A Calico agent runs on every host; in Kubernetes it watches the API server to learn the IP addresses and state of all containers in the cluster. When one container sends traffic to another, the packet is matched against the routing table and forwarded along the appropriate path to the destination container.
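As a rough sketch of what this looks like on a host (the interface names and addresses below are taken from the walkthrough in section 6.4, so treat them as illustrative here): Calico programs one /32 route per local container pointing at its veth interface, plus one route per remote host's address block learned over BGP.

```bash
# Illustrative: the kinds of routes Calico programs on a host (values from section 6.4).
ip route | grep -E 'cali|bird'
# 192.168.36.192 dev cali5aa980fa781 scope link               <- /32 route to a local container
# 192.168.57.64/26 via 192.168.110.131 dev ens32 proto bird   <- remote host's block, learned via BGP
```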
6. Configuring Calico so container c1 on host A can reach container c2 on host B
The problem to solve: make Docker container c1 on physical host A able to reach Docker container c2 on physical host B.
Method 1: container c1 on host A and container c2 on host B each publish a port on their host, and the containers reach each other through the hosts' addresses and published ports (a sketch follows). This works, but it is cumbersome; is there a better way?
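For completeness, a hedged sketch of method 1 (the image, ports, and the use of host B's address 192.168.110.131 are illustrative):

```bash
# On host B: publish container c2's port 80 on the host's port 8080 (image and ports are example values).
docker run -d --name c2 -p 8080:80 nginx
# On host A: reach c2 indirectly through host B's address and the published port.
curl http://192.168.110.131:8080
```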
Method 2: use a network plugin to meet the requirement directly; here we use the Calico network plugin.
6.1 Installing and deploying the etcd cluster
Because Calico stores its data in etcd, an etcd cluster is required.
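The etcd deployment itself is covered in the post referenced earlier. As a rough sketch only (the member names and addresses mirror the cluster used here; the data directory and everything else are assumptions), a static three-node cluster is started on etcd1 with flags like these, and the other two nodes use their own name and IPs:

```bash
# Sketch for etcd1 (192.168.110.133); adjust --name and the IPs on etcd2/etcd3.
etcd --name etcd133 \
  --data-dir /var/lib/etcd \
  --listen-peer-urls http://192.168.110.133:2380 \
  --initial-advertise-peer-urls http://192.168.110.133:2380 \
  --listen-client-urls http://192.168.110.133:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.110.133:2379 \
  --initial-cluster etcd133=http://192.168.110.133:2380,etcd131=http://192.168.110.131:2380,etcd132=http://192.168.110.132:2380 \
  --initial-cluster-state new
```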
Check the health of the etcd cluster:
```
[root@etcd1 ~]# etcdctl cluster-health
member 341a3c460c1c993a is healthy: got healthy result from http://192.168.110.131:2379
member 4679fe0fcb37326d is healthy: got healthy result from http://192.168.110.132:2379
member ab23bcc86cf3190b is healthy: got healthy result from http://192.168.110.133:2379
cluster is healthy
```
List the etcd cluster members; etcd133 is the leader:
```
[root@etcd1 ~]# etcdctl member list
341a3c460c1c993a: name=etcd131 peerURLs=http://192.168.110.131:2380 clientURLs=http://192.168.110.131:2379,http://localhost:2379 isLeader=false
4679fe0fcb37326d: name=etcd132 peerURLs=http://192.168.110.132:2380 clientURLs=http://192.168.110.132:2379,http://localhost:2379 isLeader=false
ab23bcc86cf3190b: name=etcd133 peerURLs=http://192.168.110.133:2380 clientURLs=http://192.168.110.133:2379,http://localhost:2379 isLeader=true
```
etcd currently holds no data:
```
[root@etcd1 ~]# etcdctl ls /
```
6.2 Installing and deploying Docker
Install Docker on all three nodes so they can run containers:
```
[root@etcd1 ~]# yum -y install docker-ce
[root@etcd2 ~]# yum -y install docker-ce
[root@etcd3 ~]# yum -y install docker-ce
```
Modify Docker's startup parameters so that Docker uses etcd as its cluster store. The Docker unit file is at /usr/lib/systemd/system/docker.service.
```
[root@etcd1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com
[root@etcd1 ~]# etcdctl member list
341a3c460c1c993a: name=etcd131 peerURLs=http://192.168.110.131:2380 clientURLs=http://192.168.110.131:2379,http://localhost:2379 isLeader=false
4679fe0fcb37326d: name=etcd132 peerURLs=http://192.168.110.132:2380 clientURLs=http://192.168.110.132:2379,http://localhost:2379 isLeader=false
ab23bcc86cf3190b: name=etcd133 peerURLs=http://192.168.110.133:2380 clientURLs=http://192.168.110.133:2379,http://localhost:2379 isLeader=true
```
Add the startup parameter --cluster-store=etcd://192.168.110.133:2379:
```
[root@etcd1 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd1 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock
```
Reload the systemd configuration and restart Docker:
```
[root@etcd1 ~]# systemctl daemon-reload ; systemctl restart docker
```
The parameter has been applied: /usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock. With this setting, etcd stores Docker's cluster-store (libnetwork) data.
```
[root@etcd1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2022-02-16 15:39:50 CST; 39s ago
     Docs: https://docs.docker.com
 Main PID: 1390 (dockerd)
   Memory: 30.8M
   CGroup: /system.slice/docker.service
           └─1390 /usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock
```
Do the same on the other two nodes, changing the etcd IP to the local node's address:
```
[root@etcd2 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd2 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.131:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd2 ~]# systemctl daemon-reload ; systemctl restart docker
[root@etcd2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2022-02-16 15:39:57 CST; 41s ago
     Docs: https://docs.docker.com
 Main PID: 1348 (dockerd)
   Memory: 32.4M
   CGroup: /system.slice/docker.service
           └─1348 /usr/bin/dockerd --cluster-store=etcd://192.168.110.131:2379 -H fd:// --containerd=/run/containerd/containerd.sock

[root@etcd3 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd3 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.132:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd3 ~]# systemctl daemon-reload ; systemctl restart docker
[root@etcd3 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2022-02-16 15:39:59 CST; 41s ago
     Docs: https://docs.docker.com
 Main PID: 1355 (dockerd)
   Memory: 34.7M
   CGroup: /system.slice/docker.service
           └─1355 /usr/bin/dockerd --cluster-store=etcd://192.168.110.132:2379 -H fd:// --containerd=/run/containerd/containerd.sock
```
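As an alternative to editing the packaged unit file directly (a later docker-ce package upgrade may overwrite it), the same flag can be injected through a systemd drop-in. This is a sketch; the drop-in file name is arbitrary:

```bash
# Sketch: override ExecStart via a drop-in instead of editing docker.service itself.
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/cluster-store.conf <<'EOF'
[Service]
# An empty ExecStart= clears the packaged value before setting the new one.
ExecStart=
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock
EOF
systemctl daemon-reload && systemctl restart docker
```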
6.3 Configuring Calico
Create the Calico directory and configuration file; this is needed on all three nodes:
```
[root@etcd1 ~]# mkdir /etc/calico
[root@etcd1 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
>   datastoreType: "etcdv2"
>   etcdEndpoints: "http://192.168.110.133:2379"
> EOF

# The calico configuration file is now in place
[root@etcd1 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.110.133:2379"

[root@etcd2 ~]# mkdir /etc/calico
[root@etcd2 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
>   datastoreType: "etcdv2"
>   etcdEndpoints: "http://192.168.110.131:2379"
> EOF
[root@etcd2 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.110.131:2379"

[root@etcd3 ~]# mkdir /etc/calico
[root@etcd3 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
>   datastoreType: "etcdv2"
>   etcdEndpoints: "http://192.168.110.132:2379"
> EOF
[root@etcd3 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.110.132:2379"
```
Create a directory for the Calico image and tools:
```
[root@etcd1 ~]# mkdir etcd-calico
[root@etcd1 ~]# cd etcd-calico/
```
calicoctl is the Calico command-line tool, and calico-node-v2.tar is the calico-node image archive.
```
[root@etcd1 etcd-calico]# ls
calicoctl  calico-node-v2.tar
```
The other two nodes also need these two files:
```
[root@etcd1 etcd-calico]# scp ./* etcd2:/root/etcd-calico/
root@etcd2's password:
calicoctl                 100%   31MB  98.1MB/s   00:00
calico-node-v2.tar        100%  269MB  29.9MB/s   00:09
[root@etcd1 etcd-calico]# scp ./* etcd3:/root/etcd-calico/
root@etcd3's password:
calicoctl                 100%   31MB  96.3MB/s   00:00
calico-node-v2.tar        100%  269MB  67.3MB/s   00:04
```
Make calicoctl executable and move it into the PATH:
```
[root@etcd1 etcd-calico]# chmod +x calicoctl
[root@etcd1 etcd-calico]# mv calicoctl /bin/
```
Load the image:
```
[root@etcd1 etcd-calico]# docker load -i calico-node-v2.tar
df64d3292fd6: Loading layer [==================================================>]  4.672MB/4.672MB
d6f0e85be2d0: Loading layer [==================================================>]  8.676MB/8.676MB
c9818c503193: Loading layer [==================================================>]  250.9kB/250.9kB
1f748fca5871: Loading layer [==================================================>]  4.666MB/4.666MB
714c5990d9e8: Loading layer [==================================================>]  263.9MB/263.9MB
Loaded image: quay.io/calico/node:v2.6.12
```
Repeat the same steps on the other two nodes:
```
[root@etcd2 ~]# mkdir etcd-calico
[root@etcd2 ~]# cd etcd-calico/
[root@etcd2 etcd-calico]# pwd
/root/etcd-calico
[root@etcd2 etcd-calico]# ls
calicoctl  calico-node-v2.tar
[root@etcd2 etcd-calico]# chmod +x calicoctl
[root@etcd2 etcd-calico]# mv calicoctl /bin/
[root@etcd2 etcd-calico]# docker load -i calico-node-v2.tar

[root@etcd3 ~]# mkdir etcd-calico
[root@etcd3 ~]# cd etcd-calico/
[root@etcd3 etcd-calico]# ls
calicoctl  calico-node-v2.tar
[root@etcd3 etcd-calico]# chmod +x calicoctl
[root@etcd3 etcd-calico]# mv calicoctl /bin/
[root@etcd3 etcd-calico]# docker load -i calico-node-v2.tar
```
Start the Calico node on all three nodes:
```
[root@etcd1 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running command to load modules: modprobe -a xt_set ip6_tables
......
Running the following command to start calico-node:

docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=etcd1 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://192.168.110.133:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.

2022-02-16 08:00:06.363 [INFO][9] startup.go 173: Early log level set to info
......
2022-02-16 08:00:06.536 [INFO][14] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
```
A calico-node container is now running on each node:
```
[root@etcd1 etcd-calico]# docker ps
CONTAINER ID   IMAGE                         COMMAND         CREATED          STATUS          PORTS   NAMES
ac7d48a378b6   quay.io/calico/node:v2.6.12   "start_runit"   57 seconds ago   Up 56 seconds           calico-node
```
Start the Calico node on the other two nodes as well:
```
[root@etcd2 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
[root@etcd2 etcd-calico]# docker ps
CONTAINER ID   IMAGE                         COMMAND         CREATED              STATUS              PORTS   NAMES
bc99f286802f   quay.io/calico/node:v2.6.12   "start_runit"   About a minute ago   Up About a minute           calico-node

[root@etcd3 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
[root@etcd3 etcd-calico]# docker ps
CONTAINER ID   IMAGE                         COMMAND         CREATED              STATUS              PORTS   NAMES
07ba9ccdcd4d   quay.io/calico/node:v2.6.12   "start_runit"   About a minute ago   Up About a minute           calico-node
```
Because the data is stored in etcd, each node can see the other hosts' information:
```
[root@etcd1 etcd-calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.131 | node-to-node mesh | up    | 08:00:13 | Established |
| 192.168.110.132 | node-to-node mesh | up    | 08:00:14 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@etcd2 etcd-calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.133 | node-to-node mesh | up    | 08:00:13 | Established |
| 192.168.110.132 | node-to-node mesh | up    | 08:00:14 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@etcd3 etcd-calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.133 | node-to-node mesh | up    | 08:00:15 | Established |
| 192.168.110.131 | node-to-node mesh | up    | 08:00:15 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
```
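Since Calico keeps its state in etcd (via the v2 API here), you can also confirm that the nodes registered themselves by listing the /calico key prefix. The exact key layout varies by Calico version, so treat this as a quick sanity check rather than a documented interface:

```bash
# List the keys Calico has written into etcd (etcd v2 API).
etcdctl ls /
etcdctl ls --recursive /calico | head -n 20
```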
List the Docker network types:
```
[root@etcd1 etcd-calico]# docker network list
NETWORK ID     NAME      DRIVER    SCOPE
2db83772936d   bridge    bridge    local
3c0a5a224b09   host      host      local
422becf3aa3b   none      null      local
```
Create a network of type calico: --driver calico selects Calico's libnetwork (CNM) driver, and --ipam-driver calico-ipam lets Calico's IPAM driver manage the IP addresses.
```
[root@etcd1 etcd-calico]# docker network create --driver calico --ipam-driver calico-ipam calnet1
735f15b514db3a7310a7f3ef0734a6cd6b966753dc8cf0f7847305e0ba9fe51f
```
Because the calico network has global scope, etcd synchronizes calnet1 to all hosts:
```
[root@etcd1 etcd-calico]# docker network list
NETWORK ID     NAME      DRIVER    SCOPE
2db83772936d   bridge    bridge    local
735f15b514db   calnet1   calico    global
3c0a5a224b09   host      host      local
422becf3aa3b   none      null      local

[root@etcd2 etcd-calico]# docker network list
NETWORK ID     NAME      DRIVER    SCOPE
df0044c9f6f6   bridge    bridge    local
735f15b514db   calnet1   calico    global
03b08fa135f8   host      host      local
c19501b7ea7b   none      null      local

[root@etcd3 etcd-calico]# docker network list
NETWORK ID     NAME      DRIVER    SCOPE
331a6b638487   bridge    bridge    local
735f15b514db   calnet1   calico    global
08f90f4840c1   host      host      local
0d2160ce7298   none      null      local
```
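To confirm that calnet1 really uses Calico's libnetwork and IPAM drivers, you can inspect it on any of the three hosts; the expected values noted in the comments are what the drivers above should report:

```bash
# Full JSON description of the network.
docker network inspect calnet1
# Just the driver fields; this should print "calico calico-ipam".
docker network inspect -f '{{.Driver}} {{.IPAM.Driver}}' calnet1
```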
6.4 Using Calico for cross-host Docker container communication
Pull the busybox image on all three nodes for creating containers:
```
[root@etcd1 etcd-calico]# docker pull busybox
[root@etcd2 etcd-calico]# docker pull busybox
[root@etcd3 etcd-calico]# docker pull busybox
```
Create one container on each node, attached to the calnet1 network:
```
[root@etcd1 etcd-calico]# docker run --name c1 --net calnet1 -itd busybox
73359e36becf9859e073ebce9370b83ac36754f40356e53b82a1e2a8cd7b0066
[root@etcd2 etcd-calico]# docker run --name c2 --net calnet1 -itd busybox
28d27f3effb0ea15e6f5e6cca9e8982c68d24f459978098967842242478b6d8b
[root@etcd3 etcd-calico]# docker run --name c3 --net calnet1 -itd busybox
995241af841f2da4f69c7c3cfa2ce0766de49e7b43ec327f5dc8d57ff7838b62
```
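The addresses Calico's IPAM assigned can be read back without entering the containers. A small sketch using a Go template (run each command on the host that owns the container):

```bash
# Print c1's IP address on etcd1 (repeat with c2 on etcd2 and c3 on etcd3).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c1
```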
Enter container c1 and look at its network interfaces:
```
[root@etcd1 etcd-calico]# docker exec -it c1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.36.192/32 scope global cali0
       valid_lft forever preferred_lft forever
/ # exit
```
Every time a container is created on a host, a new virtual NIC appears on the physical machine. Note the indexes: in cali5aa980fa781@if4, the if4 refers to interface 4 inside the container, while in cali0@if5 the 5 refers to interface 5 on the physical machine. So the container's virtual NIC cali0 and the host's cali5aa980fa781 form a veth pair.
```
[root@etcd1 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1e:33:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.110.133/24 brd 192.168.110.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe1e:333e/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:8b:19:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: cali5aa980fa781@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 9a:3d:aa:d2:bc:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::983d:aaff:fed2:bca2/64 scope link
       valid_lft forever preferred_lft forever
```
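If you want to verify the veth-pair relationship rather than infer it from the @ifN suffixes, the interface indexes can be compared directly; a sketch using the interface names from the output above:

```bash
# Inside c1: the index of cali0's peer interface ...
docker exec c1 cat /sys/class/net/cali0/iflink
# ... should equal the host-side index of cali5aa980fa781 (the "5:" prefix in `ip a`).
cat /sys/class/net/cali5aa980fa781/ifindex
```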
The other two nodes look similar:
```
[root@etcd2 etcd-calico]# docker exec -it c2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
......
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.64/32 scope global cali0
       valid_lft forever preferred_lft forever
/ # exit
[root@etcd2 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
......
       valid_lft forever preferred_lft forever
5: cali2e3a79a8486@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether ce:2a:7a:5f:4e:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::cc2a:7aff:fe5f:4e83/64 scope link
       valid_lft forever preferred_lft forever

[root@etcd3 etcd-calico]# docker exec -it c3 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
......
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.64/32 scope global cali0
       valid_lft forever preferred_lft forever
/ # exit
[root@etcd3 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
......
5: califd96a41066a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 2e:ca:96:03:96:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::2cca:96ff:fe03:9683/64 scope link
       valid_lft forever preferred_lft forever
```
Use route -n to inspect the routing information:
On etcd1, the entry "192.168.57.64  192.168.110.131  255.255.255.192  UG ... ens32" means that packets a container sends to the 192.168.57.64/26 range are forwarded to host 192.168.110.131;
On etcd2, the entry "192.168.57.64  0.0.0.0  255.255.255.255  UH ... cali2e3a79a8486" means that packets destined for 192.168.57.64 are delivered to the cali2e3a79a8486 interface.
cali2e3a79a8486 and the container's cali0 interface form a veth pair, so traffic from container c1 reaches container c2, and the same applies to the other containers. In effect, Calico builds a routed path that lets container c1 on physical host A reach container c2 on physical host B.
```
[root@etcd1 etcd-calico]# route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
0.0.0.0         192.168.110.2   0.0.0.0          UG    0      0        0 ens32
169.254.0.0     0.0.0.0         255.255.0.0      U     1002   0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0      U     0      0        0 docker0
192.168.36.192  0.0.0.0         255.255.255.255  UH    0      0        0 cali5aa980fa781
192.168.36.192  0.0.0.0         255.255.255.192  U     0      0        0 *
192.168.57.64   192.168.110.131 255.255.255.192  UG    0      0        0 ens32
192.168.110.0   0.0.0.0         255.255.255.0    U     0      0        0 ens32
192.168.175.64  192.168.110.132 255.255.255.192  UG    0      0        0 ens32
```
From inside container c1, pinging container c2 succeeds:
```
[root@etcd1 etcd-calico]# docker exec -it c1 sh
/ # ping 192.168.57.64
PING 192.168.57.64 (192.168.57.64): 56 data bytes
64 bytes from 192.168.57.64: seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.57.64: seq=1 ttl=62 time=0.641 ms
64 bytes from 192.168.57.64: seq=2 ttl=62 time=0.543 ms
^C
--- 192.168.57.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.543/0.587/0.641 ms
/ # exit
```
Pinging container c2 from the physical host, however, fails (the host itself is not an endpoint in the calnet1 network, and Calico's default profile only allows traffic between endpoints attached to the same network):
```
[root@etcd1 etcd-calico]# ping 192.168.57.64
PING 192.168.57.64 (192.168.57.64) 56(84) bytes of data.
^C
--- 192.168.57.64 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5000ms
```
Look at the container's own routes: everything goes out via cali0, whatever the destination.
```
[root@etcd1 etcd-calico]# docker exec c1 ip route
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link
```
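The 169.254.1.1 next hop is not configured on any real interface; Calico answers the container's ARP request for it by enabling proxy ARP on the host side of the veth pair. You can check this as follows (interface name taken from the output above):

```bash
# A value of 1 means proxy ARP is enabled on the host-side veth, so the host
# answers ARP for 169.254.1.1 and traffic from the container is routed by the host.
cat /proc/sys/net/ipv4/conf/cali5aa980fa781/proxy_arp
```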
Now look at etcd1's routes: packets destined for 192.168.36.192 go out via cali5aa980fa781 (the new virtual NIC on etcd1), and packets destined for the 192.168.57.64/26 range go out of ens32 to 192.168.110.131. Every host knows which host each container lives on, so the routes are programmed dynamically.
```
[root@etcd1 etcd-calico]# ip route
default via 192.168.110.2 dev ens32
169.254.0.0/16 dev ens32 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.36.192 dev cali5aa980fa781 scope link
blackhole 192.168.36.192/26 proto bird
192.168.57.64/26 via 192.168.110.131 dev ens32 proto bird
192.168.110.0/24 dev ens32 proto kernel scope link src 192.168.110.133
192.168.175.64/26 via 192.168.110.132 dev ens32 proto bird
```
7. Calico in a Kubernetes (k8s) environment
In a Kubernetes environment, every node runs calico-node, and Calico's data is stored in etcd:
```
[root@k8scloude1 ~]# kubectl get pod -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-6b9fbfff44-4jzkj   1/1     Running   55         38d   10.244.251.210    k8scloude3   <none>           <none>
calico-node-bdlgm                          1/1     Running   27         38d   192.168.110.130   k8scloude1   <none>           <none>
calico-node-hx8bk                          1/1     Running   27         38d   192.168.110.128   k8scloude3   <none>           <none>
calico-node-nsbfs                          1/1     Running   27         38d   192.168.110.129   k8scloude2   <none>           <none>
coredns-545d6fc579-7wm95                   1/1     Running   27         38d   10.244.158.121    k8scloude1   <none>           <none>
coredns-545d6fc579-87q8j                   1/1     Running   27         38d   10.244.158.122    k8scloude1   <none>           <none>
etcd-k8scloude1                            1/1     Running   27         38d   192.168.110.130   k8scloude1   <none>           <none>
kube-apiserver-k8scloude1                  1/1     Running   18         27d   192.168.110.130   k8scloude1   <none>           <none>
kube-controller-manager-k8scloude1         1/1     Running   29         38d   192.168.110.130   k8scloude1   <none>           <none>
kube-proxy-599xh                           1/1     Running   27         38d   192.168.110.128   k8scloude3   <none>           <none>
kube-proxy-lpj8z                           1/1     Running   27         38d   192.168.110.129   k8scloude2   <none>           <none>
kube-proxy-zxlk9                           1/1     Running   27         38d   192.168.110.130   k8scloude1   <none>           <none>
kube-scheduler-k8scloude1                  1/1     Running   29         38d   192.168.110.130   k8scloude1   <none>           <none>
metrics-server-bcfb98c76-n4fnb             1/1     Running   26         30d   10.244.251.196    k8scloude3   <none>           <none>
```
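The routing picture on the Kubernetes nodes mirrors section 6, assuming the cluster runs Calico with its BGP backend; a quick sketch to look at it (pod CIDRs will differ from the Docker example, and routes may point at a tunl0 interface if IPIP encapsulation is enabled):

```bash
# Per-node pod CIDR routes learned over BGP (proto bird) and local /32 routes via cali* interfaces.
ip route | grep -E 'bird|cali'
# BGP peer status, if calicoctl is installed on the node.
calicoctl node status
```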
8. Summary
This article introduced the concept of CNI network plugins, compared several common ones, and walked through using Calico to connect Docker containers across hosts. With Calico, the same efficient, routed container networking can be achieved in a Kubernetes cluster, improving the reliability and scalability of applications.