Building Load Balancing and High Availability with Nginx + Keepalived



1. Preparation

  • Two physical machines: 10.117.201.80 and 10.117.201.81

2. Installation

  • Nginx: installation is not covered here; it must be installed on both .80 and .81 (see the earlier installation guide).

  • Keepalived: install it on both physical machines as follows.

  • Download keepalived-2.0.15.tar.gz.

  • Extract it under /usr/local/ and build:

    tar -xf  keepalived-2.0.15.tar.gz
    mv keepalived-2.0.15 keepalived
    cd keepalived
    ./configure
    make
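
    Note: on a minimal CentOS/RHEL install the build also assumes a C toolchain and the OpenSSL headers are present; if ./configure fails before reaching the warning below, installing them first usually helps (package names are for yum-based systems):

    yum -y install gcc make openssl-devel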
    

    The build emits a warning:

    *** WARNING - this build will not support IPVS with IPv6. Please install libnl/libnl-3 dev libraries to support IPv6 with IPVS.
    

    Fix:

    yum -y install libnl libnl-devel
    

    Re-run the configuration check: ./configure

    Then compile: make

    This time make fails with an error (the log below happens to come from a keepalived-2.0.19 source tree; the same failure and fix apply):

     cd . && /bin/sh /data/keepalived-2.0.19/missing automake-1.15 --foreign Makefile
    /data/keepalived-2.0.19/missing: line 81: automake-1.15: command not found
    WARNING: 'automake-1.15' is missing on your system.
             You should only need it if you modified 'Makefile.am' or
             'configure.ac' or m4 files included by 'configure.ac'.
             The 'automake' program is part of the GNU Automake package:
             <http://www.gnu.org/software/automake>
             It also requires GNU Autoconf, GNU m4 and Perl in order to run:
             <http://www.gnu.org/software/autoconf>
             <http://www.gnu.org/software/m4/>
             <http://www.perl.org/>
    make: *** [Makefile.in] Error 127
    

    Fix:

    yum install automake -y
    autoreconf -ivf
    make          # run make again
    make install
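
    After make install it is worth confirming that the freshly built binary is in place:

    /usr/local/sbin/keepalived --version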
    

    Check the systemd unit file installed by make install:

    [root@mon-longi system]# pwd
    /usr/lib/systemd/system
    [root@mon-longi system]# vi keepalived.service
    [Unit]
    Description=LVS and VRRP High Availability Monitor
    After= network-online.target syslog.target
    Wants=network-online.target
    
    [Service]
    Type=forking
    PIDFile=/var/run/keepalived.pid
    KillMode=process
    EnvironmentFile=-/usr/local/etc/sysconfig/keepalived
    ExecStart=/usr/local/sbin/keepalived $KEEPALIVED_OPTIONS
    ExecReload=/bin/kill -HUP $MAINPID
    
    [Install]
    WantedBy=multi-user.target
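
    If you edit this unit file later, reload systemd so the change is picked up:

    systemctl daemon-reload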
    

    Locate the keepalived configuration file:

    [root@mon local]# whereis keepalived.conf
    keepalived: /usr/local/sbin/keepalived /usr/local/etc/keepalived /usr/local/keepalived
    [root@mon local]# cd etc/keepalived/
    [root@mon keepalived]# ll
    total 8
    -rw-r--r-- 1 root root 3550 Apr 20 14:24 keepalived.conf
    drwxr-xr-x 2 root root 4096 Apr 20 14:24 samples
    [root@mon keepalived]# vi keepalived.conf
    

    Annotated keepalived.conf for the master node (.81); the file lives in /usr/local/etc/keepalived:

    ! Configuration File for keepalived
    # global settings
    global_defs {
       # alert e-mail recipients
       notification_email {
         xxx@163.com
       }
       # alert e-mail sender
       notification_email_from Alexandre.Cassen@firewall.loc
       # SMTP server address
       smtp_server 192.168.200.1
       # SMTP connect timeout
       smtp_connect_timeout 30
       # router_id identifies this load balancer and should be unique on the LAN
       router_id lvs_81
    }

    # health-check script; the block name must match the entry in track_script below
    vrrp_script check_ngx {
       script "/usr/local/etc/keepalived/check_ngx.sh"   # location of the script
       interval 1   # run interval in seconds
    }

    # a VRRP instance
    vrrp_instance VI_1 {
        # state is MASTER or BACKUP (uppercase); MASTER is the active node
        state MASTER
        # network interface used for VRRP traffic
        interface eth0
        # virtual router ID; it forms the last byte of the virtual router's MAC address
        virtual_router_id 51
        # this node's priority; the master's must be higher than any backup's
        priority 100
        # advertisement interval in seconds
        advert_int 1
        # authentication settings
        authentication {
            # authentication method
            auth_type PASS
            # authentication password
            auth_pass 1111
        }
        # virtual IP addresses: several are allowed, one per line; this must be
        # the same VIP that clients are told to use
        virtual_ipaddress {
            10.117.201.88/32   # the VIP chosen for this setup
        }
        # scripts to track; entries refer to vrrp_script blocks defined above
        track_script {
          check_ngx
        }
    }
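
    Recent keepalived 2.x releases can also syntax-check a configuration before a restart, which catches typos such as unbalanced braces; if your build supports the flag:

    /usr/local/sbin/keepalived -t -f /usr/local/etc/keepalived/keepalived.conf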
    
    

    Annotated keepalived.conf for the backup node (.80):

    ! Configuration File for keepalived
    # global settings
    global_defs {
       # alert e-mail recipients
       notification_email {
         xxx@163.com
       }
       # alert e-mail sender
       notification_email_from Alexandre.Cassen@firewall.loc
       # SMTP server address
       smtp_server 192.168.200.1
       # SMTP connect timeout
       smtp_connect_timeout 30
       # router_id identifies this load balancer and should be unique on the LAN
       router_id lvs_80
    }

    # health-check script; the block name must match the entry in track_script below
    vrrp_script check_ngx {
       script "/usr/local/etc/keepalived/check_ngx.sh"   # location of the script
       interval 1   # run interval in seconds
    }

    # a VRRP instance
    vrrp_instance VI_1 {
        # state is MASTER or BACKUP (uppercase); BACKUP is the standby node
        state BACKUP
        # network interface used for VRRP traffic
        interface eth0
        # virtual router ID; must match the master's
        virtual_router_id 51
        # this node's priority; lower than the master's
        priority 50
        # advertisement interval in seconds
        advert_int 1
        # authentication settings; must match the master's
        authentication {
            # authentication method
            auth_type PASS
            # authentication password
            auth_pass 1111
        }
        # virtual IP addresses; the same VIP as on the master
        virtual_ipaddress {
            10.117.201.88/32
        }
        # scripts to track; entries refer to vrrp_script blocks defined above
        track_script {
          check_ngx
        }
    }
    
    

    Start the service and check the processes:

    [root@mon keepalived]# systemctl start keepalived.service
    [root@mon keepalived]# ps -ef|grep keep
    root      609038       1  0 11:00 ?        00:00:00 /usr/local/sbin/keepalived -f /usr/local/etc/keepalived/keepalived.conf -D -S 0
    root      609039  609038  0 11:00 ?        00:00:00 /usr/local/sbin/keepalived -f /usr/local/etc/keepalived/keepalived.conf -D -S 0
    root      609040  609038  0 11:00 ?        00:00:01 /usr/local/sbin/keepalived -f /usr/local/etc/keepalived/keepalived.conf -D -S 0
    root      645454  639793  0 11:55 pts/1    00:00:00 grep --color=auto keep
    

    Check whether the virtual IP has been assigned:

    [root@mon keepalived]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:4d:d6:c0 brd ff:ff:ff:ff:ff:ff
        inet 10.117.201.81/24 brd 10.117.201.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 10.117.201.88/32 scope global eth0    <-- the virtual IP has been assigned
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fe4d:d6c0/64 scope link 
           valid_lft forever preferred_lft forever
    

    Now configure keepalived on the backup node and start it. The virtual IP should NOT appear there; if it does, the configuration is wrong and the backup is fighting the master for the VIP, a condition known as split-brain.

    [root@mon-longi keepalived]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:a1:fd:3f brd ff:ff:ff:ff:ff:ff
        inet 10.117.201.80/24 brd 10.117.201.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fea1:fd3f/64 scope link 
           valid_lft forever preferred_lft forever
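
    If split-brain is ever suspected, watching VRRP advertisements on the wire shows which node is currently announcing itself as master (vrrp is a standard libpcap filter keyword):

    tcpdump -i eth0 vrrp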
    

    Failover test: switching the VIP between master and backup

    First stop keepalived on the master node, then check whether the backup node acquires the virtual IP 10.117.201.88/32.

    # On the master node, stop keepalived:
    [root@mon ~]# systemctl stop keepalived.service
    # On the backup node, check the virtual IP:
    [root@mon-longi keepalived]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:a1:fd:3f brd ff:ff:ff:ff:ff:ff
        inet 10.117.201.80/24 brd 10.117.201.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 10.117.201.88/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fea1:fd3f/64 scope link 
           valid_lft forever preferred_lft forever
    

    The backup node has indeed acquired the virtual IP. Now start the master node again and check the VIP on both nodes; the master should take it back.

    # Start keepalived on the master node again:
    [root@mon ~]# systemctl start keepalived.service
    # Check the virtual IP on the master node:
    [root@mon ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:4d:d6:c0 brd ff:ff:ff:ff:ff:ff
        inet 10.117.201.81/24 brd 10.117.201.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 10.117.201.88/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fe4d:d6c0/64 scope link 
           valid_lft forever preferred_lft forever
    # Check the virtual IP on the backup node:
    [root@mon-longi keepalived]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:a1:fd:3f brd ff:ff:ff:ff:ff:ff
        inet 10.117.201.80/24 brd 10.117.201.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fea1:fd3f/64 scope link 
           valid_lft forever preferred_lft forever
    

    As shown above, the master node has indeed taken the VIP back, since its higher priority lets it preempt the backup.
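
    A simple way to watch a failover live is to poll the VIP from a third machine while keepalived is stopped and started on the master; a plain shell loop is enough:

    while true; do ping -c1 -W1 10.117.201.88 >/dev/null && echo up || echo down; sleep 1; done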

3. Combining with Nginx for high availability

Nginx configuration: both physical machines need Nginx configured. Go to the Nginx installation directory (on both .80 and .81 it is under /application) and edit nginx.conf.

[root@mon ~]# cd /application
[root@mon application]# ls
nginx  nginx-1.16.1
[root@mon application]# cd nginx-1.16.1/
[root@mon nginx-1.16.1]# ll
total 0
drwx------  2 root root   6 Apr 20 10:19 client_body_temp
drwxr-xr-x  2 root root 333 Apr 20 17:16 conf
drwx------  2 root root   6 Apr 20 10:19 fastcgi_temp
drwxr-xr-x  2 root root  40 Apr 20 10:15 html
drwxr-xr-x  2 root root  58 Apr 21 11:00 logs
drwx------ 12 root root  96 Apr 20 17:22 proxy_temp
drwxr-xr-x  2 root root  19 Apr 20 10:15 sbin
drwx------  2 root root   6 Apr 20 10:19 scgi_temp
drwx------  2 root root   6 Apr 20 10:19 uwsgi_temp
[root@mon nginx-1.16.1]# vi conf/nginx.conf
user  root;
worker_processes  auto;
error_log  logs/error.log  error;
pid        logs/nginx.pid;

events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    tcp_nopush     on;
    tcp_nodelay   on;   
    keepalive_timeout  65;
    upstream lvs_test {
        server 10.117.201.82:80 weight=1;
        server 10.117.201.83:80 weight=2;
    }

    server {
        listen       80;
        server_name  localhost;
        #access_log  logs/host.access.log  main;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://lvs_test;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Both machines get the same configuration.
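
Before reloading, Nginx can verify the configuration file itself (standard -t and -s reload options of the nginx binary):

[root@mon nginx-1.16.1]# /application/nginx-1.16.1/sbin/nginx -t
[root@mon nginx-1.16.1]# /application/nginx-1.16.1/sbin/nginx -s reload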

Next, create the Nginx check script check_ngx.sh (the file referenced by vrrp_script in keepalived.conf) in /usr/local/etc/keepalived/: vi check_ngx.sh

#!/bin/bash
# If no nginx process is running, try to start it once; if it still will not
# come up, stop keepalived so the VIP fails over to the other node.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    /application/nginx-1.16.1/sbin/nginx
    sleep 2   # give nginx a moment to fork its workers before re-checking
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        systemctl stop keepalived.service
    fi
fi
# Stricter variant: require the full expected process count (see the note below)
#if [ "$(ps aux | grep nginx | grep -v grep | wc -l)" -ne 17 ]; then
#    systemctl stop keepalived.service
#fi
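
Make the script executable, or keepalived will not be able to run it; it can also be tested by hand first:

chmod +x /usr/local/etc/keepalived/check_ngx.sh
bash -x /usr/local/etc/keepalived/check_ngx.sh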

The script is the same on both machines; only the -ne 17 in the commented-out block differs. That number is the machine's CPU core count plus one (the master process): 8+1 on .80 and 16+1 on .81. It follows from the worker_processes auto directive in nginx.conf, which makes Nginx start one worker process per CPU core.
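
Counting processes is a fairly coarse liveness test, since a hung worker still counts as "up". A sketch of an HTTP-based variant (the localhost URL and timeout are assumptions; adjust to your setup):

#!/bin/bash
# Consider nginx healthy only if it answers an HTTP request locally within 2s.
if ! curl -s -o /dev/null --max-time 2 http://127.0.0.1/; then
    /application/nginx-1.16.1/sbin/nginx
    sleep 2
    if ! curl -s -o /dev/null --max-time 2 http://127.0.0.1/; then
        systemctl stop keepalived.service
    fi
fi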

Nginx high availability here mainly means that if Nginx dies it is brought back up immediately; if it genuinely cannot be restarted, keepalived is stopped on that node so the VIP fails over to the backup.
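
keepalived can also run hook scripts on VRRP state changes, which is useful for logging or alerting on each failover. A minimal sketch (the notify.sh path and argument convention are assumptions), added inside vrrp_instance VI_1:

    notify_master "/usr/local/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/etc/keepalived/notify.sh backup"
    notify_fault  "/usr/local/etc/keepalived/notify.sh fault"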

Now visit the configured virtual IP:

http://10.117.201.88/
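
With upstream weights 1 and 2, repeated requests should reach 10.117.201.83 roughly twice as often as 10.117.201.82. A quick way to eyeball the distribution (assuming the two backends serve distinguishable pages):

for i in $(seq 1 9); do curl -s http://10.117.201.88/ | head -n1; done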

To test it, manually stop Nginx on the master node (.81) and see whether the self-check script takes effect.

[root@mon ~]# ps -ef|grep nginx
root      609046       1  0 11:00 ?        00:00:00 nginx: master process /application/nginx-1.16.1/sbin/nginx
root      609047  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609048  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609050  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609051  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609052  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609054  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609056  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609057  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609058  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609059  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609060  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609061  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609062  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609063  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609064  609046  0 11:00 ?        00:00:00 nginx: worker process
root      609065  609046  0 11:00 ?        00:00:00 nginx: worker process
root      755929  755666  0 14:14 pts/1    00:00:00 grep --color=auto nginx
[root@mon ~]# /application/nginx-1.16.1/sbin/nginx -s stop
[root@mon ~]# ps -ef|grep nginx
root      756417       1  0 14:14 ?        00:00:00 nginx: master process /application/nginx-1.16.1/sbin/nginx
root      756418  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756419  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756421  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756422  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756424  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756426  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756427  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756428  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756429  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756430  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756431  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756432  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756433  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756434  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756435  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756436  756417  0 14:14 ?        00:00:00 nginx: worker process
root      756482  755666  0 14:14 pts/1    00:00:00 grep --color=auto nginx

As you can see, Nginx came straight back up: the master process has a new PID (756417) and a 14:14 start time. That is because the health check runs every second (interval 1), so the script restarts Nginx almost immediately.

A few commands for reference:

systemctl stop keepalived.service                  # stop the keepalived service
systemctl start keepalived.service                 # start the keepalived service
systemctl restart keepalived.service               # restart the keepalived service
systemctl status keepalived.service                # show keepalived service status
/application/nginx-1.16.1/sbin/nginx -s stop       # stop nginx
/application/nginx-1.16.1/sbin/nginx               # start nginx
/application/nginx-1.16.1/sbin/nginx -s reload     # reload the nginx configuration (not a full restart)
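
To survive a reboot, enable keepalived at boot as well (Nginx built from source ships no systemd unit, so bringing it up at boot is left to the check script or to a unit of your own):

systemctl enable keepalived.service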

Common error reference: resolving the keepalived startup failure "Fail to start LVS and VRRP High Availability Monitor".


