1. Preface:
On the Internet, websites provide users with content and interactive services. A stable and reliable service gives users a good experience and keeps the site accessible. Many technologies address website reliability, and they can be grouped as follows:
High availability: keeps the service reliable and stable, masks failures, and avoids single points of failure.
High performance: connects multiple servers so they can work together on a complex computation.
Load balancing: directs user requests to multiple back-end servers so the request load is spread across them.
These technologies are collectively called cluster load balancing. Both hardware and software can provide load balancing and high availability; on the software side there are HAProxy, LVS, Keepalived, Nginx, Heartbeat, Corosync, and so on. Here we build the setup with Nginx + Keepalived.
Nginx is a capable proxy, but a single Nginx instance is itself a single point of failure. Keepalived solves this problem: it provides failover, health checking of the back ends, and high availability at the front end, which greatly reduces downtime and prevents a single point of failure from taking the site offline, keeping the website's services running normally.
2. Nginx + Keepalived has two configuration schemes:
2.1 Nginx + Keepalived master/backup configuration
This scheme uses a single VIP and two front-end machines, one master and one backup. Only one machine works at a time; as long as the master does not fail, the backup sits idle. For a site without many servers this is not economical, so it is not used here.
2.2 Nginx + Keepalived dual-master configuration
This scheme uses two VIPs and two front-end machines that back each other up, so both machines handle traffic at the same time. When one of them fails, its requests are taken over by the remaining machine. This fits the current architecture well, so this scheme is used to build the site's high-availability setup.
3. Nginx + Keepalived master/backup configuration
3.1 For details of the master/backup configuration see http://kling.blog.51cto.com/3320545/1240359
It is not covered in detail here.
4. Nginx + Keepalived dual-master configuration
4.1 Topology
4.2 Test environment:
OS: CentOS 6.4, 64-bit
Front-end node2 server:
DIP: 192.168.122.2
VIP: 192.168.122.22
Front-end node3 server:
DIP: 192.168.122.3
VIP: 192.168.122.23
Back-end servers:
web server01: 192.168.122.4
web server02: 192.168.122.5
web server03: 192.168.122.6
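For reference, the address plan above can be mirrored in /etc/hosts on each machine; this is only a convenience sketch matching the host names used in the shell prompts below, not something the setup requires:
192.168.122.2   node2    # front end 1 (nginx + keepalived, VIP 192.168.122.22)
192.168.122.3   node3    # front end 2 (nginx + keepalived, VIP 192.168.122.23)
192.168.122.4   node4    # web server01
192.168.122.5   node5    # web server02
192.168.122.6   node6    # web server03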
4.3 Software installation
Install nginx + keepalived on both front-end servers with the following script:
#!/bin/bash
# author: kuangl
# mail: kuangl@orient-media.com
# description: The installation of Nginx files.
# -------------------------------------------------------- #
## Nginx_install
# -------------------------------------------------------- #
# Nginx installation
#CURRENT_PATH=$(pwd)
for i in $(rpm -q gcc gcc-c++ kernel-devel openssl-devel zlib-devel popt-devel popt-static libnl-devel wget make |grep 'not installed' | awk '{print $2}')
do
yum -y install $i
done
[ -d /root/software ] || mkdir -p /root/software
cd /root/software
[ ! -e pcre-8.33.tar.gz ] && wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.33.tar.gz
tar -zxvf pcre-8.33.tar.gz
cd pcre-8.33
./configure
make && make install
[ $? -eq 0 ] || { echo "pcre installation failed"; exit 1; }
cd /root/software
[ ! -e nginx-1.2.9.tar.gz ] && wget http://nginx.org/download/nginx-1.2.9.tar.gz
tar -zxvf nginx-1.2.9.tar.gz
cd nginx-1.2.9
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_sub_module --with-http_stub_status_module --with-http_gzip_static_module
make && make install
[ $? -eq 0 ] || { echo "nginx installation failed"; exit 1; }
# -------------------------------------------------------- #
## Keepalived_install
# -------------------------------------------------------- #
# Keepalived installation
cd /root/software
[ ! -e keepalived-1.2.4.tar.gz ] && wget http://www.keepalived.org/software/keepalived-1.2.4.tar.gz
tar -zxvf keepalived-1.2.4.tar.gz
cd keepalived-1.2.4
ln -s /usr/src/kernels/$(uname -r) /usr/src/kernels/linux
./configure --prefix=/usr --bindir=/usr/bin --sbindir=/usr/bin --libexecdir=/usr/libexec --localstatedir=/var --libdir=/lib64 --infodir=/usr/share/info --sysconfdir=/etc --mandir=/usr/local/share/man --with-kernel-dir=/usr/src/kernels/linux
make && make install
[ $? -eq 0 ] || { echo "keepalived installation failed"; exit 1; }
chkconfig --add keepalived
chkconfig --level 345 keepalived on
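Assuming the script above is saved as install_nginx_keepalived.sh (a hypothetical name), it can be run on node2 and node3 and the result verified roughly like this:
sh install_nginx_keepalived.sh          # hypothetical file name for the script above
/usr/local/nginx/sbin/nginx -v          # should report nginx/1.2.9
keepalived -v                           # should report Keepalived v1.2.4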
4.4 Install Apache (httpd) on the back-end servers
Back-end node4
[root@node4 ~]# yum -y install httpd
[root@node4 html]# echo "this is 192.168.122.4" > /var/www/html/index.html
[root@node4 ~]# service httpd start
[root@node4 html]# curl 192.168.122.4
this is 192.168.122.4
Back-end node5
[root@node5 ~]# yum -y install httpd
[root@node5 html]# echo "this is 192.168.122.5" > /var/www/html/index.html
[root@node5 ~]# service httpd start
[root@node5 html]# curl 192.168.122.5
this is 192.168.122.5
Back-end node6
[root@node6 ~]# yum -y install httpd
[root@node6 html]# echo "this is 192.168.122.6" > /var/www/html/index.html
[root@node6 ~]# service httpd start
[root@node6 html]# curl 192.168.122.6
this is 192.168.122.6
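On a stock CentOS 6.4 install, httpd does not start on boot and the default iptables rules may block port 80; a hedged sketch of the extra housekeeping usually needed on each back-end node:
chkconfig httpd on                              # start Apache automatically after a reboot
iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # allow HTTP if iptables is enabled
service iptables save                           # persist the rule (CentOS 6)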
4.5 Configure nginx on node2 and node3 (the same configuration is used on both front ends)
[root@node2 ~]# vim /usr/local/nginx/conf/nginx.conf
upstream web1 {    # define the load-balancing group web1
    ip_hash;
    server 192.168.122.4:80;
    server 192.168.122.5:80;
    server 192.168.122.6:80;
}
server {
    listen 80;
    server_name dev.test01.com;
    location / {
        root /home/kuangl/;
        index index.html index.htm;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://web1;
    }
}
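After editing nginx.conf it is worth validating the syntax and reloading; these are standard nginx switches rather than part of the original walkthrough:
[root@node2 ~]# /usr/local/nginx/sbin/nginx -t          # check the configuration syntax
[root@node2 ~]# /usr/local/nginx/sbin/nginx -s reload   # reload if nginx is already running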
4.6 Configure keepalived on node2
[root@node2 conf]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        404060945@qq.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuanglnginx
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.122.22
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuangl
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.122.23
    }
}
4.7 Configure keepalived on node3
! Configuration File for keepalived
global_defs {
    notification_email {
        404060945@qq.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuanglnginx
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.122.22
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuangl
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.122.23
    }
}
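The two keepalived.conf files must agree on virtual_router_id and auth_pass for each VRRP instance, while state and priority are mirrored. A quick way to eyeball this (assuming root SSH access from node2 to node3) is:
# Only the state and priority lines of VI_1/VI_2 should differ between the nodes.
[root@node2 ~]# ssh root@192.168.122.3 cat /etc/keepalived/keepalived.conf | diff /etc/keepalived/keepalived.conf -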
4.8 Add the automatic nginx check script on both dual-master servers
#!/bin/bash
# description:
# Periodically check whether nginx is running; if not, try to start it.
# If nginx fails to start, stop keepalived so the VIP fails over.
status=$(ps -C nginx --no-heading | wc -l)
if [ "${status}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    status2=$(ps -C nginx --no-heading | wc -l)
    if [ "${status2}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
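The script must live at the path referenced by vrrp_script in both keepalived.conf files and be executable, for example:
# Deploy on both node2 and node3; the path comes from the keepalived.conf above.
[root@node2 ~]# vim /etc/keepalived/chk_nginx.sh        # paste the script shown above
[root@node2 ~]# chmod +x /etc/keepalived/chk_nginx.sh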
4.9 Start the nginx and keepalived services
[root@node2 ~]# service keepalived start
[root@node2 ~]# /usr/local/nginx/sbin/nginx
[root@node3 ~]# service keepalived start
[root@node3 ~]# /usr/local/nginx/sbin/nginx
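keepalived was already registered with chkconfig by the installation script, but nginx was not; one way to have it start on boot as well, mirroring the rc.local approach used for rsync later in this article:
# Run on both node2 and node3 so nginx comes back after a reboot.
echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local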
4.10 Check the VIPs with ip a
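With both nodes healthy, each front end should hold its own VIP as a secondary address on eth0 (192.168.122.22 on node2, 192.168.122.23 on node3); a minimal check:
[root@node2 ~]# ip a show eth0 | grep 192.168.122
[root@node3 ~]# ip a show eth0 | grep 192.168.122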
4.11 Test access
[kuangl@node01 ~]$ curl http://192.168.122.22
this is 192.168.122.6
[kuangl@node01 ~]$ curl http://192.168.122.22
this is 192.168.122.4
[kuangl@node01 ~]$ curl http://192.168.122.22
this is 192.168.122.5
[kuangl@node01 ~]$ curl http://192.168.122.23
this is 192.168.122.6
[kuangl@node01 ~]$ curl http://192.168.122.23
this is 192.168.122.4
[kuangl@node01 ~]$ curl http://192.168.122.23
this is 192.168.122.5
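To exercise the failover itself, one front end can be taken down and the same curl test repeated; a hedged sketch of such a check:
# Stop nginx and keepalived on node2; its VIP 192.168.122.22 should move to node3,
# and curl against either VIP should still return a back-end page.
[root@node2 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@node2 ~]# service keepalived stop
[kuangl@node01 ~]$ curl http://192.168.122.22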
5. Use rsync to synchronize data between the back-end servers
Configure daemon mode on node5 and node6; node5 is shown as the example:
[root@node5 ~]# yum -y install rsync
[root@node5 ~]# vim /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
max connections = 5
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
[web01]
path=/home/kuangl/
comment = update
ignore errors
read only = no
list = no
hosts allow = 192.168.122.0/24
auth users = root
uid = root
gid = root
secrets file = /etc/rsyncd.secrets
[root@node5 ~]# vim /etc/rsyncd.secrets
root:123456
[root@node5 ~]# chmod 0600 /etc/rsyncd.secrets
[root@node5 ~]# ll /etc/rsyncd.secrets
-rw-------. 1 root root 12 Jul 20 19:41 /etc/rsyncd.secrets
[root@node5 ~]# rsync --daemon
[root@node5 ~]# echo "rsync --daemon" >> /etc/rc.local
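Before configuring the client side it can be confirmed that the daemon is actually listening on the standard rsync port (tcp/873), for example:
[root@node5 ~]# netstat -ntlp | grep 873        # rsync --daemon listens on tcp/873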
Configure the client (command-line) mode on node4:
[root@node4 ~]# yum -y install rsync
[root@node4 ~]# vim /etc/rsyncd.secrets
123456
[root@node4 ~]# chmod 0600 /etc/rsyncd.secrets
[root@node4 kuangl]# rsync -vzrtopg --delete --progress --password-file=/etc/rsyncd.secrets rsync+inotify root@192.168.122.5::web01
sending incremental file list
rsync+inotify/
rsync+inotify/inotify-tools-3.14.tar.gz
358772 100% 1.85MB/s 0:00:00 (xfer#1, to-check=2/4)
rsync+inotify/rsync+inotify_client.sh
617 100% 3.11kB/s 0:00:00 (xfer#2, to-check=1/4)
rsync+inotify/rsync+inotify_server.sh
900 100% 4.03kB/s 0:00:00 (xfer#3, to-check=0/4)
sent 360679 bytes received 69 bytes 240498.67 bytes/sec
total size is 360289 speedup is 1.00
Check the result on node5:
[root@node5 ~]# cd /home/kuangl/
[root@node5 kuangl]# ll
total 8
-rw-r--r--. 1 root root 22 Jul 20 15:16 index.html
drwxr-xr-x. 2 root root 4096 Nov 11 2012 rsync+inotify
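The directory pushed above contains inotify helper scripts (inotify-tools-3.14.tar.gz, rsync+inotify_client.sh), which suggests the sync is meant to be triggered automatically when files change. A minimal sketch of such a watcher on node4, assuming inotify-tools is installed and reusing the module and password file configured above (this is illustrative, not the actual rsync+inotify_client.sh shown in the listing):
#!/bin/bash
# Watch the synchronized directory on node4 and push every change to the web01 module on node5.
SRC=/home/kuangl/
DST=root@192.168.122.5::web01
inotifywait -mrq -e modify,create,delete,move "$SRC" | while read event; do
    rsync -vzrtopg --delete --password-file=/etc/rsyncd.secrets "$SRC" "$DST"
done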
Source: https://blog.51cto.com/kling/1253474