HA Proxy using VIP and keepalived
Abstract
This post discusses how to leverage keepalived to proxy requests (both internal and external) with only 2 proxy servers, without forfeiting high availability.
Prerequisites
Two CentOS servers with NGINX installed, a spare LAN IP, and a spare WAN IP from your cloud provider.
Deployment
1. Install keepalived (if not already present):
yum install keepalived
2. Allow binding an IP that is not defined on the system (kernel level)
This setting lets processes bind to an IP address that is not currently assigned to any local interface, so the standby proxy can be configured for the VIP before it actually owns it.
Add this to /etc/sysctl.conf:
net.ipv4.ip_nonlocal_bind = 1
Force sysctl to apply the new setting:
sysctl -p
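The two commands above can be wrapped in a small idempotent helper, so the line is not appended twice when the step is re-run. The function name and the parameterized file path are illustrative:

```shell
# ensure_sysctl FILE KEY VALUE: append "KEY = VALUE" to FILE unless KEY is already set.
ensure_sysctl() {
  local file="$1" key="$2" value="$3"
  grep -q "^${key}" "$file" 2>/dev/null || echo "${key} = ${value}" >> "$file"
}

# On each proxy (as root):
# ensure_sysctl /etc/sysctl.conf net.ipv4.ip_nonlocal_bind 1 && sysctl -p
```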
3. Configure keepalived on BOTH proxies
Edit /etc/keepalived/keepalived.conf with the content below (a default config file is created at install time; replace it):
vrrp_sync_group VG_1 {
    group {
        WAN_1
        LAN_1
    }
}

vrrp_instance WAN_1 {            # master for WAN; the name is arbitrary
    state MASTER                 # BACKUP on the other proxy
    interface eth0
    virtual_router_id 3
    dont_track_primary
    priority 90                  # lower priority is BACKUP; set <90 on the other proxy
    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth2
    }
    virtual_ipaddress {
        130.65.156.140/24 dev eth2
    }
}

vrrp_instance LAN_1 {            # backup for LAN; the name is arbitrary
    state BACKUP                 # MASTER on the other proxy
    interface eth0
    virtual_router_id 4
    dont_track_primary
    priority 80                  # lower priority is BACKUP; set higher (e.g. 90) on the other proxy
    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth1
    }
    virtual_ipaddress {
        10.5.247.10/24 dev eth1
    }
}
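For reference, the second proxy uses the same file with the states and priorities mirrored. A sketch of only the lines that differ (every other parameter stays identical):

```
vrrp_instance WAN_1 {
    state BACKUP     # proxy 1 is MASTER for WAN
    priority 80      # lower than proxy 1's 90
    ...
}

vrrp_instance LAN_1 {
    state MASTER     # proxy 1 is BACKUP for LAN
    priority 90      # higher than proxy 1's 80
    ...
}
```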
4. Set BOOTPROTO="static" on all 4 interfaces.
In some cloud environments the interfaces are periodically restarted (they keep the same IP address), but the keepalived service will not start on boot if the WAN interface is configured as dynamic (DHCP).
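An illustrative /etc/sysconfig/network-scripts/ifcfg-eth2 for proxy 1; the address is the interface's own static IP, not the VIP, and all values here are examples:

```
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=130.65.156.141   # example address on the WAN subnet, not the VIP
NETMASK=255.255.255.0
```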
5. Enable the service at boot: chkconfig keepalived on
6. Use ip addr show <interface> (with eth1 or eth2, on both proxies) to check which node currently holds each VIP.
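The check in step 6 can be scripted by grepping the ip addr output; a small helper (the function name is illustrative):

```shell
# has_vip IFACE ADDR: succeed if ADDR is currently assigned to IFACE.
has_vip() {
  ip addr show "$1" 2>/dev/null | grep -q "inet $2/"
}

# Example on a proxy:
# has_vip eth2 130.65.156.140 && echo "this node is MASTER for the WAN VIP"
```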
Explanation
- a maximum of 255 VRRP instances is allowed per proxy (virtual_router_id must be in the range 1 to 255)
- instance WAN_1 makes proxy 1 the master for WAN traffic: any request to 130.65.156.140 is proxied to web1 and web2 according to the nginx upstream configuration (several load-balancing mechanisms are available: round-robin, ip_hash, least_conn). If proxy 1 fails, proxy 2 takes the MASTER role for WAN traffic and continues proxying.
- instance LAN_1 makes proxy 2 the master for LAN traffic (DB requests/responses in this scenario, addressed to 10.5.247.10)
- if either proxy fails completely (all interfaces down), the other takes the MASTER role for both LAN and WAN traffic, which eliminates the single point of failure
- flow of traffic:
- an HTTP request from a user targets the WAN VIP -> eth2 (proxy 1) by default -> load-balanced to both web servers through eth1 (proxy 1)
- eth1 (proxy 2) is ALWAYS ready to forward load-balanced HTTP traffic to both web servers, but no traffic arrives on eth2 (proxy 2) until it claims the WAN VIP; until then, proxy 2 sits idle for HTTP traffic
- a DB request from a web server targets the LAN VIP -> eth1 (proxy 2) by default -> load-balanced to both DB servers through eth1 (proxy 2)
- eth1 (proxy 1) is ALWAYS ready to forward load-balanced traffic to both DB servers, but no DB traffic arrives on eth1 (proxy 1) until it claims the LAN VIP
- Important parameters:
- interface: the interface used to exchange VRRP protocol packets for every instance; in both cases this should be the local (LAN-side) Ethernet interface
- priority: within each VRRP instance, the node with the lower priority is BACKUP and the one with the higher priority is MASTER
- dont_track_primary: we use the local Ethernet interface to exchange VRRP information and a separate interface for health checks of the other side (the track_interface parameter). Tracking the primary interface itself can cause problems when that interface enters a sleep state without completely failing.
- virtual_ipaddress: the virtual address the MASTER takes over, together with the interface (dev) it should be assigned to
- Discussion: what happens if only eth1 on proxy 1 fails while the other interfaces keep working, and what would the solution be? Please leave your comment.
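The nginx upstream configuration mentioned in the explanation can be sketched as follows. The upstream name and backend addresses are assumptions; round-robin is nginx's default, with ip_hash or least_conn as alternatives:

```nginx
upstream web_backend {
    # ip_hash;            # or least_conn; round-robin is the default
    server 10.5.247.21;   # web1 (assumed address)
    server 10.5.247.22;   # web2 (assumed address)
}

server {
    listen 130.65.156.140:80;   # the WAN VIP; the standby can bind it thanks to ip_nonlocal_bind
    location / {
        proxy_pass http://web_backend;
    }
}
```

Because both proxies bind the VIP (step 2 makes the non-local bind legal), this same server block can sit unchanged on both nodes; only the node that currently holds the VIP receives traffic.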