High Availability with HAProxy and Keepalived: A Practical Guide


When it comes to building resilient and scalable infrastructure, two tools often come up together: HAProxy and Keepalived. While HAProxy excels at load balancing and proxying, Keepalived ensures service continuity by managing failover and redundancy. Together, they form a rock-solid foundation for highly available systems.


What is HAProxy?

HAProxy (High Availability Proxy) is an open-source load balancer and reverse proxy widely used in production environments. It sits between clients and backend servers, distributing traffic efficiently and providing features such as:

  • Layer 4 and Layer 7 load balancing (TCP/HTTP/HTTPS)
  • SSL termination
  • Health checks for backend servers
  • Sticky sessions
  • Advanced routing and ACLs
  • Logging and metrics integration

Because of its performance and flexibility, HAProxy is used by some of the largest web platforms in the world.
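
To make the Layer 7 features above concrete, here is a minimal, hypothetical snippet showing ACL-based routing, active health checks, and cookie-based sticky sessions. The names and addresses (api-servers, web-servers, 192.0.2.x) are illustrative only and are not part of the setup built later in this guide:

frontend example
    bind :80

    # Layer 7 routing: send /api requests to a dedicated pool
    acl is-api path_beg /api
    use_backend api-servers if is-api
    default_backend web-servers

backend api-servers
    balance roundrobin
    server api1 192.0.2.10:8080 check    # "check" enables active health checks

backend web-servers
    balance roundrobin
    # Sticky sessions: pin each client to one server via a cookie
    cookie SRV insert indirect nocache
    server web1 192.0.2.20:80 check cookie web1
    server web2 192.0.2.21:80 check cookie web2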


What is Keepalived?

Keepalived is an open-source tool that provides high availability and failover capabilities for Linux systems. It primarily uses VRRP (Virtual Router Redundancy Protocol) to create a floating IP address shared between servers. If the primary node goes down, Keepalived automatically promotes a standby node to take over the virtual IP.

Key features include:

  • VRRP-based failover
  • Health checks for services and nodes
  • Automatic failover with minimal downtime
  • Lightweight and easy to configure

Why Use HAProxy and Keepalived Together?

Using HAProxy alone gives you load balancing across your backend servers. But what happens if the HAProxy server itself fails? That’s where Keepalived comes in.

With HAProxy + Keepalived, you get:

  1. Load balancing and traffic management (via HAProxy).
  2. High availability of the load balancer itself (via Keepalived).
  3. A floating virtual IP that clients connect to, which always points to the active HAProxy node.
  4. Failover with minimal service disruption when a node fails.

This setup prevents a single point of failure in your load balancing layer.


Typical Architecture

Here’s a simplified setup:

Client ---> VIP:80/443 (managed by Keepalived)
                 |
     -----------------------------
     |                           |
+------------+             +------------+
| HAProxy #1 |             | HAProxy #2 |
| (MASTER)   |             | (BACKUP)   |
| 10.1.0.10  |             | 10.2.0.10  |
+------------+             +------------+
     |                           |
     |       Load Balancing      |
     -----------------------------
         |           |           |
    +---------+ +---------+ +---------+
    | Backend | | Backend | | Backend |
    | Server1 | | Server2 | | Server3 |
    +---------+ +---------+ +---------+

  • Clients connect to a virtual IP (VIP) managed by Keepalived.
  • HAProxy instances listen on this VIP and distribute traffic to backend servers.
  • If the master HAProxy node fails, Keepalived promotes the backup node, which takes over the VIP within seconds.

Practical Guide

This guide was written on a Debian 12 (bookworm) based OS running haproxy 2.8.15-1~bpo12+1 amd64 (from bookworm-backports) and keepalived 1:2.2.7-1+b2 amd64, but it should still be applicable to any other distribution using similar tooling.

Setting up Keepalived

Install

Ensure that Keepalived is installed on both the primary and backup servers. You can install it using your system's package manager:

For Debian/Ubuntu:

sudo apt install keepalived -y
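
To confirm the installed version (the exact output varies by distribution), run:

keepalived -v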

Primary server

Once installed, edit the Keepalived configuration file on the primary server. The default configuration file is usually located at /etc/keepalived/keepalived.conf. Use your preferred text editor to open the file:

# /etc/keepalived/keepalived.conf
global_defs {
    enable_script_security
}

# Mark this node as failed if HAProxy is no longer running
vrrp_script check-script {
    script "/usr/bin/pgrep haproxy"
    interval 2    # run the check every 2 seconds
    user root
}

vrrp_instance haproxy-vip-lc0-haproxy-01 {
    state MASTER
    priority 101             # higher than the backup node's 100
    interface ens18
    virtual_router_id 10     # must match on every node in this VRRP group

    advert_int 1             # send a VRRP advertisement every second

    # Unicast peering between the nodes (no multicast required)
    unicast_src_ip 10.1.0.10 dev ens18
    unicast_peer {
        10.2.0.10
    }

    authentication {
        auth_type PASS
        auth_pass vagrant
    }

    # The floating IP that clients connect to
    virtual_ipaddress {
        10.0.0.100 dev ens18
    }
    track_script {
        check-script
    }
}

Primary server: /etc/keepalived/keepalived.conf

Note that for the primary server, the state should be MASTER and the priority should be higher than on the secondary server.

Secondary Server(s)

# /etc/keepalived/keepalived.conf
global_defs {
    enable_script_security
}

vrrp_script check-script {
    script "/usr/bin/pgrep haproxy"
    interval 2
    user root
}

vrrp_instance haproxy-vip-lc0-haproxy-02 {
    state BACKUP
    priority 100
    interface ens18
    virtual_router_id 10

    advert_int 1

    unicast_src_ip 10.2.0.10 dev ens18
    unicast_peer {
        10.1.0.10
    }

    authentication {
        auth_type PASS
        auth_pass vagrant
    }

    virtual_ipaddress {
        10.0.0.100 dev ens18
    }
    track_script {
        check-script
    }
}

Secondary server: /etc/keepalived/keepalived.conf

Note that for the secondary server, the state should be BACKUP and the priority should be lower than on the primary server.


Setting up HAProxy

Installing HAProxy requires a bit of apt repository configuration to get the desired HAProxy 2.8. The Debian site provides a handy helper page with instructions for selecting your target OS, release, and HAProxy version: https://haproxy.debian.net/#distribution=Debian&release=bookworm&version=2.8

Once those steps are complete, HAProxy 2.8 can be installed as follows:

# apt-get update
# apt-get install haproxy=2.8.\*
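
If you want to double-check which version apt will pick before installing, apt-cache can show the candidate version from the backports repository:

# apt-cache policy haproxy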

Once installed, you can go ahead and edit the HAProxy config file /etc/haproxy/haproxy.cfg:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    lua-load /etc/haproxy/cors.lua

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    bind    10.1.0.10:9000
    mode    http

    stats   enable
    stats   hide-version
    stats   uri       /
    stats   refresh   30s

frontend web
    bind :80

    acl frontend-ghost hdr_sub(host) -i ghost.atathome.me

    use_backend backend-ghost if frontend-ghost

backend backend-ghost
    server lc0-ghost 10.0.0.1:80 check
    timeout server 30s

/etc/haproxy/haproxy.cfg

This is largely the out-of-the-box configuration with the addition of a sample backend; it should be applied on both the primary and secondary servers. Note that the listen stats bind address should be the node's management IP, not the VIP (so on the secondary server, use 10.2.0.10 instead of 10.1.0.10).
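
Before (re)starting HAProxy, it is a good idea to validate the file; HAProxy can check a configuration for errors without starting up:

# haproxy -c -f /etc/haproxy/haproxy.cfg

If everything is in order it reports that the configuration is valid and exits with status 0.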


Test and Verify

After configuring Keepalived on both servers, test the configuration for syntax errors:

sudo keepalived -t

If there are no errors, start Keepalived on both servers:

sudo systemctl start keepalived
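
To make sure Keepalived also comes back after a reboot, enable the unit on both servers as well:

sudo systemctl enable keepalived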

You can verify the high availability setup by checking the VIP address. On the primary server, the VIP should be active. On the backup server, it should be in a backup state. You can use the following command to check the status:

~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:e1:08:40 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.1.0.10/22 brd 10.1.0.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global ens18
       valid_lft forever preferred_lft forever

ip a output on the active server

You should see the VIP address (10.0.0.100 in this example) on the active server and not on the standby server.

Keepalived will automatically manage the failover process. If the primary server goes down or HAProxy becomes unresponsive, the VIP will migrate to the backup server.
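
You can exercise this with a simple failover drill. Given the configuration above (priority 101 on the primary, 100 on the backup, and Keepalived's default preemption behaviour), stopping HAProxy on the primary should move the VIP to the backup, and starting it again should move it back:

# On the primary node, simulate an HAProxy failure:
sudo systemctl stop haproxy

# On the backup node, the VIP (10.0.0.100) should appear within a few seconds:
ip a show ens18

# Restore HAProxy on the primary; it preempts and reclaims the VIP:
sudo systemctl start haproxy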

If the primary server has not attached the VIP, you can debug errors using the journal:

# journalctl -xefu keepalived
Sep 29 10:35:37 lc0-haproxy-01 Keepalived_vrrp[55583]: Script `check-script` now returning 0
Sep 29 10:35:37 lc0-haproxy-01 Keepalived_vrrp[55583]: VRRP_Script(check-script) succeeded
Sep 29 10:35:37 lc0-haproxy-01 Keepalived_vrrp[55583]: (haproxy-vip-lc0-haproxy-01) Entering BACKUP STATE
Sep 29 10:35:37 lc0-haproxy-01 Keepalived_vrrp[55583]: (haproxy-vip-lc0-haproxy-01) received lower priority (100) advert from 10.2.0.10 - discarding
Sep 29 10:35:38 lc0-haproxy-01 Keepalived_vrrp[55583]: (haproxy-vip-lc0-haproxy-01) received lower priority (100) advert from 10.2.0.10 - discarding
Sep 29 10:35:39 lc0-haproxy-01 Keepalived_vrrp[55583]: (haproxy-vip-lc0-haproxy-01) received lower priority (100) advert from 10.2.0.10 - discarding
Sep 29 10:35:40 lc0-haproxy-01 Keepalived_vrrp[55583]: (haproxy-vip-lc0-haproxy-01) Entering MASTER STATE

Journal logs for keepalived

When successful, you should see the message Entering MASTER STATE.
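
If failover still misbehaves, it can also help to confirm that VRRP advertisements are actually flowing between the peers. VRRP uses IP protocol 112, so a capture like the following (using the interface from this setup) should show an advertisement from the current master every second:

sudo tcpdump -i ens18 ip proto 112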


Benefits of This Setup

  • Fault tolerance – no single point of failure at the load balancer layer.
  • Scalability – distribute traffic across multiple backend servers.
  • Minimal downtime – failover happens automatically and quickly.
  • Flexibility – works with web apps, APIs, databases, and more.

Conclusion

By combining HAProxy and Keepalived, you can build a highly available, fault-tolerant, and scalable load balancing infrastructure. HAProxy ensures smart traffic distribution, while Keepalived keeps the load balancer itself redundant and resilient.

Whether you’re running a high-traffic website, an API service, or a critical enterprise application, this duo is a tried-and-true solution for uptime and reliability.