High Availability Linux Web Server Example

Just a quick example of setting up a Linux HA failover environment for an Apache/MySQL web server. This runs through an Ubuntu installation, although RedHat shouldn't vary too much. Four addresses are involved:

- fixed IP of server1
- fixed IP of server2
- apache site1 (virtual IP)
- apache site2 (virtual IP)

Make sure both servers have a sensible hostname, and ensure their hosts files reflect this:

/etc/hosts:

127.0.0.1	localhost
<server1 IP>	ubuntu-server-1.mydomain.com	ubuntu-server-1
<server2 IP>	ubuntu-server-2.mydomain.com	ubuntu-server-2
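Purely as an illustration - the addresses below are made-up examples from the 192.0.2.0/24 documentation range, not ones to copy - a finished hosts file might look like:

```
127.0.0.1	localhost
192.0.2.11	ubuntu-server-1.mydomain.com	ubuntu-server-1
192.0.2.12	ubuntu-server-2.mydomain.com	ubuntu-server-2
```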

Set up SSH keys so that root on each server can log into the other. No passphrase is needed on the key:

mkdir /root/.ssh
cd /root/.ssh
ssh-keygen -t rsa
chmod 600 id_rs*
scp id_rs* root@ubuntu-server-2:/root/.ssh
cat id_rsa.pub >> authorized_keys2
ssh root@ubuntu-server-2 "echo \`cat /root/.ssh/id_rsa.pub\` >> ~/.ssh/authorized_keys2"

Set up the fixed IP on each server in /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static

Restart networking and ensure connectivity is working:
/etc/init.d/networking restart

Install the HA stuff on both servers:
apt-get install heartbeat pacemaker

Install Apache on both servers and disable the startup scripts as pacemaker will be controlling the service:

apt-get install apache2
/etc/init.d/apache2 stop
chkconfig apache2 off

Add the virtual site IPs to the apache configuration on both servers (on Ubuntu the Listen directives live in /etc/apache2/ports.conf). If you have a line "Listen 80" without an IP, you will need to remove it.
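Purely as an illustration, with made-up virtual IPs from the documentation range, the Listen lines would look like:

```
Listen 192.0.2.21:80
Listen 192.0.2.22:80
```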

Add the relevant VirtualHost entries for your sites in /etc/apache2/sites-enabled on both servers:

<VirtualHost site1-virtual-IP:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www
</VirtualHost>

<VirtualHost site2-virtual-IP:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www2
</VirtualHost>

This might be a good page to start with for testing:

<p>This is the default web page for this server.</p>
<?php
$hostname = gethostname();
echo "Running on " . $hostname . " " . $_SERVER['SERVER_NAME'];
?>

Add something like this to /etc/ha.d/ha.cf on server1 and server2. The server it's on will ignore its own IP:

logfile /var/log/ha-log
logfacility local0
udpport 694
keepalive 2
warntime 15
deadtime 12
initdead 30
ucast eth0 <server1 IP>
ucast eth0 <server2 IP>
node ubuntu-server-1 ubuntu-server-2
auto_failback on
respawn hacluster /usr/lib/heartbeat/ipfail
crm respawn

On both servers, edit /etc/ha.d/authkeys and add the following. Use a strong password/hash; you can generate an MD5 digest with echo "lsdknfnlsd;skrwerkorkwprek" | openssl md5:

auth 1
1 md5 $1$shRyHw.b$hEMxuYID7wEsK1mvGq8

chmod 600 /etc/ha.d/authkeys
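The authkeys step can be scripted in one go; a sketch, assuming md5sum and awk are available, with a placeholder passphrase:

```shell
# Build the /etc/ha.d/authkeys content from a throwaway passphrase.
# 'replace-with-a-long-random-string' is a placeholder - use your own.
secret=$(printf 'replace-with-a-long-random-string' | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$secret"
```

Redirect the output to /etc/ha.d/authkeys on both servers, then chmod 600 it as above.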

Start the heartbeat service on both servers. Make sure the log exists and check for errors:
/etc/init.d/heartbeat start

Install DRBD on both servers:

apt-get install drbd8-utils build-essential psmisc
chkconfig drbd off

Create /etc/drbd.d/r0.res on both servers with something like the following. Use a sensible shared secret:

resource r0 {
	protocol C;
	syncer {
		rate 4M;
	}
	startup {
		wfc-timeout 15;
		degr-wfc-timeout 60;
	}
	net {
		cram-hmac-alg sha1;
		shared-secret "RUBBERDUCK";
	}
	on ubuntu-server-1 {
		device /dev/drbd0;
		disk /dev/sdb;
		meta-disk internal;
		address <server1 IP>:7788;
	}
	on ubuntu-server-2 {
		device /dev/drbd0;
		disk /dev/sdb;
		meta-disk internal;
		address <server2 IP>:7788;
	}
}
For further filesystems use r1.res, r2.res, etc.; drbd0 becomes drbd1, and the port number must also increment: 7788 becomes 7789. It's probably a good idea to use a DRBD filesystem for the web app files too, unless they are rarely updated, in which case rsync etc. could be an option. In this example I'm using a plain VirtualBox emulated SATA disk; in reality, using LVM might be a good idea.

Blank out the partition to be used for DRBD:
dd if=/dev/zero of=/dev/sdb bs=1024k
drbdadm create-md r0

Do this again on the second server, then start DRBD on both servers:

/etc/init.d/drbd start

Run this on the primary server to sync the data (even though it's currently empty):
drbdadm -- --overwrite-data-of-peer primary r0

It should take a little while; check progress with watch -n1 cat /proc/drbd.
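If you want just the percentage (e.g. for a script), the sync'ed field can be pulled out of /proc/drbd; a sketch, run here against a sample line in the same format (the numbers are made up):

```shell
# Pull the sync percentage out of /proc/drbd style output.
# A sample line stands in for the real file; on a live node pipe in:
#   cat /proc/drbd
sample="	[===>................] sync'ed: 21.4% (103408/131032)K"
echo "$sample" | grep -o "sync'ed: [0-9.]*%" | grep -o "[0-9.]*%"
```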

On the primary server, create an ext3 filesystem on the DRBD disk:

drbdadm primary r0
mkfs.ext3 /dev/drbd0

On both servers, install MySQL:

apt-get install mysql-server
service mysql stop
chkconfig mysql off

Mount the MySQL data onto the DRBD disk on the primary server:

mkdir /root/mysql_bak
cp -Ra /var/lib/mysql/* /root/mysql_bak/
rm -rf /var/lib/mysql/*
drbdadm primary r0
mount /dev/drbd0 /var/lib/mysql
cp -Ra /root/mysql_bak/* /var/lib/mysql/
chown mysql:mysql /var/lib/mysql
umount /var/lib/mysql
drbdadm secondary r0
ssh root@ubuntu-server-2 "rm -rf /var/lib/mysql/*"

Time to configure the resources:

crm configure edit

Replace the data with:

node $id="05a131a7-7f92-4442-be0f-73fa722f1bb4" ubuntu-server-1 \
attributes standby="off"
node $id="9d31841c-5537-4bc3-b2c2-bfa5021ef880" ubuntu-server-2
primitive apache2 lsb:apache2 \
op monitor interval="5s" \
meta target-role="Started"
primitive drbd_mysql ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="15s"
primitive fs_mysql ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/r0" directory="/var/lib/mysql" fstype="ext3" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="120"
primitive ip1 ocf:heartbeat:IPaddr2 \
params ip="" nic="eth0:0"
primitive ip1arp ocf:heartbeat:SendArp \
params ip="" nic="eth0:0"
primitive ip2 ocf:heartbeat:IPaddr2 \
params ip="" nic="eth0:0"
primitive ip2arp ocf:heartbeat:SendArp \
params ip="" nic="eth0:0"
primitive mysql ocf:heartbeat:mysql \
params binary="/usr/bin/mysqld_safe" config="/etc/mysql/my.cnf" user="mysql" group="mysql" log="/var/log/mysql.log" pid="/var/run/mysqld/mysqld.pid" datadir="/var/lib/mysql" socket="/var/run/mysqld/mysqld.sock" \
op monitor interval="30s" timeout="30s" \
op start interval="0" timeout="120" \
op stop interval="0" timeout="120"
group MySQLDB fs_mysql mysql \
meta target-role="Started"
group WebServices ip1 ip1arp ip2 ip2arp apache2 \
meta target-role="Started"
ms ms_drbd_mysql drbd_mysql \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location cli-prefer-MySQLDB MySQLDB 100: ubuntu-server-1
location cli-prefer-WebServices WebServices 100: ubuntu-server-1
colocation ip_with_arp inf: ip1 ip1arp ip2 ip2arp
colocation mysql_on_drbd inf: MySQLDB ms_drbd_mysql:Master
colocation web_with_ip inf: apache2 ip1 ip2
colocation web_with_mysql inf: apache2 MySQLDB
order arp_after_ip inf: ip1:start ip1arp:start ip2:start ip2arp:start
order fs-mysql-after-drbd inf: ms_drbd_mysql:promote fs_mysql:start
order mysql-after-fs-mysql inf: fs_mysql:start mysql:start
order web_after_ip inf: ip1arp:start ip2arp:start apache2:start
property $id="cib-bootstrap-options" \
dc-version="1.0.9-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
cluster-infrastructure="Heartbeat" \
expected-quorum-votes="1" \
stonith-enabled="false"
rsc_defaults $id="rsc-options" \
resource-stickiness="0"

Pay attention to the order statements; for example, you will want MySQL to start after the MySQL filesystem has been mounted.
Resource stickiness sets whether resources should fail back when the primary server comes back online: 0 will fail back, whereas 100 will keep them on the secondary node, ready for a manual failback.
Location sets the preferred server for a resource to run on.
Colocation sets which resources should always run on the same server.

Save the changes – this should automatically replicate to server2.

Things should now be up and running, run crm_mon to check status. It should look a little like:

Last updated: Thu Aug 25 11:38:59 2011
Stack: Heartbeat
Current DC: ubuntu-server-2 (9d31841c-5537-4bc3-b2c2-bfa5021ef880) - partition with quorum
Version: 1.0.9-da7075976b5ff0bee71074385f8fd02f296ec8a3
2 Nodes configured, 1 expected votes
3 Resources configured.

Online: [ ubuntu-server-2 ubuntu-server-1 ]

Resource Group: WebServices
ip1 (ocf::heartbeat:IPaddr2): Started ubuntu-server-1
ip1arp (ocf::heartbeat:SendArp): Started ubuntu-server-1
ip2 (ocf::heartbeat:IPaddr2): Started ubuntu-server-1
ip2arp (ocf::heartbeat:SendArp): Started ubuntu-server-1
apache2 (lsb:apache2): Started ubuntu-server-1
Resource Group: MySQLDB
fs_mysql (ocf::heartbeat:Filesystem): Started ubuntu-server-1
mysql (ocf::heartbeat:mysql): Started ubuntu-server-1
Master/Slave Set: ms_drbd_mysql
Masters: [ ubuntu-server-1 ]
Slaves: [ ubuntu-server-2 ]

You should now see your sites available at the two virtual IPs.

You should really consider using STONITH to fence a dodgy node. STONITH plugins exist for Dell DRAC, IBM RSA, etc. these days.
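As a sketch only - the plugin name and parameter names here are assumptions that vary by hardware and software version - an IPMI-based fencing resource in the crm configuration might look something like:

```
primitive st-server1 stonith:external/ipmi \
	params hostname="ubuntu-server-1" ipaddr="192.0.2.13" userid="admin" passwd="secret" interface="lan" \
	op monitor interval="60s"
location l-st-server1 st-server1 -inf: ubuntu-server-1
```

The -inf location rule stops a node running its own fencing device; you would also set stonith-enabled="true" in the cluster options.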

Some Useful commands:

crm – takes you into the crm console. It tab-completes commands, so it's quite intuitive.
crm_mon – status monitoring
crm status – the same
crm configure edit – edit config
crm configure show – show config
crm (-F) resource move WebServices ubuntu-server-2 – force single resource/resourcegroup to another server
crm resource unmove WebServices – move resources back to the primary server

crm node standby ubuntu-server-1 – take node offline, resources will failover to other server
crm node online ubuntu-server-1 – bring node online
crm resource restart/start/stop apache2 – manage a resource (you can't use the init scripts any more!)



