Gluster active/passive cluster

Gluster is a distributed file system that offers some management benefits over block-level replication systems like DRBD. By design Gluster runs in an active/active cluster configuration; however, for applications where millisecond-precision data replication is essential, an active/passive configuration is preferable. Here is how to build one (based on SLES 11 with the HA Extension):

Usual SLES HA cluster creation:
create the cluster key: corosync-keygen
add the nodes to corosync.conf
ensure all nodes are in the hosts file
chkconfig openais on
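The steps above might look like this in practice. This is a sketch, not a verbatim recipe: it assumes two nodes named server1 and server2, root SSH access between them, the SLES 11 default corosync paths, and example addresses from the 192.0.2.0/24 documentation range.

```shell
# Generate the cluster authentication key (reads entropy from /dev/random,
# so this can take a while on an idle machine)
corosync-keygen

# Copy the key to the second node (assumption: root SSH between nodes)
scp /etc/corosync/authkey server2:/etc/corosync/authkey

# /etc/corosync/corosync.conf must carry a bindnetaddr matching your
# cluster network; edit it on both nodes accordingly.

# Make sure both nodes resolve each other without relying on DNS
# (example addresses; substitute your own)
echo "192.0.2.1 server1" >> /etc/hosts
echo "192.0.2.2 server2" >> /etc/hosts

# Start the cluster stack at boot
chkconfig openais on
```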

vi /etc/sysctl.conf
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1

sysctl -p

Install the RPMs from the Gluster community repository:
rpm -Uvh glusterfs-3.4.2-1.x86_64.rpm glusterfs-cli-3.4.2-1.x86_64.rpm glusterfs-fuse-3.4.2-1.x86_64.rpm glusterfs-libs-3.4.2-1.x86_64.rpm glusterfs-server-3.4.2-1.x86_64.rpm glusterfs-resource-agents-3.4.2-1.x86_64.rpm

Disable Gluster at boot, as it will be managed by the cluster:
chkconfig glusterfsd off
chkconfig glusterd off

Add a disk:
fdisk /dev/sdb
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
vi /etc/fstab
/dev/sdb1 /data/brick1 xfs defaults 1 2
mount -a && mount
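If the disk is empty and dedicated to Gluster, the interactive fdisk step can also be scripted; a sketch using parted instead (assumes /dev/sdb has no existing data):

```shell
# Create a single partition spanning the whole disk, non-interactively
parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%

# XFS with 512-byte inodes, as recommended for Gluster bricks
mkfs.xfs -i size=512 /dev/sdb1
```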

Create the gluster relationship:
On server1:
gluster peer probe server2
On server2:
gluster peer probe server1
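Once both probes succeed, the peers should show as connected. A quick check (exact output wording varies between Gluster versions):

```shell
# On either node: list trusted pool members and their connection state
gluster peer status

# The other node should be listed with
# "State: Peer in Cluster (Connected)"
```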

Create the gluster volume:
mkdir /data/brick1/volume1
gluster volume create volume1 replica 2 server1:/data/brick1/volume1 server2:/data/brick1/volume1
gluster volume start volume1

Verify with:
gluster volume status volume1

Make a local mount directory if required:
mkdir /opt/volume1

Add your Gluster VIP to /etc/sysconfig/network/ifcfg-lo:

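A minimal sketch of what that loopback alias might look like on SLES. The address 192.0.2.10/32 is a placeholder, not a value from this setup; substitute your actual Gluster VIP:

```shell
# /etc/sysconfig/network/ifcfg-lo -- append an alias for the Gluster VIP
# (hypothetical address; the cluster's IPaddr2 resource holds the live VIP,
#  the lo alias lets the passive node accept traffic for it)
IPADDR_gluster='192.0.2.10/32'
LABEL_gluster='gluster'
```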
Because this will be controlled by the cluster and the glusterfs ocf agent, there is no need for volume files.

Create the cluster config in OpenAIS:

node server1
node server2
primitive p_gluster ocf:glusterfs:glusterd \
op monitor interval="10s" timeout="10s" \
meta migration-threshold="10"
primitive p_volume1 ocf:heartbeat:Filesystem \
params fstype="glusterfs" device="localhost:/volume1" directory="/opt/volume1" \
op monitor interval="10s" timeout="10s"
primitive p_vip_gluster ocf:heartbeat:IPaddr2 \
params ip="" cidr_netmask="24" broadcast="" \
op monitor interval="2s" timeout="2s"
clone cl_gluster p_gluster \
meta target-role="Started"
clone cl_volume1 p_volume1 \
meta target-role="Started"
location l_gluster_node1 p_vip_gluster \
rule $id="gluster_node1" 100: #uname eq server1
order o_storage inf: cl_gluster cl_volume1
property $id="cib-bootstrap-options" \
dc-version="1.1.10-65bb87e" \
cluster-infrastructure="classic openais (with plugin)" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
expected-quorum-votes="2"

That should be enough to create the gluster cluster!
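One way to load a configuration like this is via the crm shell; a sketch, assuming the config above is saved to a file named gluster.crm (a hypothetical filename):

```shell
# Load the configuration into the CIB (crmsh)
crm configure load update gluster.crm

# One-shot view of cluster and resource state
crm_mon -1
```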

The most reliable way of mounting the Gluster volumes from clients is via NFS (Gluster's built-in NFS server speaks NFSv3 over TCP). Add something similar to this to /etc/fstab:

VIP-LB:/volume1 /opt/volume1 nfs mountproto=tcp,bg,intr,soft,defaults,_netdev 0 0
