Clustering Openfire - Unicast

This document details how to cluster Openfire using the clustering plugin and Oracle Coherence, using unicast (well-known addresses) rather than multicast.

Pre-Installation

It presupposes the following:

  • Two or more Openfire installations running version 3.8.2.

  • Running on Red Hat Enterprise Linux or CentOS (if on Debian/Ubuntu or Windows, update the paths accordingly).

  • All Openfire systems are configured to point to the SAME database.

  • All Openfire systems are configured with the SAME XMPP domain.

  • The Coherence 3.7.1 library has been downloaded from Oracle’s website. (From what I understand, Oracle Coherence is free for up to 10000 Openfire nodes, after which it becomes chargeable.)

In this example we will assume we have two Openfire servers, *host1.example.com* and *host2.example.com*.

The XMPP domain will be cluster.example.com.
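A quick way to confirm that every install really shares the same database and XMPP domain is to query Openfire's ofProperty table from each node. This is a sketch assuming a MySQL backend with a database named openfire; adjust the client, credentials and database name for your setup:

$ mysql -u openfire -p openfire -e "SELECT propValue FROM ofProperty WHERE name = 'xmpp.domain';"

Every node should return the same value, cluster.example.com in this example.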

DNS Setup

Before configuring Openfire, the following network setup should be in place for a standard DNS round-robin setup. This type of setup offers load balancing and failover:

  • A unique static IP address for each Openfire server, i.e. 192.168.3.66 and 192.168.3.67 for the purposes of this example.
  • A DNS A record for each Openfire cluster node, i.e. host1.example.com and host2.example.com.
  • For a redundant environment, two DNS A records with the same name (i.e. cluster.example.com) pointing to *host1* and *host2* (a quick way to verify the records is shown below).
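Once the records are in place you can sanity-check them from any machine on the network, for example with dig (any DNS lookup tool will do):

$ dig +short host1.example.com      # should return 192.168.3.66
$ dig +short host2.example.com      # should return 192.168.3.67
$ dig +short cluster.example.com    # should return both IPs (round robin)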

Building/Adding the clustering plugin

You will need to download the Coherence 3.7.1 library from Oracle’s website and the Openfire source code. In the Coherence library you will find two jars, coherence-work.jar and coherence.jar.

Drop both into the Openfire clustering plugin lib folder:

Openfire/src/plugins/clustering/lib

Build the clustering plugin, which will give you a built clustering.jar plugin, as sketched below.
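As a rough sketch, assuming the Openfire source tree lives in ~/Openfire (a path assumed for illustration) and the standard Ant build is used:

$ cp coherence.jar coherence-work.jar ~/Openfire/src/plugins/clustering/lib/
$ cd ~/Openfire/build
$ ant plugin -Dplugin=clustering    # writes clustering.jar into the build output directory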

Once you have your two or more Openfire servers set up, pointing to the same database and with the same XMPP domain, install the clustering.jar plugin on each of them.
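The plugin can be uploaded through the admin console (Plugins > Upload Plugin) or, as a minimal alternative, dropped into each node's plugins directory, from which Openfire deploys it automatically:

$ cp clustering.jar /opt/openfire/plugins/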

Coherence XML Files

Before the clustering plugin will work, we need to specify some extra parameters for Oracle Coherence.

Before making the changes below, stop the Openfire service:

$ service openfire stop

On the primary server, copy the two XML files coherence-cache-config.xml and tangosol-coherence-override.xml into /opt/openfire/lib/. They are found in:

Openfire/src/plugins/clustering/include
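For example, assuming the source tree is in ~/Openfire (path assumed for illustration):

$ cp ~/Openfire/src/plugins/clustering/include/coherence-cache-config.xml /opt/openfire/lib/
$ cp ~/Openfire/src/plugins/clustering/include/tangosol-coherence-override.xml /opt/openfire/lib/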

tangosol-coherence-override.xml

Edit the file /opt/openfire/lib/tangosol-coherence-override.xml and comment out or delete the <multicast-listener> block. Add the <unicast-listener> block in its place, inside the <cluster-config> element. In the block below, wka1 is the primary host, wka2 is the secondary host and wka3 is an optional tertiary host. Additional hosts can be added in the same manner.

Configure it so that wka1 has the IP address of the primary Openfire node and wka2 the IP address of the secondary Openfire node. Ensure each socket-address has a unique, incrementing id.

e.g.

<unicast-listener>
    <!-- All cluster members are listed inside a single well-known-addresses element -->
    <well-known-addresses>
        <socket-address id="1">
            <address system-property="tangosol.coherence.wka1">192.168.3.66</address>
            <port system-property="tangosol.coherence.wka1.port">8088</port>
        </socket-address>
        <socket-address id="2">
            <address system-property="tangosol.coherence.wka2">192.168.3.67</address>
            <port system-property="tangosol.coherence.wka2.port">8088</port>
        </socket-address>
        <socket-address id="3">
            <address system-property="tangosol.coherence.wka3">192.168.3.68</address>
            <port system-property="tangosol.coherence.wka3.port">8088</port>
        </socket-address>
    </well-known-addresses>
</unicast-listener>

Copy this file, along with coherence-cache-config.xml, onto every Openfire node and drop it into:

/opt/openfire/lib/

You should now have the two files coherence-cache-config.xml and your configured tangosol-coherence-override.xml in place in /opt/openfire/lib on every node.
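As an example, both files can be pushed from the primary to a second node with scp (remote user assumed to be root for illustration):

$ scp /opt/openfire/lib/coherence-cache-config.xml /opt/openfire/lib/tangosol-coherence-override.xml root@host2.example.com:/opt/openfire/lib/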

Openfire Java Arguments

Assuming your Java arguments are specified in /etc/sysconfig/openfire, you need to add the following arguments to OPENFIRE_OPTS, where tangosol.coherence.localhost is the network IP address of the machine itself, wka1 the IP of the primary node and wka2 the IP of the secondary node. Ensure machineid is a value that is completely unique across the cluster.

Assuming host1 and host2 have IPs of 192.168.3.66 and 192.168.3.67 respectively, the entry on host1 would be:

-Dtangosol.coherence.edition=EE -Dtangosol.coherence.mode=prod -Dtangosol.coherence.localhost=192.168.3.66 -Dtangosol.coherence.machineid=10 -Dtangosol.coherence.wka1=192.168.3.66 -Dtangosol.coherence.wka2=192.168.3.67
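In /etc/sysconfig/openfire this would look something like the following sketch (keep any options already present in OPENFIRE_OPTS):

OPENFIRE_OPTS="-Dtangosol.coherence.edition=EE -Dtangosol.coherence.mode=prod -Dtangosol.coherence.localhost=192.168.3.66 -Dtangosol.coherence.machineid=10 -Dtangosol.coherence.wka1=192.168.3.66 -Dtangosol.coherence.wka2=192.168.3.67"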

On additional cluster nodes add the same entry, but change tangosol.coherence.localhost to the IP of that node and give it a unique machineid, e.g. on host2:

-Dtangosol.coherence.edition=EE -Dtangosol.coherence.mode=prod -Dtangosol.coherence.localhost=192.168.3.67 -Dtangosol.coherence.machineid=11 -Dtangosol.coherence.wka1=192.168.3.66 -Dtangosol.coherence.wka2=192.168.3.67

Once the configuration changes are complete, start the Openfire service:

$ service openfire start
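Before enabling clustering it is worth checking that Coherence started cleanly on each node. As a rough check (log location assumed from a default RPM install), look for Coherence cluster messages in the Openfire logs:

$ grep -i coherence /opt/openfire/logs/info.log | tail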

In the Openfire Admin Console you’ll need to enable clustering under **Server > Server Manager > Clustering**. Check Enabled.

With all that done you should now be clustered!

Where or how do you add the additional JVM parameters if you are not running Openfire through an IDE like Eclipse?

Best,

Mark

Hey, thanks for your response. I added the above JVM parameters to my properties, but I'm unsure if it worked, as I did not find any updates in the files like you mentioned. I also did not see any difference in my clustering behavior.

Cool, I see. Yes, I do see the cluster in my overview, but I later found out that multicast packets are allowed on the network, so it would have appeared there anyway. It's hard to know if the above setup is working.

Hi,

please use a forum for questions.

JVM parameters (like -Xmx512m -Dfoo=bar) are specified in openfired.vmoptions or in the Linux start script.

Best regards

I got this working on Amazon EC2 using the method above! Many thanks