Configuring a JIRA Cluster for Plugin Development
High-level guide
- Install and configure the latest JIRA as normal, i.e. without the cluster-specific settings mentioned further below.
- Start up this instance and load it with your JIRA test plan data.
- Shut down this instance.
- Install a second node without launching it yet.
- Set the cluster settings on both instances as mentioned below:
- Point to the same database (the dbconfig.xml files on each should probably be identical) and an empty shared home.
- Create the cluster.properties file for each node.
- Copy the following directories from the local home of the first node to the shared home (some may be empty):
- Start the already-configured instance.
- Install your plugin and add the license if needed.
- Start the second instance. It will connect to the database, populate its local home, attach itself to the shared home, and then copy across your plugin.
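As a sketch, the cluster.properties file for the first node might look like this (the node id and shared home path are illustrative; jira.node.id must be unique on every node, while jira.shared.home is the same on every node):

```properties
# cluster.properties for the first node
# Unique identifier for this node -- must differ on every node
jira.node.id = node1
# Absolute path to the shared home directory -- the same on every node
jira.shared.home = /data/jira/sharedhome
```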
Each node on a separate machine
Each JIRA node (two in this example) runs on its own machine (physical or virtual), with a third machine for the shared services. In production, the shared services will most likely run on separate machines from each other.
- On the third machine, set up a shared home directory that is writable by both servers.
- Set up the two JIRA servers on different machines. These servers should:
- Have a cluster.properties file in the local JIRA home directory (see example below).
- Be configured to use the same context path.
- Be configured to use the same database. The dbconfig.xml files on each should probably be identical.
- Have full clustering enabled, by appending the required setting to the JVM_EXTRA_ARGS variable in the setenv.sh file.
- Have their Apache node name set, by appending the corresponding setting to the same variable, replacing node1 with the node name used in the Apache load balancer configuration.
- Ensure the Base URL configured in JIRA is the URL of the front end proxy / load balancer.
httpd configuration (no context path)
We need to configure httpd similarly to a standard reverse proxy, but with the addition of load balancing directives.
To run JIRA at http://MyCompanyServer/, add a configuration block similar to this at the end of the httpd configuration file:
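A sketch of such a block, assuming two nodes reachable at jira-node1 and jira-node2 (the hostnames, ports, and balancer name are all illustrative); the route values must match each node's Apache node name, and mod_proxy, mod_proxy_http, and mod_proxy_balancer must be loaded:

```apache
# Balancer definition -- hostnames and ports are illustrative
<Proxy balancer://jiracluster>
    BalancerMember http://jira-node1:8080 route=node1
    BalancerMember http://jira-node2:8080 route=node2
</Proxy>

ProxyRequests Off
# Sticky sessions keyed on the JIRA session cookie
ProxyPass        / balancer://jiracluster/ stickysession=JSESSIONID
ProxyPassReverse / http://jira-node1:8080/
ProxyPassReverse / http://jira-node2:8080/
```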
httpd configuration (with a context path)
Some slight changes to the above configuration are required if JIRA is deployed under a context path. To run JIRA at http://MyCompanyServer/jira/, add a configuration block similar to this at the end of the httpd configuration file:
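A sketch of the same block adjusted for the /jira context path (hostnames and ports are again illustrative); the context path must appear on the ProxyPass mappings:

```apache
<Proxy balancer://jiracluster>
    BalancerMember http://jira-node1:8080 route=node1
    BalancerMember http://jira-node2:8080 route=node2
</Proxy>

ProxyRequests Off
ProxyPass        /jira balancer://jiracluster/jira stickysession=JSESSIONID
ProxyPassReverse /jira http://jira-node1:8080/jira
ProxyPassReverse /jira http://jira-node2:8080/jira
```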
Single host configuration
For development purposes, the entire cluster can be run on a single machine.
- Each JIRA node must run on its own set of ports.
- Ehcache must be configured to use different ports on each node; edit the cluster.properties file for each node and set the port settings as follows:
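As an illustrative sketch (the property names ehcache.listener.port and ehcache.object.port are assumptions based on JIRA's Ehcache configuration, and the port numbers are arbitrary), the two files might differ like this:

```properties
# cluster.properties on node 1
jira.node.id = node1
jira.shared.home = /data/jira/sharedhome
ehcache.listener.port = 40011
ehcache.object.port = 40111

# cluster.properties on node 2
jira.node.id = node2
jira.shared.home = /data/jira/sharedhome
ehcache.listener.port = 40012
ehcache.object.port = 40112
```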
Validating that caches are replicated correctly across the cluster
To test whether caches are being replicated correctly between the two nodes in the cluster:
- Log in to one node in the cluster. It is easiest to go directly to the node, bypassing any load balancer.
- Go to Administration / Issue Types and edit the name of an issue type.
- Log in to the other node(s) in the cluster.
- Go to Administration / Issue Types and check that the name edited in the second step appears correctly.
If the new value is not seen on the other nodes, then the cluster is not communicating properly.
- You may need to disable your firewall, or at least allow the ports configured above to pass through. Some systems, especially recent versions of Linux, block these even on the internal localhost network.
- You need to ensure multicast is supported. On Linux you may need to turn it on, as multicast is often not enabled for the local host.
- Each server needs to be able to resolve its own host name correctly. This is not as obvious as it seems, and errors here can be difficult to detect.
Some Linux distributions will add entries to /etc/hosts such as:
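For example (the hostname is illustrative):

```
127.0.0.1   localhost
127.0.1.1   myhostname
```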
This may cause Ehcache to announce itself to other nodes in the cluster as being located at 127.0.1.1. This is not helpful and will result in cache inconsistency across the cluster. You can set the logging level for Ehcache in log4j.properties to trace to try to diagnose this sort of error.
Try removing the line referring to 127.0.1.1 from /etc/hosts, or specify the hostName property for the cacheManagerPeerListenerFactory in cluster.properties.
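The latter fix might look like this in cluster.properties (the property name ehcache.listener.hostName is an assumption based on JIRA's Ehcache configuration, and the IP address is illustrative):

```properties
# Announce this node to the cluster using its real network address,
# not the 127.0.1.1 entry from /etc/hosts
ehcache.listener.hostName = 192.168.1.20
```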