How do I ensure my add-on works properly in a cluster?
Clustering in Confluence is designed to work mostly transparently for add-on developers. However, there are a few things to be aware of in more advanced add-ons.
Please note: this guide is always open for expansion – please comment on the page if you think something should be added here.
Installation of add-on in a cluster
Installation for the Confluence cluster administrator is the same as with a single instance. Uploading an add-on through the web interface stores the add-on in the PLUGINDATA table in the database and ensures that it is loaded on every instance in the cluster.
Cluster instances must be homogeneous, so you can assume the same performance characteristics and version of Confluence running on each instance.
Testing your add-on in a cluster
It is important to test your add-on in a cluster to make sure it works properly. Setting up a Confluence cluster is as easy as setting up two new instances on the same machine with a cluster license – it shouldn't take more than ten minutes to test your add-on manually.
If you need access to a cluster license for Confluence, get a timebomb license.
Using the Confluence Data Center Plugin Validator
You can use the Confluence Data Center Plugin Validator to check your plugin. The tool finds where plugins are attempting to store non-Serializable data in an Atlassian cache. Read more about using the Confluence Data Center Plugin Validator.
Caching in a cluster
In many simple add-ons, it is common to cache data in a field of an object – typically a WeakHashMap. This caching will not work correctly in a cluster, because updating the data on one instance leaves the cached data on the other instances stale.

The solution is to use the caching API provided with Confluence, Atlassian Cache. For example code and a description of how to cache data correctly in Confluence, see the Atlassian Cache documentation. Both the keys and the values of data stored in a cache in a Confluence cluster must implement Serializable.
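A clustered cache replicates entries between nodes by serializing them, which is why Serializable is required. The following self-contained sketch (PageViewCount is a hypothetical cache value class, not part of any Confluence API) round-trips a value through Java serialization, mimicking what the cache does when it ships an entry to another node:

```java
import java.io.*;

public class SerializableCheck {
    // A hypothetical cache value. Both cache keys and values must
    // implement java.io.Serializable to survive cluster replication.
    public static class PageViewCount implements Serializable {
        private static final long serialVersionUID = 1L;
        public final long pageId;
        public final int views;

        public PageViewCount(long pageId, int views) {
            this.pageId = pageId;
            this.views = views;
        }
    }

    // Round-trips an object through Java serialization, as a clustered
    // cache effectively does when sending an entry to another node.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T roundTrip(T value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value);
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            // A NotSerializableException here means the class is not cluster-safe.
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        PageViewCount copy = roundTrip(new PageViewCount(42L, 7));
        System.out.println(copy.pageId + " " + copy.views); // prints "42 7"
    }
}
```

If a field of your cached class is not itself Serializable, the round-trip throws a NotSerializableException – the same failure the Data Center Plugin Validator flags.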
Scheduled tasks in a cluster
Without any intervention, scheduled tasks will execute independently on each Confluence instance in a cluster. In some circumstances, this is desirable behaviour. In other situations, you will need to use cluster-wide locking to ensure that jobs are only executed once per cluster.
The easiest way to do this is to use the perClusterJob attribute on your job module declaration, as documented on the Job Module page.
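For illustration, a job module declaration using this attribute might look like the following (the module key, name, and class are placeholders, not real values):

```xml
<job key="myClusterJob"
     name="My Cluster-wide Job"
     class="com.example.plugin.MyJob"
     perClusterJob="true"/>
```

With perClusterJob set to true, the job runs on only one node per trigger instead of on every node.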
In some cases you may need to implement locking manually to ensure the proper execution of scheduled tasks on different instances. See the locking section below for more information on this.
Locking
The locking primitives provided with Java (synchronized, etc.) will not properly ensure serialised access to data in a cluster. Instead, you need to use the cluster-wide locks provided through the Beehive ClusterLockService API.
Confluence 5.5 onwards
Below is an example of using a cluster-wide lock via ClusterLockService.getLockForName() under Confluence 5.5 and later:
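A minimal sketch of this pattern, assuming the ClusterLockService is injected into your component by the plugin system (the component name and lock name are illustrative):

```java
import com.atlassian.beehive.ClusterLock;
import com.atlassian.beehive.ClusterLockService;

public class ReportJob {
    private final ClusterLockService clusterLockService;

    public ReportJob(ClusterLockService clusterLockService) {
        this.clusterLockService = clusterLockService;
    }

    public void execute() {
        // ClusterLock implements java.util.concurrent.locks.Lock,
        // but the lock is held cluster-wide, not per JVM.
        ClusterLock lock = clusterLockService.getLockForName("com.example.plugin.report-lock");
        if (lock.tryLock()) { // skip this run if another node already holds the lock
            try {
                // ... perform the work that must happen once per cluster ...
            } finally {
                lock.unlock();
            }
        }
    }
}
```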
Confluence 5.4 and earlier
Below is an example of using a cluster-wide lock via Confluence's ClusterManager API under Confluence 5.4 and earlier.
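A sketch of the older pattern, assuming ClusterManager.getClusteredLock() returns a lock with lock()/unlock() semantics (the component name and lock name are illustrative):

```java
import com.atlassian.confluence.cluster.ClusterManager;
import com.atlassian.confluence.cluster.ClusteredLock;

public class LegacyReportJob {
    private final ClusterManager clusterManager;

    public LegacyReportJob(ClusterManager clusterManager) {
        this.clusterManager = clusterManager;
    }

    public void execute() {
        // Acquires a lock that is exclusive across all nodes in the cluster.
        ClusteredLock lock = clusterManager.getClusteredLock("com.example.plugin.report-lock");
        lock.lock();
        try {
            // ... perform the work that must happen once per cluster ...
        } finally {
            lock.unlock();
        }
    }
}
```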
Events in a cluster
By default, Confluence events are only propagated on the instance on which they occur. This is normally desirable behaviour for add-ons, which can rely on this to only respond once to a particular event in a cluster. It also ensures that the Hibernate-backed objects which are often included in an event will still be attached to the database session when interacting with them in your plugin code.
If your plugin needs to publish events to other nodes in the cluster, we recommend you do the following:
- Ensure the event extends the ConfluenceEvent class and implements the ClusterEvent interface
- Listen for the ClusterEventWrapper event and call wrapper.getEvent() to receive the original event on remote nodes
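The two steps above can be sketched as follows. The event class, listener, and field names are illustrative, and the package names are from a typical Confluence 5.x install – verify them against your Confluence version:

```java
import com.atlassian.confluence.event.events.ConfluenceEvent;
import com.atlassian.confluence.event.events.cluster.ClusterEvent;
import com.atlassian.confluence.event.events.cluster.ClusterEventWrapper;
import com.atlassian.event.api.EventListener;

// Step 1: a cluster-propagated event extends ConfluenceEvent and implements
// the ClusterEvent marker interface. It carries an ID rather than a
// Hibernate-backed object (see the note on Serializable fields below).
public class ReportGeneratedEvent extends ConfluenceEvent implements ClusterEvent {
    private final long reportId;

    public ReportGeneratedEvent(Object source, long reportId) {
        super(source);
        this.reportId = reportId;
    }

    public long getReportId() {
        return reportId;
    }
}

// Step 2: on remote nodes the event arrives wrapped; unwrap it in a listener.
public class ReportListener {
    @EventListener
    public void onClusterEvent(ClusterEventWrapper wrapper) {
        Object event = wrapper.getEvent();
        if (event instanceof ReportGeneratedEvent) {
            long id = ((ReportGeneratedEvent) event).getReportId();
            // ... load the report from the database by id and process it ...
        }
    }
}
```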
Like clustered cache data, events which are republished across a cluster can only contain fields which implement Serializable or are marked transient. In some cases, it may be preferable to create a separate event class for cluster events which includes object IDs rather than Hibernate-backed objects. Other instances can then retrieve the data from the database themselves when processing the event.
Confluence will only publish cluster events when the current transaction is committed and complete. This is to ensure that any data you store in the database will be available to other instances in the cluster when the event is received and processed.
Local home and shared home
In a clustered environment, Confluence has both a local home and a shared home. The following table shows what is stored in each.
||Local Home||Shared Home||
The now deprecated method will return the shared home. Two new methods, getSharedHome() and getLocalHome(), return the shared home and the local home respectively.
Add-ons will need to decide the most appropriate place to store any data they place on the file system; however, the shared home is the correct place in most scenarios.
In a standalone environment, the deprecated method will return the local home, whereas getSharedHome() will return a "shared-home" directory inside the local home directory.
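A sketch of how an add-on might pick the right home directory, assuming these methods are exposed on the injected BootstrapManager (the component name and subdirectory are illustrative):

```java
import java.io.File;
import com.atlassian.confluence.setup.BootstrapManager;

public class ExportDirectoryProvider {
    private final BootstrapManager bootstrapManager;

    public ExportDirectoryProvider(BootstrapManager bootstrapManager) {
        this.bootstrapManager = bootstrapManager;
    }

    public File exportDirectory() {
        // Data every node must see belongs in the shared home;
        // use getLocalHome() only for node-specific scratch files.
        File shared = bootstrapManager.getSharedHome();
        return new File(shared, "my-plugin-exports");
    }
}
```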
Marking your add-on as cluster compatible for the Marketplace
When you list your first cluster-compatible add-on version on the Marketplace, modify your atlassian-plugin.xml descriptor file. This tells the Marketplace and UPM that your add-on is cluster compatible. Add the compatibility parameter inside the plugin-info element of your descriptor.
Here's an example of a generic plugin-info block with this param:
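A sketch of such a block, assuming the compatibility parameter name is atlassian-data-center-compatible (verify the exact name against the current Marketplace documentation; the description and version are placeholders):

```xml
<plugin-info>
    <description>Example cluster-compatible plugin</description>
    <version>1.0</version>
    <param name="atlassian-data-center-compatible">true</param>
</plugin-info>
```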
Important note: plugins should not cache licenses or license states, as this will prevent license changes from being correctly propagated to all nodes in the cluster. UPM handles any caching and performance improvements.