How do I ensure my add-on works properly in a cluster?

Clustering in Confluence is designed to work mostly transparently for add-on developers. However, there are a few things to be aware of in more advanced add-ons.

Please note: this guide is always open for expansion – please comment on the page if you think something should be added here.

Installation of add-ons in a cluster

Installation for the Confluence cluster administrator is the same as with a single instance. Uploading an add-on through the web interface will store the add-on in the PLUGINDATA table in the database, and ensure that it is loaded on all instances of the cluster.

Cluster instances must be homogeneous, so you can assume the same performance characteristics and version of Confluence running on each instance.

Testing your add-on in a cluster

It is important to test your add-on in a cluster to make sure it works properly. Setting up a Confluence cluster is as easy as setting up two new instances on the same machine with a cluster license – it shouldn't take more than ten minutes to test your add-on manually.

If you need access to a cluster license for Confluence, get a timebomb license.  

Using the Confluence Data Center Plugin Validator

You can use the Confluence Data Center Plugin Validator to check your plugin. The tool finds where plugins attempt to store non-Serializable data in an Atlassian Cache. Read more about using the Confluence Data Center Plugin Validator.

Caching in a cluster

In many simple add-ons, it is common to cache data in a field in your object – typically a ConcurrentMap or WeakHashMap. This caching will not work correctly in a cluster, because updating the data on one instance leaves the cached data on other instances stale.

The solution is to use the caching API provided with Confluence, Atlassian Cache. For example code and a description of how to cache data correctly in Confluence, see How do I cache data in a plugin? under Related Topics below.

Both keys and values of data stored in a cache in a Confluence cluster must implement Serializable.
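
As an illustration, below is a minimal sketch of cluster-safe caching, assuming the Atlassian Cache 2 API (available from Confluence 5.5). The service class, cache name and countPagesInSpace() helper are hypothetical, and the CacheManager is assumed to be injected into your component:

import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheLoader;
import com.atlassian.cache.CacheManager;

public class SpaceStatsService {

    private final Cache<String, Integer> pageCountCache;

    public SpaceStatsService(CacheManager cacheManager) {
        // Namespace the cache with your class name to avoid collisions with other plugins.
        // Both the key (String) and value (Integer) types implement Serializable,
        // as required for caches in a Confluence cluster.
        this.pageCountCache = cacheManager.getCache(
                SpaceStatsService.class.getName() + ".pageCounts",
                new CacheLoader<String, Integer>() {
                    @Override
                    public Integer load(String spaceKey) {
                        return countPagesInSpace(spaceKey);
                    }
                });
    }

    public int getPageCount(String spaceKey) {
        // On a cache miss, get() computes the value through the CacheLoader;
        // Atlassian Cache takes care of propagating invalidation across the cluster.
        return pageCountCache.get(spaceKey);
    }

    private int countPagesInSpace(String spaceKey) {
        // Hypothetical expensive lookup, e.g. a database query
        return 0;
    }
}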

Scheduled tasks

Without any intervention, scheduled tasks will execute independently on each Confluence instance in a cluster. In some circumstances, this is desirable behaviour. In other situations, you will need to use cluster-wide locking to ensure that jobs are only executed once per cluster.

The easiest way to do this is to use the perClusterJob attribute on your job module declaration, as documented on the Job Module page.
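
For illustration, a hypothetical job module declaration using this attribute might look like the following (the key, name and class are placeholders; see the Job Module page for the full syntax):

    <job key="mySyncJob" name="My Synchronisation Job"
         class="com.example.confluence.jobs.MySyncJob"
         perClusterJob="true"/>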

In some cases you may need to implement locking manually to ensure the proper execution of scheduled tasks on different instances. See the locking section below for more information on this.

Cluster-wide locks

The locking primitives provided with Java (java.util.concurrent.Lock, synchronized, etc.) will not properly ensure serialised access to data in a cluster. Instead, you need to use the cluster-wide lock that is provided through the Beehive ClusterLockService API.

Confluence 5.5 onwards

Below is an example of using a cluster-wide lock via ClusterLockService.getLockForName() in Confluence 5.5 and later:

// clusterLockService is an injected com.atlassian.beehive.ClusterLockService (see below)
ClusterLock lock = clusterLockService.getLockForName(getClass().getName() + ".taskExecutionLock");
if (lock.tryLock()) {
    try {
        log.info("Acquired lock to execute task");
        executeTask();
    }
    finally {
        lock.unlock();
    }
}
else {
    log.info("Task is running on another instance");
}
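
To obtain the ClusterLockService in your plugin, import it as an OSGi component in atlassian-plugin.xml. A minimal sketch, assuming the standard Beehive interface name:

    <component-import key="clusterLockService"
                      interface="com.atlassian.beehive.ClusterLockService"/>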

Backward compatibility

If your add-on must also support versions of Confluence prior to 5.5, Beehive provides a compatibility library, beehive-compat, which supports the same API across older versions of Confluence.
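
If you build with Maven, the dependency declaration might look roughly like the following; the coordinates are an assumption, so confirm them (and choose a version) from the Beehive documentation:

    <dependency>
        <groupId>com.atlassian.beehive</groupId>
        <artifactId>beehive-compat</artifactId>
        <version><!-- match the version documented for your Confluence range --></version>
    </dependency>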

Confluence 5.4 and earlier

Below is an example of using a cluster-wide lock via Confluence's ClusterManager API under Confluence 5.4 and earlier:

// clusterManager is an injected Confluence ClusterManager
ClusteredLock lock = clusterManager.getClusteredLock(getClass().getName() + ".taskExecutionLock");
if (lock.tryLock()) {
    try {
        log.info("Acquired lock to execute task");
        executeTask();
    }
    finally {
        lock.unlock();
    }
}
else {
    log.info("Task is running on another instance");
}

Deprecated

The ClusterManager locking API was deprecated in Confluence 5.6, and should not be relied on in future versions of Confluence.

Event handling

By default, Confluence events are only published on the instance on which they occur. This is normally desirable behaviour for add-ons, which can rely on this to respond only once to a particular event in a cluster. It also ensures that the Hibernate-backed objects often included in an event will still be attached to the database session when your plugin code interacts with them.

If your plugin needs to publish events to other nodes in the cluster, we recommend you do the following:

  1. Ensure the event extends the ConfluenceEvent class and implements the ClusterEvent interface (see the sketch after the listener example below).
  2. Listen for the ClusterEventWrapper event and perform an instanceof check on wrapper.getEvent() to receive the event on remote nodes.
Example clustered event listener
public class MyClusterEventListener {
    @EventListener
    public void handleLocalEvent(MyClusterEvent event) {
        // Handle event originating from local node
    }

    @EventListener
    public void handleRemoteEvent(ClusterEventWrapper wrapper) {
        Event event = wrapper.getEvent();
        if (event instanceof MyClusterEvent) {
            // Handle event originating from remote node
        }
    }
}
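
For completeness, here is a minimal sketch of what the event class itself might look like. MyClusterEvent and its pageId field are hypothetical, and the event deliberately carries an ID rather than a Hibernate-backed object, in line with the advice below:

// ConfluenceEvent and ClusterEvent are part of Confluence's event API (imports omitted)
public class MyClusterEvent extends ConfluenceEvent implements ClusterEvent {
    // A plain ID is Serializable, so the event can be republished safely across the cluster
    private final long pageId;

    public MyClusterEvent(Object source, long pageId) {
        super(source);
        this.pageId = pageId;
    }

    public long getPageId() {
        return pageId;
    }
}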

Like clustered cache data, events which are republished across a cluster can only contain fields whose types implement Serializable, or fields marked transient. In some cases, it may be preferable to create a separate event class for cluster events which carries object IDs rather than Hibernate-backed objects. Other instances can then retrieve the data from the database themselves when processing the event.

Confluence will only publish cluster events when the current transaction is committed and complete. This is to ensure that any data you store in the database will be available to other instances in the cluster when the event is received and processed.

Home Directory

In a clustered environment, Confluence has both a local home and a shared home. The following lists show what is stored in each.

Local home:
  • Logs
  • Lucene index
  • temp
  • confluence.cfg.xml

Shared home:
  • Add-ons
  • Data, including attachments and avatars
  • Backup/restore files
  • temp

The now-deprecated method BootstrapManager.getConfluenceHome() returns the shared home. Two newer methods, getSharedHome() and getLocalHome(), return the shared home and the local home respectively.

Add-ons will need to decide the most appropriate place to store any data they place on the file system; in most scenarios, the shared home is the correct choice.

In a standalone environment, BootstrapManager.getConfluenceHome() returns the local home, whereas getSharedHome() returns a "shared-home" directory inside the local home directory.
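
As a minimal sketch, assuming a BootstrapManager is injected into your component and using a hypothetical plugin data directory:

File sharedHome = bootstrapManager.getSharedHome(); // the same directory on every node
File localHome = bootstrapManager.getLocalHome();   // specific to this node
// Store plugin data in the shared home so every node sees the same files
File pluginDataDir = new File(sharedHome, "my-plugin-data");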

Marking your add-on as cluster compatible for the Marketplace

When you list your first cluster-compatible add-on version on the Marketplace, modify your atlassian-plugin.xml descriptor file to tell the Marketplace and UPM that your add-on is cluster compatible. Add the following parameter inside the plugin-info section:

		<param name="atlassian-data-center-compatible">true</param>

Here's an example of a generic plugin-info block with this param:

    <plugin-info>
        <description>${project.description}</description>
        <version>${project.version}</version>
        <vendor name="${project.organization.name}" url="${project.organization.url}" />
        <param name="atlassian-data-center-compatible">true</param>
    </plugin-info>

Important note: plugins should not cache licenses or license states, as this prevents license changes from being propagated correctly to all nodes in the cluster. UPM will handle any caching and performance optimisation.

RELATED TOPICS

How do I cache data in a plugin?

Confluence Data Center Plugin Validator

Technical Overview of Clustering in Confluence

Confluence Clustering Overview
