
Developing for high availability and clustering

Jira provides a deployment option with clustering via Jira Data Center. The information on this page will help you develop your plugin in a high availability, clustered environment.

The cluster topology is for multiple active servers with the following characteristics:

  • The Jira instances share a single relational database instance.
  • The Lucene index is replicated in near real time, and a copy is kept local to each instance.
  • Attachments are held in a shared location.
  • The Jira instances will keep their internal data caches consistent.
  • Multiple instances may be active at any one time.
  • Cluster-wide locks are available.
  • A load balancer must be configured to distribute requests between all active nodes in the cluster.
    • The load balancer will not direct requests to a node that is not active, and will direct all requests for a session to the same node.
  • Active servers will:
    • Process web requests.
    • Process background and recurring tasks. Scheduled tasks may be configured to run on one or all instances in the cluster.
    • In all practical ways behave exactly the same as a standalone Jira server.
  • Passive servers:
    • Do not process web requests.
    • Do not run any background or recurring tasks.
    • Are kept in a state that allows them to take over the workload as quickly as possible, i.e. all plugins loaded, Lucene index up to date, etc.

Simplified architectural diagram.


Plugin Responsibilities

Stateless plugins need not concern themselves with the fact that they are in a clustered environment. For example, a plugin that only provides a custom field type need not be modified.

jira-home

In a clustered environment, Jira supports a local home and a shared home. The following lists show what is stored in each.

Local Home
  • Logs
  • Caches (local caches, essentially the Lucene index)
  • tmp
  • Monitoring data
  • dbconfig.xml
  • cluster.properties
Shared Home
  • Data including attachments and avatars
  • Caches (shared)
  • Export
  • Import
  • Plugins

The method JiraHome.getHome() returns the shared home. A new method getLocalHome() returns the local home. Similarly JiraHome.getHomePath() returns the path of the shared home and getLocalHomePath() returns the path to the local home.

Plugins need to decide on the most appropriate place to store any data they write to the file system; in most scenarios, the shared home is the correct choice. Keep in mind that the underlying file system will generally be on a network and may have unusual semantics, particularly with regard to file locking, file renaming, and temporary files.
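
For example, a component might use JiraHome to choose a location for each kind of data. The following is a minimal sketch; the class and subdirectory names are illustrative assumptions:

package com.example.storage;

import java.io.File;

import com.atlassian.jira.config.util.JiraHome;

public class PluginStorage {

    private final JiraHome jiraHome;

    public PluginStorage(final JiraHome jiraHome) {
        this.jiraHome = jiraHome;
    }

    // Shared home: visible to every node; the right place for most plugin data
    public File sharedDataDirectory() {
        return new File(jiraHome.getHome(), "data/com.example.myplugin");
    }

    // Local home: per-node; suitable for scratch files and local caches
    public File localScratchDirectory() {
        return new File(jiraHome.getLocalHome(), "tmp/com.example.myplugin");
    }
}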

Scheduled tasks and background processes 

If a plugin wishes to use a scheduled task, it should do so using the Atlassian scheduler API, which will be paused on any passive nodes and activated when those nodes become active.

The previously available PluginScheduler is deprecated and should not be used for future work except where backward-compatible local scheduling is required. Also note that any jobs scheduled via the PluginScheduler will be run on all Jira instances in the cluster.

The Atlassian Scheduler API is available via Maven as follows:

Maven Coordinates

<dependency>
   <groupId>com.atlassian.scheduler</groupId>
   <artifactId>atlassian-scheduler-api</artifactId>
   <version>1.0</version>
</dependency>

Note: The Atlassian Scheduler API is a transitive dependency of jira-api, so it should not be necessary to depend on it directly.

A plugin needs to do two things to schedule a job:

  1. Register a JobRunner and
  2. Schedule a Job.

Job Runner

A Job Runner is an instance of a class that implements JobRunner and performs the actual work required. The scheduler will call the runJob(final JobRunnerRequest jobRunnerRequest) method, passing a JobRunnerRequest object that contains any parameters supplied when the job was scheduled. The Job Runner is registered under a key that must be unique to it, but any number of jobs can be scheduled to use the same JobRunner. It is suggested you use the plugin module key to namespace your JobRunner key.

Example

schedulerService.registerJobRunner(JobRunnerKey.of("com.example.schedulingplugin.JOBXX"), new SendFilterJob());
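
The SendFilterJob registered above is not shown on this page; a minimal sketch of what such a JobRunner might look like follows (its body is an illustrative assumption):

package com.example.schedulingplugin;

import javax.annotation.Nullable;

import com.atlassian.scheduler.JobRunner;
import com.atlassian.scheduler.JobRunnerRequest;
import com.atlassian.scheduler.JobRunnerResponse;

public class SendFilterJob implements JobRunner {

    @Nullable
    @Override
    public JobRunnerResponse runJob(final JobRunnerRequest request) {
        // Parameters are those supplied in the JobConfig when the job was scheduled
        final Long subscriptionId =
                (Long) request.getJobConfig().getParameters().get("SUBSCRIPTION_ID");
        if (subscriptionId == null) {
            return JobRunnerResponse.failed("Missing SUBSCRIPTION_ID parameter");
        }
        // Hypothetical work: send the filter subscription email here...
        return JobRunnerResponse.success();
    }
}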

JobRunner registration is transient: plugins should register their JobRunners on the "plugin enabled" event and unregister them on the "plugin disabled" event. One way to tie this to the component lifecycle is sketched below.
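
This sketch uses Spring's InitializingBean and DisposableBean callbacks, which fire when the plugin's components are enabled and disabled; the class name is an illustrative assumption, and the key and job runner are carried over from the example above:

package com.example.schedulingplugin;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

import com.atlassian.scheduler.SchedulerService;
import com.atlassian.scheduler.config.JobRunnerKey;

public class SendFilterJobRunnerRegistrar implements InitializingBean, DisposableBean {

    private static final JobRunnerKey JOB_RUNNER_KEY =
            JobRunnerKey.of("com.example.schedulingplugin.JOBXX");

    private final SchedulerService schedulerService;

    public SendFilterJobRunnerRegistrar(final SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    @Override
    public void afterPropertiesSet() {
        // Called when this component (and hence the plugin) is enabled
        schedulerService.registerJobRunner(JOB_RUNNER_KEY, new SendFilterJob());
    }

    @Override
    public void destroy() {
        // Called when the plugin is disabled; registration is transient anyway,
        // but unregistering promptly avoids running jobs against a stale plugin
        schedulerService.unregisterJobRunner(JOB_RUNNER_KEY);
    }
}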

Schedule a job

Jobs can be scheduled to run in one of two ways:

  1. Once across the whole cluster. Jobs scheduled to run across the whole cluster:

    • Are persistent. They are stored in the database and persist across restarts of Jira and remain even if the plugin is disabled or removed.
      (Persisted jobs with no JobRunner are ignored until the JobRunner is registered again.)
    • Can run on any node in the cluster, not just the node that originally scheduled the job.
    • Will only run once for each scheduled run time.
      In some cases, multiple instances of the same job might be running at the same time. This might happen when the job interval is shorter than the time required to complete the job. In such a case, a node might start a new job, although another instance of this job is still running on a different node in the cluster. To avoid this, restrict jobs to particular nodes by using cluster-wide locks.
    • Should only be scheduled by one node in the cluster.
      When a job is re-scheduled, it is removed and then added again. Under certain circumstances this may cause the job to miss a run time (if you use a cron expression) or run twice in succession (if you use a "run immediately" schedule).
      For this reason, we recommend a best-effort check for whether the job is already scheduled before trying to add it, and an initial start date a few seconds in the future in preference to one that runs immediately (a sketch of such a check follows the scheduling example below).
  2. Locally. Jobs scheduled to run locally:

    • Are not persistent. They should be scheduled each time the plugin is enabled and unscheduled when the plugin is disabled.
    • Should be scheduled on every node in the cluster, if that is appropriate for your plugin's requirements.
    • Will run on the node where they are scheduled, and not on any other node (unless also scheduled on that node).

You may pass parameters to the scheduler when scheduling a job. These parameters must be Serializable, and while you may include classes that are defined by your plugin, this may make it more difficult for your plugin to update a pre-existing schedule if the object's serialized form has changed.  It is safest to store simple data types, like Lists, Maps, Strings, and Longs.

Schedule schedule = Schedule.forCronExpression(cronExpression);
JobConfig jobConfig = JobConfig.forJobRunnerKey(JobRunnerKey.of("com.example.schedulingplugin.JOBXX"))
                .withSchedule(schedule)
                .withParameters(ImmutableMap.<String, Serializable>of("SUBSCRIPTION_ID", subscriptionId));
JobId jobId = JobId.of("com.atlassian.example.schedulingplugin:" + subId);
schedulerService.scheduleJob(jobId, jobConfig);

You can either create your own JobId or let the system dynamically create one for you. To check the status of a job or unschedule a job, you will need the JobId.
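
Putting these recommendations together, a best-effort "already scheduled" check might look like the following sketch; the run mode, interval, and initial delay shown are illustrative assumptions:

JobId jobId = JobId.of("com.atlassian.example.schedulingplugin:" + subId);
if (schedulerService.getJobDetails(jobId) == null) {
    // Prefer an initial start date a few seconds in the future over "run immediately"
    Date firstRun = new Date(System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(10));
    JobConfig jobConfig = JobConfig.forJobRunnerKey(JobRunnerKey.of("com.example.schedulingplugin.JOBXX"))
            .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
            .withSchedule(Schedule.forInterval(TimeUnit.HOURS.toMillis(1), firstRun));
    schedulerService.scheduleJob(jobId, jobConfig);
}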

A detailed example of using the SchedulerService is available on Bitbucket.

Backward compatibility

If you wish to maintain a single version of your plugin that is compatible with previous versions of Jira as well as cluster-compatible with Jira 6.3 and above, you will need to use the atlassian-scheduler-compat library.

This library does not provide the full capabilities of Atlassian Scheduler but does provide functional equivalence with the facilities in Jira prior to version 6.3. A fully worked example is available on Bitbucket.

Common Problems

  • The previous implementation of the SAL PluginScheduler created Services to represent plugin jobs. This association is broken by these changes and neither the atlassian-scheduler API nor SAL's PluginScheduler will continue to do this. Plugin developers may have previously instructed Jira administrators to consult the Services configuration page to verify the plugin installation or change the interval at which jobs run. The plugin jobs are no longer listed there, so this documentation needs to be updated to reference the Scheduler Details page, instead.
  • Scheduled jobs are no longer directly configurable through the administration UI. Plugin developers that want their jobs to be configurable will need to implement their own configuration pages to support that capability and update their documentation accordingly.
  • Scheduling jobs directly with the Quartz library is deprecated, and the ability to do so will be removed entirely in Jira 7.0.
  • Jobs that were scheduled directly with Quartz are not migrated by Jira's upgrade tasks. The plugin must make its own arrangements for migrating any jobs and triggers that it had created to the atlassian-scheduler API.

Executors

If a plugin uses an Executor to run recurring tasks, it should be sure to shut down the Executor on a NodePassivatingEvent and restart it on a NodeActivatedEvent, as sketched below.
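
A minimal sketch of that pattern follows; the event class names are taken from the Events section below, but their package and the recurring task itself are assumptions:

package com.example.executors;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.atlassian.event.api.EventListener;
// The package of these event classes is an assumption; check your Jira version
import com.atlassian.jira.cluster.NodeActivatedEvent;
import com.atlassian.jira.cluster.NodePassivatingEvent;

// Remember to register this listener with the EventPublisher (see Events below)
public class RecurringTaskManager {

    private ScheduledExecutorService executor = startExecutor();

    @EventListener
    public void onNodePassivating(final NodePassivatingEvent event) {
        // Stop background work before the node becomes passive
        executor.shutdown();
    }

    @EventListener
    public void onNodeActivated(final NodeActivatedEvent event) {
        // Restart background work once the node is active again
        executor = startExecutor();
    }

    private ScheduledExecutorService startExecutor() {
        final ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                // Hypothetical recurring task goes here
            }
        }, 1, 1, TimeUnit.MINUTES);
        return service;
    }
}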

Caches

  • All Jira caches must be considered invalid on the passive node.
  • When a node is being activated, all of Jira's internal caches will be cleared. Jira will also instruct the Atlassian Cache library to flush all the caches it manages.
  • If a plugin keeps any cached state, it should use the Atlassian Cache (v2.0 or above) library to manage its caches. Plugins should use self-loading caches wherever possible. The Atlassian Cache API is available via Maven as follows:
<dependency>
    <groupId>com.atlassian.cache</groupId>
    <artifactId>atlassian-cache-api</artifactId>
    <version>2.0.2</version>
</dependency>
  • Note: The Atlassian Cache API is a transitive dependency of jira-api, so most plugins will not need to specify it explicitly.

  • Plugins should use JiraPropertySetFactory.buildCachingPropertySet or JiraPropertySetFactory.buildCachingDefaultPropertySet, rather than creating property sets directly with PropertySetManager.getInstance("cached", ...), to get a property set that is safe to use in a cluster.
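
For example (a sketch only; the exact overloads of JiraPropertySetFactory vary between Jira versions, so treat the signature shown as an assumption):

// jiraPropertySetFactory is the injected JiraPropertySetFactory component;
// the entity-name-only overload shown here is an assumption
PropertySet properties = jiraPropertySetFactory.buildCachingDefaultPropertySet("com.example.myplugin");
properties.setString("greeting", "hello");
final String greeting = properties.getString("greeting");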

Example

// A self-loading cache using Atlassian Cache
    private final Cache<Long, CacheObject<Avatar>> avatars;

// To get or create the cache. This would normally be in your component's constructor
    avatars = cacheFactory.getCache(CachingAvatarStore.class.getName() + ".cache",
        new AvatarCacheLoader(),
        new CacheSettingsBuilder().expireAfterAccess(30, TimeUnit.MINUTES).build());

// The loader class
    private class AvatarCacheLoader implements CacheLoader<Long, CacheObject<Avatar>>
    {
        @Override
        public CacheObject<Avatar> load(@NotNull final Long avatarId)
        {
            return new CacheObject<Avatar>(CachingAvatarStore.this.delegate.getById(avatarId));
        }
    }

// To retrieve an object from the cache
    avatars.get(avatarId).getValue()

Atlassian Cache also provides a safe mechanism for building a lazy reference (CachedReference), which is in effect a cache of a single object. This can be very useful if you need to maintain a cache of all instances of something, especially if the number is known to be relatively small.

    CachedReference<AllProjectRoles> projectRoles;
   
// To get or create the reference. This would normally be in your component's constructor
    projectRoles = cacheFactory.getCachedReference(getClass(), "projectRoles",
                new AllProjectRolesLoader());

// The Supplier
    class AllProjectRolesLoader implements Supplier<AllProjectRoles>
    {
        public AllProjectRoles get()
        {
            return new AllProjectRoles(delegate.getAllProjectRoles());
        }
    }

// To retrieve the object
    return projectRoles.get();
// When you add/update/remove a project role reset the reference. This will be propagated across the cluster.
    projectRoles.reset();

Notes:

  • In a clustered environment, when you get a cache it may already have been created by another node in the cluster. When a plugin is upgraded, the objects in the cache may belong to a different classloader or a different version of the class. To protect your plugin from class incompatibility issues, you should clear the cache when your plugin is enabled (and disabled).
  • Caches should be named uniquely. In a plugin, you should probably use the plugin module key as the namespace for your caches.
  • Any object that needs to be replicated in a clustered environment must be serialisable. Additionally, the classes of these objects must be available on the application classpath, so where this applies it is recommended that plugin developers use classes from the standard Java API (e.g. java.lang.String).
    • All the keys must be serialisable and on the classpath.
    • If you are using a self loading cache and the replication of entries is by invalidation (the default for this style of cache) then the values in the cache do not need to be serialisable nor on the classpath.
  • You can use CacheSettings and its associated CacheSettingsBuilder to tailor the cache behaviour to your needs, as sketched below.
  • You should always try to set an expiry policy on the caches you create, to avoid consuming excessive amounts of memory. For most applications a time-based expiry is most appropriate; it is usually very difficult to determine a number of cache entries that suits clients at all scales.
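
For example, a time-based expiry with a bounded entry count might be configured as in the following sketch; the limits shown are illustrative assumptions:

// Time-based expiry plus an entry cap to bound memory use
    CacheSettings settings = new CacheSettingsBuilder()
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .maxEntries(500)
        .build();

    Cache<Long, CacheObject<Avatar>> avatars = cacheFactory.getCache(
        CachingAvatarStore.class.getName() + ".cache",
        new AvatarCacheLoader(),
        settings);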

Backward compatibility

If you wish to maintain a single version of your plugin that is compatible with previous versions of Jira as well as cluster-compatible with Jira 6.3 and above, you will need to use the atlassian-cache-compat library.

This library provides the full capabilities of Atlassian Cache but is a little more difficult to incorporate into your plugin. A fully worked example is available on Bitbucket.

Cluster Locks

Sometimes a plugin wants to ensure that a given operation executes on only one node of the cluster at a time. Plugins can do this by acquiring a cluster-wide lock using Atlassian's "Beehive" API. For example:

package com.example.locking;

import com.atlassian.beehive.ClusterLockService;
import java.util.concurrent.locks.Lock;
import javax.annotation.Resource;


public class MyService {
 
    // Your lock should have a globally unique name, e.g. using fully qualified class name, as here
    private static final String LOCK_NAME = MyService.class.getName() + ".myLockedTask";
    
    @Resource  // or inject via constructor, etc.
    private ClusterLockService clusterLockService;
    
    public void doSomethingThatRequiresAClusterLock() {
        final Lock lock = clusterLockService.getLockForName(LOCK_NAME);
        lock.lock();
        try {
            // Do the thing that needs a lock
        }
        finally {
            lock.unlock();
        }
    }
}

Points to note:

  • Locks are appropriate for things such as:
    • Ensuring that a given operation runs only once on the cluster at a time
    • Doing work atomically, e.g. reading from the database then writing back some updated values
  • To maintain good performance, locks should not be used for highly-contended operations.
  • Locks should have static (i.e. constant) names. Make sure not to use dynamically generated lock names, such as "lock_for_task_" + taskId, because each lock is stored in the database permanently.
  • The locks provided by Beehive are now reentrant. Code that relies on reentrancy should ideally be refactored so that it no longer needs it, but where that is not possible, it will still behave correctly.
  • If your plugin needs to work with an Atlassian product/version that does not yet support clustering, then it needs to declare a dependency upon the com.atlassian.beehive:beehive-compat library, which provides a ClusterLockServiceFactory from which you can obtain the ClusterLockService. This service will provide JVM locks in an unclustered environment and cluster-wide locks in a clustered environment. If your plugin will only ever be used with a product that exposes a ClusterLockService, you can inject that service directly without having to use the factory (see the sketch below).
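
A sketch of obtaining the lock service via the factory follows; the package and the factory method name reflect our understanding of the beehive-compat API and should be treated as assumptions:

import com.atlassian.beehive.ClusterLockService;
// Package of the compat factory is an assumption; check the beehive-compat library
import com.atlassian.beehive.compat.ClusterLockServiceFactory;

public class MyService {

    private final ClusterLockService clusterLockService;

    public MyService(final ClusterLockServiceFactory clusterLockServiceFactory) {
        // Cluster-wide locks in a clustered product, JVM-local locks otherwise
        this.clusterLockService = clusterLockServiceFactory.getClusterLockService();
    }
}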

Cluster Messages

It is possible for a plugin to send and listen for short messages that are sent between nodes in a cluster. By specifying a 'channel', a plugin can register a ClusterMessageConsumer with the ClusterMessagingService. When a message is sent to that channel from another node, all registered ClusterMessageConsumers will be called. A basic example is given below:

package com.example.messaging;

import com.atlassian.jira.cluster.ClusterMessageConsumer;
import com.atlassian.jira.cluster.ClusterMessagingService;

public class MyService {

    private static final String EXAMPLE_CHANNEL = "EXAMPLE";
    private final ClusterMessagingService clusterMessagingService;
    private final MessageConsumer messageConsumer;

    public MyService(final ClusterMessagingService clusterMessagingService) {
        this.clusterMessagingService = clusterMessagingService;
        messageConsumer = new MessageConsumer();
        clusterMessagingService.registerListener(EXAMPLE_CHANNEL, messageConsumer);
    }

    public void actionRequiringOtherNodesToBeNotified() {
        // Perform action
        clusterMessagingService.sendRemote(EXAMPLE_CHANNEL, "Action completed");
    }

    private static class MessageConsumer implements ClusterMessageConsumer {

        @Override
        public void receive(final String channel, final String message, final String senderId) {
            // Handle message
        }
    }
}

Points to note:

  • The channel name is limited to 20 alphanumeric characters.
  • The message body is limited to 200 characters.
  • Registering a message consumer does not prevent it from being garbage collected; it is the plugin's responsibility to maintain a reference to it for as long as it is needed.
  • If a node is unavailable or in the process of being restarted, it will not process any messages sent while it was offline.
  • While the id of the sender is exposed to the message consumer, it is best to avoid relying on any particular node being available.

Events

At this time, there is no infrastructure supporting the activation and passivation of nodes and no strong use case for this functionality has been identified. Generally, there should be no need for plugins to handle these events.

There are four events that will be issued in the following scenarios:

  • Activation, i.e. the plugin is running on a passive node that is being promoted to being the active node:
    • NodeActivatingEvent - issued at the start of the activation process
    • NodeActivatedEvent - issued at the end of the activation process. Plugins may listen to this if they have special handling to perform when a node becomes active.
  • Passivation, i.e. the plugin is running on the active node, and that node is being put into standby:
    • NodePassivatingEvent - issued at the start of the passivation process. Plugins should stop any background tasks they are performing when they receive this event.
    • NodePassivatedEvent - issued at the end of the passivation process.

For more information on listening for events, see Writing Jira event listeners with the atlassian-event library.

Web Requests

No web requests should reach a plugin when the node is in the passive state. It is the responsibility of the load balancer configuration and Jira to block these.

General Guideline

Don't keep state! The less state a plugin holds onto, the more scalable it will be and the more easily it will adapt to a clustered environment.

Testing your plugin

When you think your plugin is ready, you can test it in a clustered environment by using our guide: Configuring a Jira Cluster for Plugin development.

Marking your plugin as cluster-compatible for the Marketplace

When you list your first cluster-compatible plugin version in the Marketplace, modify your atlassian-plugin.xml descriptor file. This tells the Marketplace and UPM that your plugin is Data Center (or cluster) compatible. Add the following parameters inside the plugin-info section:

<param name="atlassian-data-center-status">compatible</param>
<param name="atlassian-data-center-compatible">true</param> 
  • The atlassian-data-center-status parameter indicates to Marketplace and UPM that your app has been submitted for technical review according to these Data Center requirements.
  • The atlassian-data-center-compatible parameter was previously used to indicate Data Center compatibility and should be included for backward compatibility with older UPM versions.

Here's an example of a generic plugin-info block with these parameters:

<plugin-info>
    <description>${project.description}</description>
    <version>${project.version}</version>
    <vendor name="${project.organization.name}" url="${project.organization.url}" />
    <param name="atlassian-data-center-status">compatible</param>
    <param name="atlassian-data-center-compatible">true</param>
</plugin-info>
