Jira provides a deployment option with clustering via Jira Data Center. The information on this page will help you develop your plugin in a high availability, clustered environment.
The cluster topology is for multiple active servers with the following characteristics:
Simplified architectural diagram.
Stateless plugins need not concern themselves with the fact that they are in a clustered environment. For example, a plugin that only provides a custom field type need not be modified.
In a clustered environment Jira supports a local home and a shared home. The following table shows what is stored in each.
Local Home | Shared Home
--- | ---
Logs, caches, and search indexes | Attachments, avatars, plugins, and import/export files
The method JiraHome.getHome() returns the shared home, and a new method getLocalHome() returns the local home. Similarly, JiraHome.getHomePath() returns the path of the shared home and getLocalHomePath() returns the path to the local home.
Plugins need to decide which is the most appropriate place to store any data they place on the file system, however the shared home should be the correct place in most scenarios. Keep in mind that the underlying file system will generally be on a network and may have unusual semantics, particularly with regard to file locking, file renaming, and temporary files.
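As a sketch of this guidance, the hypothetical component below stores plugin files under the shared home so that every node sees the same data; the class name and the `data/com.example.reports` subdirectory are illustrative assumptions, not part of the Jira API.

```java
import com.atlassian.jira.config.util.JiraHome;

import java.io.File;

// Hypothetical component that keeps exported reports in the shared home,
// so all nodes in the cluster see the same files.
public class ReportStore {
    private final JiraHome jiraHome;

    public ReportStore(final JiraHome jiraHome) {
        this.jiraHome = jiraHome;
    }

    public File reportDirectory() {
        // getHome() returns the shared home in a clustered Jira; use
        // getLocalHome() only for genuinely node-local data such as scratch files.
        final File dir = new File(jiraHome.getHome(), "data/com.example.reports");
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IllegalStateException("Could not create " + dir);
        }
        return dir;
    }
}
```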
If a plugin wishes to use a scheduled task, it should do so using the Atlassian scheduler API, which will be paused on any passive nodes and activated when those nodes become active.
The previously available PluginScheduler is deprecated and should not be used for new work except where backward-compatible local scheduling is required. Also note that any jobs scheduled via the PluginScheduler will run on every Jira node in the cluster.
The Atlassian Scheduler API is available via Maven as follows:
Maven Coordinates
```xml
<dependency>
    <groupId>com.atlassian.scheduler</groupId>
    <artifactId>atlassian-scheduler-api</artifactId>
    <version>1.0</version>
</dependency>
```
Note: The Atlassian Scheduler API is a transitive dependency of jira-api
, so it should not be necessary to depend on it directly.
A plugin needs to do two things to schedule a job: register a Job Runner, and schedule the job itself.
A Job Runner is an instance of a class that implements JobRunner
and that is used to perform the actual work required. The scheduler will call the runJob(final JobRunnerRequest jobRunnerRequest)
method, passing a JobRunnerRequest
object that will contain any parameters supplied when the job was scheduled. The Job Runner is registered with a key that must be unique to the JobRunner, but any number of jobs can be scheduled to use the same JobRunner. It is suggested you use the plugin module key to namespace your JobRunner key.
Example
```java
schedulerService.registerJobRunner(
        JobRunnerKey.of("com.example.schedulingplugin.JOBXX"), new SendFilterJob());
```
Plugin registration is transient and plugins should register and unregister their JobRunners on "plugin enabled" and "plugin disabled" events respectively.
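One hedged way (among several) to tie registration to the plugin lifecycle is to register the JobRunner when the component starts and unregister it on shutdown; the sketch below reuses the SendFilterJob runner from the example above and assumes Spring lifecycle callbacks are available to the plugin.

```java
import com.atlassian.scheduler.SchedulerService;
import com.atlassian.scheduler.config.JobRunnerKey;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

public class SendFilterJobRegistrar implements InitializingBean, DisposableBean {
    private static final JobRunnerKey JOB_RUNNER_KEY =
            JobRunnerKey.of("com.example.schedulingplugin.JOBXX");

    private final SchedulerService schedulerService;

    public SendFilterJobRegistrar(final SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    @Override
    public void afterPropertiesSet() {
        // Registration is transient, so repeat it every time the plugin is enabled.
        schedulerService.registerJobRunner(JOB_RUNNER_KEY, new SendFilterJob());
    }

    @Override
    public void destroy() {
        // Unregister when the plugin is disabled or uninstalled.
        schedulerService.unregisterJobRunner(JOB_RUNNER_KEY);
    }
}
```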
Jobs can be scheduled to run in one of two ways:
Once across the whole cluster. Jobs scheduled to run across the whole cluster:
Locally. Jobs scheduled to run locally:
You may pass parameters to the scheduler when scheduling a job. These parameters must be Serializable
, and while you may include classes that are defined by your plugin, this may make it more difficult for your plugin to update a pre-existing schedule if the object's serialized form has changed. It is safest to store simple data types, like Lists, Maps, Strings, and Longs.
```java
Schedule schedule = Schedule.forCronExpression(cronExpression);
JobConfig jobConfig = JobConfig.forJobRunnerKey(JobRunnerKey.of("com.example.schedulingplugin.JOBXX"))
        .withSchedule(schedule)
        .withParameters(ImmutableMap.<String, Serializable>of("SUBSCRIPTION_ID", subscriptionId));
JobId jobId = JobId.of("com.atlassian.example.schedulingplugin:" + subId);
schedulerService.scheduleJob(jobId, jobConfig);
```
You can either create your own JobId or let the system dynamically create one for you. To check the status of a job or unschedule a job, you will need the JobId.
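As a sketch of that status check, the hypothetical helper below looks a job up by its JobId and unschedules it; the JobId format matches the earlier example, and the class name is an assumption for illustration.

```java
import com.atlassian.scheduler.SchedulerService;
import com.atlassian.scheduler.config.JobId;
import com.atlassian.scheduler.status.JobDetails;

public class SubscriptionScheduleAdmin {
    private final SchedulerService schedulerService;

    public SubscriptionScheduleAdmin(final SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    public boolean isScheduled(final long subId) {
        // getJobDetails returns null when no job with this id exists.
        final JobDetails details = schedulerService.getJobDetails(
                JobId.of("com.atlassian.example.schedulingplugin:" + subId));
        return details != null;
    }

    public void unschedule(final long subId) {
        schedulerService.unscheduleJob(
                JobId.of("com.atlassian.example.schedulingplugin:" + subId));
    }
}
```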
A detailed example of using the SchedulerService is available on Bitbucket.
Backward compatibility
If you wish to maintain a single version of your plugin that is compatible with previous versions of Jira as well as cluster-compatible in Jira 6.3 and above, you will need to use the atlassian-scheduler-compat library.
This library does not provide the full capabilities of Atlassian Scheduler but does provide functional equivalence with the facilities in Jira prior to version 6.3. A fully worked example is available on Bitbucket.
Common Problems
- In previous Jira versions, SAL's PluginScheduler created Services to represent plugin jobs. This association is broken by these changes, and neither the atlassian-scheduler API nor SAL's PluginScheduler will continue to do this. Plugin developers may have previously instructed Jira administrators to consult the Services configuration page to verify the plugin installation or change the interval at which jobs run. Plugin jobs are no longer listed there, so this documentation needs to be updated to reference the Scheduler Details page instead. Recurring jobs should be scheduled through the atlassian-scheduler API.
- If a plugin uses an Executor to run recurring tasks, it should be sure to shut down the Executor on a NodePassivatingEvent and restart it on a NodeActivatedEvent.
The Atlassian Cache API is available via Maven as follows:

Maven Coordinates

```xml
<dependency>
    <groupId>com.atlassian.cache</groupId>
    <artifactId>atlassian-cache-api</artifactId>
    <version>2.0.2</version>
</dependency>
```
Note: The Atlassian Cache API is a transitive dependency of jira-api
, so most plugins will not need to specify it explicitly.
Plugins should use JiraPropertySetFactory.buildCachingPropertySet
or JiraPropertySetFactory.buildCachingDefaultPropertySet
as opposed to creating them directly with PropertySetManager.getInstance("cached", ...)
to get a property set that is safe to use in a cluster.
Example
```java
// Self-loading cache using Atlassian Cache
private final Cache<Long, CacheObject<Avatar>> cache;

// To get or create the cache. This would normally be in your component's constructor.
cache = cacheFactory.getCache(CachingAvatarStore.class.getName() + ".cache",
        new AvatarCacheLoader(),
        new CacheSettingsBuilder().expireAfterAccess(30, TimeUnit.MINUTES).build());

// The loader class
private class AvatarCacheLoader implements CacheLoader<Long, CacheObject<Avatar>> {
    @Override
    public CacheObject<Avatar> load(@NotNull final Long avatarId) {
        return new CacheObject<Avatar>(CachingAvatarStore.this.delegate.getById(avatarId));
    }
}

// To retrieve an object from the cache
cache.get(avatarId).getValue();
```
Atlassian Cache also provides a safe mechanism for building a LazyReference, which is in effect a cache of a single object. This can be very useful if you need to maintain a cache of all instances of something, especially if the number is known to be relatively small.
```java
CachedReference<AllProjectRoles> projectRoles;

// To get or create the reference. This would normally be in your component's constructor.
projectRoles = cacheFactory.getCachedReference(getClass(), "projectRoles", new AllProjectRolesLoader());

// The Supplier class
class AllProjectRolesLoader implements Supplier<AllProjectRoles> {
    public AllProjectRoles get() {
        return new AllProjectRoles(delegate.getAllProjectRoles());
    }
}

// To retrieve the object
return projectRoles.get();

// When you add, update, or remove a project role, reset the reference.
// This will be propagated across the cluster.
projectRoles.reset();
```
Notes:

- Cache keys and values should be simple, serializable types (for example, java.lang.String).
Backward compatibility
If you wish to maintain a single version of your plugin that is compatible with previous versions of Jira as well as cluster-compatible in Jira 6.3 and above, you will need to use the atlassian-cache-compat library.
This library provides the full capabilities of Atlassian Cache but is a little more difficult to incorporate into your plugin. A fully worked example is available on Bitbucket.
Sometimes a plugin wants to ensure that a given operation executes on only one node of the cluster at a time. Plugins can do this by acquiring a cluster-wide lock using Atlassian's "Beehive" API. For example:
```java
package com.example.locking;

import com.atlassian.beehive.ClusterLockService;

import java.util.concurrent.locks.Lock;
import javax.annotation.Resource;

public class MyService {
    // Your lock should have a globally unique name, e.g. using the fully qualified class name, as here.
    private static final String LOCK_NAME = MyService.class.getName() + ".myLockedTask";

    @Resource // or inject via constructor, etc.
    private ClusterLockService clusterLockService;

    public void doSomethingThatRequiresAClusterLock() {
        final Lock lock = clusterLockService.getLockForName(LOCK_NAME);
        lock.lock();
        try {
            // Do the thing that needs a lock
        } finally {
            lock.unlock();
        }
    }
}
```
Points to note:
"lock_for_task_" + taskId
, because each lock is stored in the database permanently.com.atlassian.beehive:beehive-compat
library, which provides a ClusterLockServiceFactory
from which you can obtain the ClusterLockService
. This service will provide JVM locks in an unclustered environment and cluster-wide locks in a clustered environment. If your will only ever be used with a product that exposes a ClusterLockService
, you can inject that service directly without having to use the factory.Backward compatability
If you wish to maintain a single version of your plugin that is compatible with previous versions of Jira as well as cluster-compatible in Jira 6.3 and above, you will need to use the beehive-compat library.

This library provides the full capabilities of Atlassian Beehive but is a little more difficult to incorporate into your plugin. A fully worked example is available on Bitbucket.
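A minimal sketch of the factory pattern described above, assuming the ClusterLockServiceFactory lives in the beehive-compat library's com.atlassian.beehive.compat package and exposes a getClusterLockService() method; verify the package and method names against the library version you use.

```java
import com.atlassian.beehive.ClusterLockService;
import com.atlassian.beehive.compat.ClusterLockServiceFactory;

import java.util.concurrent.locks.Lock;

public class CompatLockingService {
    private final ClusterLockService lockService;

    public CompatLockingService(final ClusterLockServiceFactory factory) {
        // JVM-local locks on a single node, cluster-wide locks in a cluster.
        this.lockService = factory.getClusterLockService();
    }

    public void runExclusively(final Runnable task) {
        final Lock lock = lockService.getLockForName(getClass().getName() + ".task");
        lock.lock();
        try {
            task.run();
        } finally {
            lock.unlock();
        }
    }
}
```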
It is possible for a plugin to send and listen to short messages that are sent between nodes in a cluster. By specifying a 'channel', a plugin can register a ClusterMessageConsumer
with the ClusterMessagingService
. When a message is sent to that channel from another node, all registered ClusterMessageConsumers will be called. A basic example is given below:
```java
package com.example.messaging;

import com.atlassian.jira.cluster.ClusterMessageConsumer;
import com.atlassian.jira.cluster.ClusterMessagingService;

public class MyService {
    private static final String EXAMPLE_CHANNEL = "EXAMPLE";

    private final ClusterMessagingService clusterMessagingService;
    private final MessageConsumer messageConsumer;

    public MyService(final ClusterMessagingService clusterMessagingService) {
        this.clusterMessagingService = clusterMessagingService;
        messageConsumer = new MessageConsumer();
        clusterMessagingService.registerListener(EXAMPLE_CHANNEL, messageConsumer);
    }

    public void actionRequiringOtherNodesToBeNotified() {
        // Perform action
        clusterMessagingService.sendRemote(EXAMPLE_CHANNEL, "Action completed");
    }

    private static class MessageConsumer implements ClusterMessageConsumer {
        @Override
        public void receive(final String channel, final String message, final String senderId) {
            // Handle message
        }
    }
}
```
Points to note:
At this time, there is no infrastructure supporting the activation and passivation of nodes and no strong use case for this functionality has been identified. Generally, there should be no need for plugins to handle these events.
There are four events that will be issued in the following scenarios:

- NodeActivatingEvent - issued at the start of the activation process.
- NodeActivatedEvent - issued at the end of the activation process. Plugins may listen to this if they have special handling to perform when a node becomes active.
- NodePassivatingEvent - issued at the start of the passivation process. Plugins should stop any background tasks they are performing when they receive this event.
- NodePassivatedEvent - issued at the end of the passivation process.

For more information on listening for events, see Writing Jira event listeners with the atlassian-event library.
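The advice above to stop background tasks on passivation can be sketched as an event listener; this assumes the event classes are in the com.atlassian.jira.cluster package and that the component registers itself with the EventPublisher, so treat those details as assumptions to verify.

```java
import com.atlassian.event.api.EventListener;
import com.atlassian.event.api.EventPublisher;
import com.atlassian.jira.cluster.NodeActivatedEvent;
import com.atlassian.jira.cluster.NodePassivatingEvent;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class BackgroundTaskManager {
    private volatile ScheduledExecutorService executor;

    public BackgroundTaskManager(final EventPublisher eventPublisher) {
        // The listener must be registered (and later unregistered) with the EventPublisher.
        eventPublisher.register(this);
    }

    @EventListener
    public void onPassivating(final NodePassivatingEvent event) {
        // Stop background work before this node goes passive.
        final ScheduledExecutorService current = executor;
        if (current != null) {
            current.shutdown();
            executor = null;
        }
    }

    @EventListener
    public void onActivated(final NodeActivatedEvent event) {
        // Restart background work when the node becomes active again.
        executor = Executors.newSingleThreadScheduledExecutor();
        // Re-submit any recurring local tasks here.
    }
}
```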
No web requests should reach a plugin when the node is in the passive state. It is the responsibility of the load balancer configuration and Jira to block these.
Don't keep state! The less state a plugin holds onto, the more scalable it will be and the more easily it will adapt to a clustered environment.
When you think your plugin is ready, you can test it in a clustered environment by using our guide: Configuring a Jira Cluster for Plugin development.
When you list your first cluster-compatible plugin version in the Marketplace, modify your atlassian-plugin.xml
descriptor file. This tells the Marketplace and UPM that your plugin is Data Center (or cluster) compatible. Add the following parameters inside the plugin-info
section:
```xml
<param name="atlassian-data-center-status">compatible</param>
<param name="atlassian-data-center-compatible">true</param>
```
- The atlassian-data-center-status parameter indicates to Marketplace and UPM that your app has been submitted for technical review according to these Data Center requirements.
- The atlassian-data-center-compatible parameter was previously used to indicate Data Center compatibility and should be included for backward compatibility with older UPM versions.

Here's an example of a generic plugin-info block with these parameters:
```xml
<plugin-info>
    <description>${project.description}</description>
    <version>${project.version}</version>
    <vendor name="${project.organization.name}" url="${project.organization.url}" />
    <param name="atlassian-data-center-status">compatible</param>
    <param name="atlassian-data-center-compatible">true</param>
</plugin-info>
```