Developing Data Center compatible add-ons for Crowd

Crowd provides an option to deploy in a clustered environment with Crowd Data Center.

Add-ons written for Crowd Server will largely "just work" in Crowd Data Center too. However, more advanced add-ons may need to be aware of some features that are specific to a Data Center deployment.

The information on this page will help you develop your add-on to run in a highly-available, clustered environment.

Cluster topology

A Crowd Data Center cluster has the following characteristics:

  • All Crowd instances share a single relational database instance.
  • Multiple instances may be active at the same time. 
    • New instances can be started and join the cluster at any time.
    • Existing instances can be terminated, and leave the cluster at any time.
  • A load balancer is configured to distribute requests between available nodes.
    • The load balancer directs all requests in the same HTTP session to the same node.
    • The load balancer handles requests from users, and from integrated applications.
  • Each node has access to its own local home directory, and to a shared home directory that is accessible by all nodes.

Plugin installation

Currently in Crowd, you can't install plugins at runtime. To install a plugin, place it in the plugins sub-directory of the shared home directory ($CROWD_HOME/shared/plugins), and then restart all nodes.

All nodes must be running the same set of plugins; otherwise requests might be handled differently depending on which node they're routed to.

Shared home directory

You can retrieve the location of the local and shared home directories with com.atlassian.crowd.service.HomeDirectoryService.

Because every node in the cluster needs access to the shared home directory, it may be located on a remote file system with unspecified concurrency and locking semantics. For that reason, it shouldn't be used to store large amounts of data, or to communicate between the nodes.
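
The following is a minimal sketch of resolving plugin storage locations via this service. The accessor names used here are assumptions; check the HomeDirectoryService javadoc for your Crowd version.

import java.nio.file.Path;

import com.atlassian.crowd.service.HomeDirectoryService;

class MyPluginDirectories {
    private final HomeDirectoryService homeDirectoryService;

    public MyPluginDirectories(HomeDirectoryService homeDirectoryService) {
        this.homeDirectoryService = homeDirectoryService;
    }

    public Path nodeLocalWorkDir() {
        // node-specific working files belong in the local home
        // (getHomePath() is an assumed accessor name)
        return homeDirectoryService.getHomePath().resolve("my-plugin");
    }

    public Path sharedConfigDir() {
        // keep only small, genuinely shared files here - the shared home may be
        // a remote file system with weak concurrency and locking guarantees
        // (getSharedHomePath() is an assumed accessor name)
        return homeDirectoryService.getSharedHomePath().resolve("my-plugin");
    }
}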

Scheduled jobs

To execute a scheduled job, the plugin needs to register a job runner, and then schedule the actual job.

Registering a job runner

Job runners specify what should be executed as part of the job. They're registered locally and are not persisted, so a job runner should be registered when the plugin starts and unregistered when the plugin is stopped.

To register a job runner, use com.atlassian.scheduler.SchedulerService#registerJobRunner.

Scheduling the actual job

Jobs can be scheduled either locally for one node, or globally for the whole cluster.

  • Local jobs are registered only on the node that made the call to scheduleJob(). They're not persisted, and will not be available after the node is restarted. To schedule a local job, use com.atlassian.scheduler.config.RunMode#RUN_LOCALLY (a sketch of a local job follows the cluster-wide example below).
  • Cluster-wide jobs are registered and unregistered globally, for the whole cluster, and the same applies to scheduling and unscheduling them. They're persisted until the configuration is removed; at the scheduled time, one node in the cluster will attempt to execute the job. If you schedule frequent, long-running jobs, another execution of the job might start before the previous one has finished. At plugin startup, always check whether the job is already scheduled, because scheduling a new one will overwrite the existing schedule. To schedule a cluster-wide job, use com.atlassian.scheduler.config.RunMode#RUN_ONCE_PER_CLUSTER.

The following example shows a cluster-wide job.

class MyScheduledJobRunner implements JobRunner {
    private final SchedulerService schedulerService;

    // keys must be unique - usually you want to prefix them with your plugin key
    public static final JobRunnerKey JOB_RUNNER_KEY = JobRunnerKey.of("plugin.key.unique.jobrunner.id");
    public static final JobId JOB_ID = JobId.of("plugin.key.unique.job.id");

    public MyScheduledJobRunner(SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    @PostConstruct
    public void registerJob() throws SchedulerServiceException {
        // register the implementation to be called by the scheduler
        schedulerService.registerJobRunner(JOB_RUNNER_KEY, this);

        if (schedulerService.getJobDetails(JOB_ID) == null) {
            // schedule the job if it is not already scheduled
            schedulerService.scheduleJob(JOB_ID, JobConfig.forJobRunnerKey(JOB_RUNNER_KEY)
                    .withSchedule(Schedule.forCronExpression("0 0 0/4 * * ?"))
                    .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
            );
        }
    }

    @PreDestroy
    public void unregisterJob() {
        // the plugin is being disabled - this node should no longer execute the job (but other nodes might)
        schedulerService.unregisterJobRunner(JOB_RUNNER_KEY);
    }

    @Override
    public JobRunnerResponse runJob(JobRunnerRequest request) {
        // this will be called by the scheduler
        return JobRunnerResponse.success("Great success!");
    }
}
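
For comparison, here is a minimal sketch of scheduling a node-local job. The job runner itself is registered exactly as in the example above; the identifiers and cron expression are illustrative.

import com.atlassian.scheduler.SchedulerService;
import com.atlassian.scheduler.SchedulerServiceException;
import com.atlassian.scheduler.config.JobConfig;
import com.atlassian.scheduler.config.JobId;
import com.atlassian.scheduler.config.JobRunnerKey;
import com.atlassian.scheduler.config.RunMode;
import com.atlassian.scheduler.config.Schedule;

class MyLocalJobScheduler {
    // illustrative identifiers - prefix them with your plugin key
    private static final JobRunnerKey LOCAL_JOB_RUNNER_KEY = JobRunnerKey.of("plugin.key.unique.local.jobrunner.id");
    private static final JobId LOCAL_JOB_ID = JobId.of("plugin.key.unique.local.job.id");

    private final SchedulerService schedulerService;

    public MyLocalJobScheduler(SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    public void scheduleLocalJob() throws SchedulerServiceException {
        // RUN_LOCALLY jobs exist only on this node and are not persisted,
        // so they can simply be (re)scheduled every time the plugin starts on a node
        schedulerService.scheduleJob(LOCAL_JOB_ID, JobConfig.forJobRunnerKey(LOCAL_JOB_RUNNER_KEY)
                .withSchedule(Schedule.forCronExpression("0 30 * * * ?"))
                .withRunMode(RunMode.RUN_LOCALLY)
        );
    }
}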

Cluster-wide locks

Since Crowd nodes run in separate JVMs, the locking primitives provided by Java will not work across the nodes. To acquire a cluster-wide lock, you can use com.atlassian.beehive.ClusterLockService#getLockForName instead.

Cluster-wide locks have significantly higher overhead than JVM-local locks, and shouldn't be used for highly contended operations. These locks are also reentrant. If a node is terminated or stops responding while holding a cluster-wide lock, other nodes will eventually be able to acquire the lock.

class MyClusterLock {
    private final ClusterLockService clusterLockService;

    public MyClusterLock(ClusterLockService clusterLockService) {
        this.clusterLockService = clusterLockService;
    }

    public void oncePerCluster() {
        final ClusterLock clusterLock = clusterLockService.getLockForName("plugin.key.unique.lock.name");
        if (clusterLock.tryLock()) {
            try {
                // this will be executed by only one node at a time
            } finally {
                clusterLock.unlock();
            }
        } else {
            // some other thread (either on this node or on a different one) is already holding the lock
        }
    }
}

Caching in a cluster

Crowd Data Center supports only a local cache on each node. Replicated caches are not supported.

All caches created with com.atlassian.cache.CacheManager will be created as local, regardless of the settings.
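
As a minimal sketch using the atlassian-cache API (the cache name, settings, and loader shown here are illustrative), a plugin can still cache values per node; just remember that whatever one node caches is invisible to the others.

import java.util.concurrent.TimeUnit;

import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheManager;
import com.atlassian.cache.CacheSettingsBuilder;

class MyNodeLocalCache {
    private final Cache<String, String> cache;

    public MyNodeLocalCache(CacheManager cacheManager) {
        // the cache is created as local to this node, regardless of the requested settings
        this.cache = cacheManager.getCache("plugin.key.unique.cache.name",
                this::loadValue,
                new CacheSettingsBuilder()
                        .expireAfterWrite(10, TimeUnit.MINUTES)
                        .build());
    }

    private String loadValue(String key) {
        // hypothetical expensive computation or lookup
        return "value-for-" + key;
    }

    public String lookup(String key) {
        // each node populates and expires its own copy; a value cached here
        // is not visible to the other nodes
        return cache.get(key);
    }
}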

Cluster messages and communication between nodes

In general, the Crowd Data Center nodes should act independently of one another, and shouldn't rely on being able to communicate with other nodes, or on other nodes being present at all.

You can listen to cluster-wide messages with com.atlassian.crowd.service.cluster.ClusterMessageService, but it's not intended for large or high-volume communication.

Listeners registered on the specified channel (ClusterMessageService#registerListener) will be notified when any node calls ClusterMessageService#publish() with this channel. The body of the message will be passed to the implementation of ClusterMessageListener#handleMessage. The node that publishes the message will not be notified of it.

If a node is unavailable or being restarted, it will not receive any cluster-wide messages.

class MyClusterMessages implements ClusterMessageListener {
    private final ClusterMessageService clusterMessageService;
    private static final String CLUSTER_MESSAGE_CHANNEL = "plugin.key.unique.channel";

    public MyClusterMessages(ClusterMessageService clusterMessageService) {
        this.clusterMessageService = clusterMessageService;
    }

    @PostConstruct
    public void registerListener() {
        // register this listener to receive messages sent to the channel
        clusterMessageService.registerListener(this, CLUSTER_MESSAGE_CHANNEL);
    }

    @PreDestroy
    public void unregisterListener() {
        clusterMessageService.unregisterListener(this);
    }

    public void notifyOtherNodes() {
        // publishes the message to the other cluster nodes
        clusterMessageService.publish(CLUSTER_MESSAGE_CHANNEL, "hey");
    }

    @Override
    public void handleMessage(String channel, String message) {
        // this will be called on the other nodes once they receive the message
        // (but not on the node that published it)
    }
}

Event handling

Crowd doesn't support cluster-wide events. All events published in Crowd (e.g. UserCreatedEvent) are handled only on the node that raised them.
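
If other nodes do need to react to something that happened locally, one option is to combine a local event listener with the ClusterMessageService described above. The following is a sketch only: the package of UserCreatedEvent and its accessors are assumptions, and the channel name is illustrative.

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

import com.atlassian.crowd.event.user.UserCreatedEvent; // package assumed - check your Crowd version
import com.atlassian.crowd.service.cluster.ClusterMessageService;
import com.atlassian.event.api.EventListener;
import com.atlassian.event.api.EventPublisher;

class MyUserCreatedNotifier {
    // illustrative channel name - prefix it with your plugin key
    private static final String CHANNEL = "plugin.key.user.created.channel";

    private final EventPublisher eventPublisher;
    private final ClusterMessageService clusterMessageService;

    public MyUserCreatedNotifier(EventPublisher eventPublisher, ClusterMessageService clusterMessageService) {
        this.eventPublisher = eventPublisher;
        this.clusterMessageService = clusterMessageService;
    }

    @PostConstruct
    public void register() {
        eventPublisher.register(this);
    }

    @PreDestroy
    public void unregister() {
        eventPublisher.unregister(this);
    }

    @EventListener
    public void onUserCreated(UserCreatedEvent event) {
        // the event is only raised on this node; tell the other nodes about it
        // (keep the payload small - cluster messages are not meant for bulk data)
        // getUser().getName() is an assumed accessor; adjust to the event's actual API
        clusterMessageService.publish(CHANNEL, event.getUser().getName());
    }
}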
