Clustered Bitbucket Data Center instances run the same code as standalone instances, so plugins written for a standalone instance will largely "just work" in a clustered Data Center deployment too. But for more advanced plugins, there are a few features of the Data Center product that they may need to be aware of:
Clustering: in a Data Center instance the Bitbucket web application, including all plugins, runs in multiple JVMs on different machines (cluster nodes). This has a number of consequences for plugins, not least of which is that a plugin cannot simply store state in memory and expect it to be equally visible on all cluster nodes. Testing your plugin in a cluster is also more involved and requires some additional configuration.
Mirroring: a Data Center instance may consist of one or more mirror node(s) in addition to the primary ("upstream") instance. Plugins cannot be installed on mirrors directly, but if they make changes to repository state on the upstream they must follow a few simple rules so that any mirror(s) can keep in sync with the changes.
Disaster Recovery: a Data Center instance may be replicated at the database and file system level to a standby instance that can take over from the primary instance in a disaster scenario. Plugins are included in the state that's replicated to the standby automatically, so most plugins do not even need to be aware that all this may be happening. But it does mean that plugins should be able to tolerate situations where the database and home directory are slightly inconsistent with each other. And if they maintain external indexes or other state that is not included in the database and shared home directory, they may also want to consider hooking into the event fired on a failover to rebuild those indexes if they are not automatically self-healing.
Bitbucket Data Center has a local home and a shared home for all instances, not just clustered instances. This is intended to make it simpler for plugin developers to write their plugins, knowing that BITBUCKET_HOME will be laid out consistently on standalone and clustered instances. The home directory is laid out as follows:
```
BITBUCKET_HOME
|-- bin
|-- caches
|-- export
|-- lib
|-- log
|-- shared (BITBUCKET_SHARED_HOME)
|   |-- config
|   |-- data
|   |   |-- attachments
|   |   |-- avatars
|   |   |-- repositories
|   |-- plugins
|   |   |-- installed-plugins
|   |-- bitbucket.properties
|-- tmp
```
BITBUCKET_SHARED_HOME, by default, is BITBUCKET_HOME/shared. Plugin developers should not rely on this, however; the location of BITBUCKET_SHARED_HOME can be overridden using environment variables or system properties. Instead, plugin developers should use:

ApplicationPropertiesService.getHomeDir() => BITBUCKET_HOME
ApplicationPropertiesService.getSharedHomeDir() => BITBUCKET_SHARED_HOME
In a clustered environment, BITBUCKET_SHARED_HOME
is guaranteed to be the
same filesystem on every node, allowing data that is stored there to be
accessed by all nodes.
Warning:

BITBUCKET_SHARED_HOME will generally be a network mount, such as an NFS partition. This imposes some special considerations: where possible, plugins should minimize their use of the filesystem if they need to use BITBUCKET_SHARED_HOME.
You can ensure a compatible version of shared libraries, like Atlassian Beehive, Atlassian Cache and Atlassian Scheduler, is used by importing Bitbucket Data Center's parent POM, like this:
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.atlassian.bitbucket.server</groupId>
            <artifactId>bitbucket-parent</artifactId>
            <version>${bitbucket.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
Where bitbucket.version
is defined as the minimum version of Bitbucket Data Center you
want your plugin to support.
Communication among cluster nodes is facilitated by Java serialization. This means that when using distributed types such as replicateViaCopy() Atlassian Cache Caches, job data in Atlassian Scheduler (even if you're using RunMode.RUN_LOCALLY!), or the BucketedExecutor, the objects you use must be Serializable. Externalizable extends Serializable and is also supported.
Most Bitbucket Data Center types, such as Project and Repository, are not Serializable and cannot be used directly in these contexts. Instead, their IDs (Project.getId(), Repository.getId(), the pair of [PullRequest.getToRef().getRepository().getId(), PullRequest.getId()], etc.) should be serialized, and their respective services (ProjectService.getById(int), RepositoryService.getById(int), PullRequestService.getById(int, long), etc.) should then be used to re-retrieve the full objects as necessary. The services themselves are also not Serializable.
Because objects must be re-retrieved prior to processing, plugin code should account for the fact that the state of the objects may have changed:

- getById(int) calls should appropriately handle null returns

This is not intended as an exhaustive list, but rather to promote good programming practice and more robust processing. If any of the exact state of the object at serialization time matters, the relevant state should be extracted and included in the object's serialized form. This should be kept to a minimum, however, to keep the serialized representation of objects as small as possible. Large serialized blobs have impacts both on Bitbucket Data Center's memory footprint and on the efficiency of inter-node communication.
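To illustrate the ID-based pattern, here is a minimal, self-contained sketch (the ExampleTask type is hypothetical, not part of the Bitbucket API): the task captures only the repository's int ID, and a serialization round trip, such as inter-node transport would perform, preserves it:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

//Hypothetical task: capture the (Serializable) ID, not the Repository itself
class ExampleTask implements Serializable {

    private static final long serialVersionUID = 1L;

    private final int repositoryId;

    ExampleTask(int repositoryId) {
        this.repositoryId = repositoryId;
    }

    int getRepositoryId() {
        return repositoryId;
    }

    //Serialize and deserialize the task, as inter-node transport would
    static ExampleTask roundTrip(ExampleTask task) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(task);
            }
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (ExampleTask) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

On the processing side, the ID would then be passed to RepositoryService.getById(int), handling a null return if the repository has since been deleted.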
Bitbucket Data Center standalone instances are considered one-node clusters.
That means the same Serializable
rules apply regardless of whether
multiple nodes are actually present. This is intended to make plugin developers'
lives simpler. Bitbucket Data Center behaves consistently both clustered and standalone,
so plugins written for a cluster will work correctly standalone.
In simple plugins, it is common to cache data using ConcurrentMaps or Guava Caches. This caching will not work correctly in a cluster because updating the data on one node will leave stale data in the caches on other nodes.
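The problem can be seen in a toy, self-contained sketch (plain Java collections, not the Bitbucket API): two "nodes" each hold their own map, and an update applied on one node leaves the other serving stale data:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

//Toy two-node simulation: each node has its own in-memory "cache"
class NodeLocalCacheDemo {

    static String valueSeenByNode2AfterNode1Updates() {
        Map<String, String> node1Cache = new ConcurrentHashMap<>();
        Map<String, String> node2Cache = new ConcurrentHashMap<>();

        //Both nodes initially cache the same value
        node1Cache.put("setting", "old");
        node2Cache.put("setting", "old");

        //Node 1 handles an update request; only its local map changes
        node1Cache.put("setting", "new");

        //Node 2 still serves the stale value
        return node2Cache.get("setting");
    }
}
```

Atlassian Cache, described below, keeps the nodes' caches consistent instead.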
Plugins should use Atlassian Cache, an API provided by Bitbucket Data Center for plugins. You can add Atlassian Cache to your plugin with the following Maven dependency:
```xml
<dependency>
    <groupId>com.atlassian.cache</groupId>
    <artifactId>atlassian-cache-api</artifactId>
    <scope>provided</scope>
</dependency>
```
To use Atlassian Cache, you:

1. Add <component-import key="cacheFactory" interface="com.atlassian.cache.CacheFactory"/> to your atlassian-plugin.xml
2. Add CacheFactory to the relevant component's constructor
3. Create your Cache, optionally supplying CacheSettings (created through the CacheSettingsBuilder class) to control many aspects of how the cache works:

```java
cacheFactory.getCache("com.example.plugin:example-plugin-key:Example Cache",
        new CacheLoader<String, String>() {

            @Nonnull
            @Override
            public String load(@Nonnull String key) {
                return "Value";
            }
        });
```
Retrieve your Cache once, in your constructor, and use the same instance afterward; continuously re-fetching the cache from the CacheFactory is inefficient.

If you are using a replicateViaCopy() cache, your keys and values must be Serializable. Externalizable extends Serializable and is also acceptable.
Without any intervention, scheduled tasks will execute independently on each Bitbucket Data Center instance in a cluster. In some circumstances, this is desirable behavior. In other situations, you will need to use cluster-wide locking to ensure that jobs are only executed once per cluster. This is accomplished by using Atlassian Scheduler.
You can add Atlassian Scheduler to your plugin with the following Maven dependency:
```xml
<dependency>
    <groupId>com.atlassian.scheduler</groupId>
    <artifactId>atlassian-scheduler-api</artifactId>
    <scope>provided</scope>
</dependency>
```
To use Atlassian Scheduler, you:

1. Add <component-import key="schedulerService" interface="com.atlassian.scheduler.SchedulerService"/> to your atlassian-plugin.xml
2. Add SchedulerService to the relevant component's constructor
3. Register a JobRunner
4. Schedule your job with a JobConfig, which describes the job's RunMode and Schedule
5. Unregister the JobRunner during shutdown

A JobRunner handles JobRunnerRequests and performs the actual processing. Generally each node in a cluster will register its own JobRunner. This allows all of the nodes in the cluster to run the job, allowing the cluster to more efficiently distribute load.
```java
public class MyJobRunner implements JobRunner {

    @Override
    public JobRunnerResponse runJob(JobRunnerRequest request) {
        //Do some meaningful work
        return JobRunnerResponse.success();
    }
}

schedulerService.registerJobRunner("com.example.plugin:example-plugin-key:ExampleJobRunner", new MyJobRunner());
```
When a job is scheduled, the key assigned to the JobRunner
when it is
registered is used to associate the job with its runner:
```java
schedulerService.scheduleJob(
        JobId.of("com.example.plugin:example-plugin-key:ExampleJob"),
        JobConfig.forJobRunnerKey("com.example.plugin:example-plugin-key:ExampleJobRunner")
                .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
                .withSchedule(Schedule.forInterval(intervalInMillis,
                        new Date(System.currentTimeMillis() + intervalInMillis))));
```
During application shutdown, you should unregister the JobRunner
so
that the node shutting down is no longer considered a candidate for
running the job:
```java
schedulerService.unregisterJobRunner("com.example.plugin:example-plugin-key:ExampleJobRunner");
```
The easiest way to put together the register and unregister lifecycle is
to use the Atlassian SAL LifecycleAware
interface on your component:
```java
public class ExampleComponent implements LifecycleAware {

    private static final JobId JOB_ID = JobId.of("com.example.plugin:example-plugin-key:ExampleJob");
    private static final long JOB_INTERVAL = TimeUnit.MINUTES.toMillis(30L);
    private static final String JOB_RUNNER_KEY = "com.example.plugin:example-plugin-key:ExampleJobRunner";

    private final SchedulerService schedulerService;

    public ExampleComponent(SchedulerService schedulerService) {
        this.schedulerService = schedulerService;
    }

    @Override
    public void onStart() {
        //The JobRunner could be another component injected in the constructor, a
        //private nested class, etc. It just needs to implement JobRunner
        schedulerService.registerJobRunner(JOB_RUNNER_KEY, new MyJobRunner());
        try {
            schedulerService.scheduleJob(JOB_ID, JobConfig.forJobRunnerKey(JOB_RUNNER_KEY)
                    .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
                    .withSchedule(Schedule.forInterval(JOB_INTERVAL,
                            new Date(System.currentTimeMillis() + JOB_INTERVAL))));
        } catch (SchedulerServiceException e) {
            //LifecycleAware.onStart() cannot throw checked exceptions
            throw new IllegalStateException("Failed to schedule job", e);
        }
    }

    @Override
    public void onStop() {
        schedulerService.unregisterJobRunner(JOB_RUNNER_KEY);
    }
}
```
Note:
In order to use LifecycleAware
you need to add the following
dependency to your plugin:
```xml
<dependency>
    <groupId>com.atlassian.sal</groupId>
    <artifactId>sal-api</artifactId>
    <scope>provided</scope>
</dependency>
```
Job data provided in JobConfig
is required to be Serializable
, regardless of RunMode
. The backing store for job
data may serialize objects even for RunMode.RUN_LOCALLY
jobs.
Warning:
Generally you should not unregister the job itself. When you unregister a job, that job is unregistered across the cluster, not just on the node shutting down.
When multiple nodes schedule the same job but with a different schedule (even differing by milliseconds) then the last registration will win and replace the old job configuration and schedule. If the schedule is eligible to run immediately and multiple nodes take this action at close to the same time, then the job might run more than once as the instances replace one another.
Java's locking primitives, like Lock
, synchronized
, etc., only apply to a single JVM and will not properly serialize
operations in a cluster. Instead, you need to use cluster-wide locks. This is accomplished by using either:

- Atlassian Beehive's ClusterLockService
- the LockService, part of the bitbucket-api module; the LockService is specific to Bitbucket Data Center

You can add Atlassian Beehive to your plugin with the following Maven dependency:

```xml
<dependency>
    <groupId>com.atlassian.beehive</groupId>
    <artifactId>beehive-api</artifactId>
    <scope>provided</scope>
</dependency>
```
To use Atlassian Beehive's ClusterLockService, you:

1. Add <component-import key="clusterLockService" interface="com.atlassian.beehive.ClusterLockService"/> to your atlassian-plugin.xml
2. Add ClusterLockService to the relevant component's constructor
3. Retrieve a ClusterLock (which extends the standard Java Lock interface):

```java
public class ExampleComponent {

    private final ClusterLock taskLock;

    public ExampleComponent(ClusterLockService lockService) {
        taskLock = lockService.getLockForName(getClass().getName() + ":TaskLock");
    }

    public void performTask() {
        if (taskLock.tryLock()) {
            try {
                //Do something, knowing no other node in the cluster is accessing
                //whatever resource you're protecting
            } finally {
                taskLock.unlock();
            }
        } else {
            //Another node in the cluster holds the lock already
        }
    }
}
```
LockService
You can use the LockService
by adding a dependency on bitbucket-api
,
generally already a dependency of any Bitbucket Data Center plugin:
```xml
<dependency>
    <groupId>com.atlassian.bitbucket.server</groupId>
    <artifactId>bitbucket-api</artifactId>
    <scope>provided</scope>
</dependency>
```
The LockService
is used in a similar way to Atlassian
Beehive's ClusterLockService
:
1. Add <component-import key="lockService" interface="com.atlassian.bitbucket.concurrent.LockService"/> to atlassian-plugin.xml
2. Add LockService to the relevant component's constructor
3. Retrieve a Lock:

```java
public class ExampleComponent {

    private final Lock taskLock;

    public ExampleComponent(LockService lockService) {
        taskLock = lockService.getLock(getClass().getName() + ":TaskLock");
    }

    public void performTask() {
        if (taskLock.tryLock()) {
            try {
                //Do something, knowing no other node in the cluster is accessing
                //whatever resource you're protecting
            } finally {
                taskLock.unlock();
            }
        } else {
            //Another node in the cluster holds the lock already
        }
    }
}
```
In addition to Locks, the LockService provides access to more specialized RepositoryLocks and PullRequestLocks.

- RepositoryLock allows concurrent operations on different Repository instances, but serializes operations on the same instance
- PullRequestLock allows concurrent operations on different PullRequest instances, but serializes operations on the same instance

These locks can be used to reduce contention, by allowing concurrent operations on different instances, while still ensuring each instance is acted on serially. These locks are cluster-safe, meaning only one node in the cluster will operate on a given Repository or PullRequest at once.
Note:

ClusterLock, Lock, PullRequestLock and RepositoryLock are not Serializable and cannot be transferred between nodes.

RepositoryLock and PullRequestLock are namespaced. The same Repository or PullRequest can be locked simultaneously in multiple RepositoryLock or PullRequestLock instances, respectively, which have different names.

It is not possible, from a plugin, to access the locks the host application uses to protect its own processing. They are intentionally stored in an unreachable namespace.
ExecutorServices are useful for managing threaded jobs. Bitbucket Data Center provides a ScheduledExecutorService which can be imported by plugins to use a standard thread pool. However, ExecutorServices are local to the node where they are created. In a cluster, to efficiently distribute processing, it is sometimes desirable to allow scheduling a task on one node and processing it on another. To facilitate this, Bitbucket Data Center provides a BucketedExecutor in bitbucket-api, which is generally a dependency of any Bitbucket Data Center plugin:
```xml
<dependency>
    <groupId>com.atlassian.bitbucket.server</groupId>
    <artifactId>bitbucket-api</artifactId>
    <scope>provided</scope>
</dependency>
```
To use the BucketedExecutor, you:

1. Add <component-import key="concurrencyService" interface="com.atlassian.bitbucket.concurrent.ConcurrencyService"/> to atlassian-plugin.xml
2. Add ConcurrencyService to the relevant component's constructor
3. Retrieve a BucketedExecutor:

```java
public class MyTaskRequest implements Serializable {

    //Repository is not Serializable
    private final int repositoryId;

    public MyTaskRequest(Repository repository) {
        repositoryId = repository.getId();
    }

    public int getRepositoryId() {
        return repositoryId;
    }
}

Function<MyTaskRequest, String> bucketFunction = new Function<MyTaskRequest, String>() {

    @Override
    public String apply(MyTaskRequest task) {
        return String.valueOf(task.getRepositoryId());
    }
};

BucketProcessor<MyTaskRequest> processor = new BucketProcessor<MyTaskRequest>() {

    @Override
    public void process(@Nonnull String bucketId, @Nonnull List<MyTaskRequest> tasks) {
        for (MyTaskRequest task : tasks) {
            Repository repository = repositoryService.getById(task.getRepositoryId());
            if (repository == null) {
                log.info("Repository {} was deleted", task.getRepositoryId());
                continue;
            }
            //Do some processing
        }
    }
};

BucketedExecutor<MyTaskRequest> executor = concurrencyService.getBucketedExecutor(
        "com.example.plugin:example-plugin-key:ExampleBucketedExecutor",
        new BucketedExecutorSettings.Builder<>(bucketFunction, processor)
                //How many tasks to process at once? Integer.MAX_VALUE processes the
                //whole bucket, 1 will receive one task at a time
                .batchSize(Integer.MAX_VALUE)
                //How many retries, if processing fails? After the retries are
                //exhausted, the requests that failed will be discarded
                .maxAttempts(1)
                //How many threads can process tasks (from different buckets) at the
                //same time? Concurrency can be PER_NODE or PER_CLUSTER
                .maxConcurrency(config.getThreadCount(), ConcurrencyPolicy.PER_CLUSTER)
                .build());
```
Each BucketedExecutor is given a Guava Function which is used to divide tasks into buckets. The plugin developer is free to define buckets as coarse or fine as desired. The BucketedExecutor offers two very useful guarantees:
- Tasks in a given bucket are passed to the BucketProcessor in the same order they were submitted in
- A given bucket is processed by only one node at a time, so BucketProcessors generally do not require locking, if the buckets are well-defined

Warning:
The task type used to specialize the BucketedExecutor generic must be Serializable. Even standalone instances (which are considered one-node clusters) will serialize the tasks as they are submitted, prior to invoking the BucketProcessor.
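The bucketing idea itself can be sketched in plain Java (a hypothetical helper, not the Bitbucket API): a bucket function maps each task to a bucket ID, and each bucket's tasks keep their submission order:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

//Toy sketch of the bucketing concept behind BucketedExecutor
class BucketingDemo {

    //Group submitted tasks by bucket ID, preserving submission order per bucket
    static Map<String, List<Integer>> bucket(List<Integer> repositoryIds,
                                             Function<Integer, String> bucketFunction) {
        Map<String, List<Integer>> buckets = new LinkedHashMap<>();
        for (Integer id : repositoryIds) {
            buckets.computeIfAbsent(bucketFunction.apply(id), k -> new ArrayList<>()).add(id);
        }
        return buckets;
    }
}
```

In the real BucketedExecutor the grouping and ordering are handled for you; the sketch only shows how a bucket function partitions the submitted tasks.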
A bucket's concurrency policy can be either ConcurrencyPolicy.PER_CLUSTER or ConcurrencyPolicy.PER_NODE. PER_CLUSTER is used if you need to throttle concurrency because of a global resource (e.g. a remote service or shared file system). PER_NODE is used if you need to throttle concurrency because of a local resource (e.g. CPU or memory on the node).
When ConcurrencyPolicy.PER_CLUSTER is used, the concurrency limit is divided by the number of nodes in the cluster to determine how many buckets each node can process concurrently. The result is rounded up, such that every node in the cluster is always allowed to process at least one bucket.
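That rounding rule is a ceiling division; as a sketch of the arithmetic only (the method name is hypothetical, not part of the API):

```java
//Sketch: how a PER_CLUSTER concurrency limit maps to a per-node limit
class ConcurrencyMath {

    //Divide the cluster-wide limit by the node count, rounding up so every
    //node is always allowed to process at least one bucket
    static int perNodeLimit(int clusterLimit, int nodeCount) {
        return (clusterLimit + nodeCount - 1) / nodeCount;
    }
}
```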
- maxConcurrency(2, ConcurrencyPolicy.PER_CLUSTER) in a three-node cluster behaves like maxConcurrency(1, ConcurrencyPolicy.PER_NODE): 2/3 = 0.667 ~ 1 per node
- maxConcurrency(3, ConcurrencyPolicy.PER_CLUSTER) in a two-node cluster behaves like maxConcurrency(2, ConcurrencyPolicy.PER_NODE): 3/2 = 1.5 ~ 2 per node

Bitbucket Data Center does not offer cluster-wide events. Events, such as
RepositoryPushEvent
, are handled only on the node that raised them. In
other words, whichever node processed the push will be the only node
that processes events for that push. This is an intentional design
decision. The development team feels that this makes implementing a
clustered plugin simpler, because plugin developers are not required
to prevent re-processing the same event on each node.
Installation for the Bitbucket Data Center cluster administrator is the same as with a
single instance. Uploading a plugin through the web interface will store
the plugin in BITBUCKET_SHARED_HOME
and ensure that it
is installed on all instances of the cluster.
Currently, cluster instances must be homogeneous: every node runs the same version of Bitbucket Data Center and the same set of plugins. However, future plans may introduce support for rolling upgrades and other features that introduce disparities, whether temporary or permanent, between cluster nodes.
For the best forward compatibility, plugin developers should not assume all nodes are running the same version of Bitbucket Data Center.
It is important to test your plugin in a cluster. When running Bitbucket Data Center via the Atlassian SDK (AMPS) a clustered license is used, so multiple instances started via the Atlassian SDK can be clustered.
Alternatively, you can install the following timebombed license, which is cluster-enabled. This license is only valid for 3 hours, after which you will be unable to push to Bitbucket Data Center without restarting the servers:
```
AAABAA0ODAoPeNptUE1Lw0AUvPdXLHhOybZGsbCgpiE2aJrSDZ6
8fdUuJJvy3ibYf2+M22LF63wxM1ev+C6yzgopRXi3iORiFopkq8
UslNcTdsD7adJD3YEzrVU7qBk9HBOO4BIcqm95EN4EUp7Y1jqoX
A4NqgdXA7MBKzSyQ/KSFzDWoQVbYfJ5MHQck4r5k+cHu+lROerw
MjQZnLVyY9Y9nMKnVdt43bOp0DLq4wHHAnpYtMrTyRapR1ot1WN
WboLNPNZBWmZRoKPb1FsLIGeRRpuH1vQB1vDPA+ctnhw6Q4zDDv
pdNO+aN6T1rmQkVnJ22evftUVH1R4Y/975BcASkF4wLQIVAJHuX
Zz1SsymUm2B5V7p7Pap48xzAhROyzM1l9a1OqcWzxseRNmnZ4Xq
mQ==X02d9
```
The two easiest ways to start a cluster of Bitbucket Data Center nodes are:

- using atlas-run to spin up a cluster which you can use to iteratively develop your plugin, deploy it and test its cluster safety
- running your integration tests in Maven against a cluster

Both methods require Maven configuration, and the same configuration can be used for both.
To configure an N-node cluster you must specify N <product/>
elements, one per node. Each node will need slightly
different configuration to ensure it can independently start up (e.g. so each listens on different ports if running
on the same machine), find other nodes in the cluster and finally join the cluster.
Specifically, each node will need:

- a unique HTTP port (supplied through the httpPort element)
- a unique SSH port (supplied through the plugin.ssh.port entry in the systemPropertyVariables element)

All nodes will need to share:
- the same BITBUCKET_SHARED_HOME directory (supplied through the bitbucket.shared.home entry in the systemPropertyVariables element)
- the same database (supplied through the jdbc.* entries in the systemPropertyVariables element)

Each node will also need a way to find other nodes. This is supplied through the hazelcast.network.tcpip
entry of
the systemPropertyVariables
element for TCP/IP and/or the hazelcast.network.multicast
entry in the systemPropertyVariables
element
for IP multicast. Without one of these settings set to true (both are false by default) a node will never look for other
nodes and thus never join a cluster.
The following Maven pom.xml configuration will start up a cluster of two Bitbucket Data Center nodes. Node 1 uses
port 7991 for HTTP and 7997 for SSH and node 2 uses port 7992 for HTTP and 7998 for SSH. Both nodes use TCP/IP to find
each other and use the default TCP/IP settings. They both use a BITBUCKET_SHARED_HOME
of
${project.basedir}/target/bitbucket-node-1/home/shared
and connect to the same MySQL database called bitbucket
. Also note that
because they are connecting to a MySQL database the MySQL JDBC driver jar must be made available to Bitbucket Data Center. This is
achieved through the libArtifact
entry for mysql:mysql-connector-java
.
```xml
<build>
    <plugins>
        <plugin>
            <groupId>com.atlassian.maven.plugins</groupId>
            <artifactId>bitbucket-maven-plugin</artifactId>
            <version>${amps.version}</version>
            <extensions>true</extensions>
            <configuration>
                <products>
                    <!-- Node 1 -->
                    <product>
                        <id>bitbucket</id>
                        <instanceId>bitbucket-node-1</instanceId>
                        <version>${bitbucket.version}</version>
                        <dataVersion>${bitbucket.data.version}</dataVersion>
                        <!-- override the HTTP port used for this node -->
                        <httpPort>7991</httpPort>
                        <systemPropertyVariables>
                            <bitbucket.shared.home>${project.basedir}/target/bitbucket-node-1/home/shared</bitbucket.shared.home>
                            <!-- override the SSH port used for this node -->
                            <plugin.ssh.port>7997</plugin.ssh.port>
                            <!-- override database settings so both nodes use a single database -->
                            <jdbc.driver>com.mysql.jdbc.Driver</jdbc.driver>
                            <jdbc.url>jdbc:mysql://localhost:3306/bitbucket?characterEncoding=utf8&amp;useUnicode=true&amp;sessionVariables=storage_engine%3DInnoDB</jdbc.url>
                            <jdbc.user>bitbucketuser</jdbc.user>
                            <jdbc.password>password</jdbc.password>
                            <!-- allow this node to find other nodes via TCP/IP -->
                            <hazelcast.network.tcpip>true</hazelcast.network.tcpip>
                            <!-- set to true if your load balancer supports sticky sessions -->
                            <hazelcast.http.stickysessions>false</hazelcast.http.stickysessions>
                        </systemPropertyVariables>
                        <libArtifacts>
                            <!-- ensure MySQL drivers are available -->
                            <libArtifact>
                                <groupId>mysql</groupId>
                                <artifactId>mysql-connector-java</artifactId>
                                <version>5.1.32</version>
                            </libArtifact>
                        </libArtifacts>
                    </product>
                    <!-- Node 2 -->
                    <product>
                        <id>bitbucket</id>
                        <instanceId>bitbucket-node-2</instanceId>
                        <version>${bitbucket.version}</version>
                        <dataVersion>${bitbucket.data.version}</dataVersion>
                        <!-- override the HTTP port used for this node -->
                        <httpPort>7992</httpPort>
                        <systemPropertyVariables>
                            <bitbucket.shared.home>${project.basedir}/target/bitbucket-node-1/home/shared</bitbucket.shared.home>
                            <!-- override the SSH port used for this node -->
                            <plugin.ssh.port>7998</plugin.ssh.port>
                            <!-- override database settings so both nodes use a single database -->
                            <jdbc.driver>com.mysql.jdbc.Driver</jdbc.driver>
                            <jdbc.url>jdbc:mysql://localhost:3306/bitbucket?characterEncoding=utf8&amp;useUnicode=true&amp;sessionVariables=storage_engine%3DInnoDB</jdbc.url>
                            <jdbc.user>bitbucketuser</jdbc.user>
                            <jdbc.password>password</jdbc.password>
                            <!-- allow cluster nodes to find each other over TCP/IP thus enabling clustering for this node -->
                            <hazelcast.network.tcpip>true</hazelcast.network.tcpip>
                            <!-- set to true if your load balancer supports sticky sessions -->
                            <hazelcast.http.stickysessions>false</hazelcast.http.stickysessions>
                        </systemPropertyVariables>
                        <libArtifacts>
                            <!-- ensure MySQL drivers are available -->
                            <libArtifact>
                                <groupId>mysql</groupId>
                                <artifactId>mysql-connector-java</artifactId>
                                <version>5.1.32</version>
                            </libArtifact>
                        </libArtifacts>
                    </product>
                </products>
                <testGroups>
                    <!-- tell AMPS / Maven which products ie nodes to run for the named testGroup 'clusterTestGroup' -->
                    <testGroup>
                        <id>clusterTestGroup</id>
                        <productIds>
                            <productId>bitbucket-node-1</productId>
                            <productId>bitbucket-node-2</productId>
                        </productIds>
                    </testGroup>
                </testGroups>
            </configuration>
        </plugin>
        ...
    </plugins>
</build>
...
<properties>
    <bitbucket.version>4.0.0</bitbucket.version>
    <bitbucket.data.version>4.0.0</bitbucket.data.version>
    <amps.version>6.1.0</amps.version>
</properties>
```
Warning:
amps.version should be set to the AMPS version used by the minimum version of Bitbucket Data Center your plugin supports. Use of dependencyManagement and <scope>import</scope> in your pom.xml as discussed earlier will only import dependencies, not properties or plugins, so this value will need to be manually synchronized with Bitbucket Data Center's as you change your minimum supported Bitbucket Data Center version.
To run the cluster configured above via Atlassian AMPS you would run:
```
atlas-run --testGroup clusterTestGroup
```
To run your integration tests in Maven against the cluster configured above, the following would normally suffice:
```
atlas-mvn clean install
```
For both methods above you will almost always want a load-balancer running to balance HTTP and SSH traffic between the nodes (and so that you can use a single port per protocol to communicate with the cluster). Taking the above configuration as an example you would want your load-balancer to balance HTTP traffic on port 7990 (standalone Bitbucket Data Center's HTTP default) to ports 7991 and 7992. For SSH traffic you would want it to balance SSH traffic on port 7999 (standalone Bitbucket Data Center's SSH default) to ports 7997 and 7998.
Atlassian provides a simple Maven plugin which you can configure and run as a load balancer. Again taking the above configuration as an example, you would add the following to your Maven POM:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>com.atlassian.maven.plugins</groupId>
            <artifactId>load-balancer-maven-plugin</artifactId>
            <version>1.1</version>
            <executions>
                <execution>
                    <id>start-load-balancer</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>stop-load-balancer</id>
                    <phase>post-integration-test</phase>
                    <goals>
                        <goal>stop</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <balancers>
                    <balancer>
                        <port>7990</port>
                        <targets>
                            <target>
                                <port>7991</port>
                            </target>
                            <target>
                                <port>7992</port>
                            </target>
                        </targets>
                    </balancer>
                    <balancer>
                        <port>7999</port>
                        <targets>
                            <target>
                                <port>7997</port>
                            </target>
                            <target>
                                <port>7998</port>
                            </target>
                        </targets>
                    </balancer>
                </balancers>
            </configuration>
        </plugin>
    </plugins>
</build>
```
When you run your integration tests from Maven, before starting the cluster, this plugin will start a load balancer as configured and stop it once your tests have finished and the cluster has been shut down.
If you start your Bitbucket Data Center cluster via atlas-run --testGroup clusterTestGroup
, you can run the load balancer separately via:
```
atlas-mvn com.atlassian.maven.plugins:load-balancer-maven-plugin:1.1:run
```
Bitbucket Data Center instances may also have one or more mirror node(s) in addition to the primary ("upstream") instance. Plugins cannot be installed on mirrors directly, so for the most part plugins do not even need to be aware of their existence.
But there are a few things that plugins need to be aware of to ensure that they play nicely with a Bitbucket Data
Center instance that has mirrors. In particular, plugins should avoid modifying repository state under
BITBUCKET_SHARED_HOME/data/repositories
directly; this is not even a good idea in standalone instances due
to the state of Pull Requests, SCM cache, and other parts of the system which may become stale as a result. In an instance
with mirror(s), though, if plugins modify repository state directly the mirrors also won't see the modification to the
repository state on the upstream and will become stale. If a plugin absolutely must modify repository state in the file
system, it should publish an appropriate event that implements RepositoryRefsChangedEvent
.
```java
Repository repository = ...;
Collection<RefChange> changes = ...;
ApplicationUser myUser = ...;

securityService.impersonating(myUser, "Fetch mirror").call(new UncheckedOperation<Void>() {

    @Override
    public Void perform() {
        eventPublisher.publish(new MyCustomRefsChangedEvent(this, repository, changes));
        return null;
    }
});
```
This will tell mirrors (as well as other consumers) to synchronize their state with the latest changes. There is a
convenient AbstractRepositoryRefsChangedEvent
class that plugins may extend to provide their own specific
implementations of RepositoryRefsChangedEvent
.
Bitbucket Data Center instances may also be replicated at the database and file system level to a standby instance
configured with a disaster.recovery
option, that can take over from the primary instance in a disaster scenario.
Plugins installed on the primary instance will be automatically replicated to the standby instance, so again in most
cases plugins do not even need to be aware that all of this may be happening.
But there are a few areas where plugins may need to be aware of disaster recovery and ensure that they work seamlessly on customers' standby instances. Because a standby instance's database and home directory are replicated live from the primary, after a disaster recovery failover event they may be slightly inconsistent with each other. Bitbucket's own core functionality has, for the most part, been written to tolerate such inconsistencies rather than fail with errors; plugins not written with the same resilience may need work to become more tolerant of file system state and database state being updated out of order, and of "impossible" states that cannot occur on a normally running instance.
An example would be a plugin that keeps Git commit hashes in a database table: normally after reading these off the
file system their continued existence can be assumed and used as input to subsequent Git commands. But if the file
system replication is slightly behind the database, a GitCommand
that uses a commit hash taken from a database table
may throw NoSuchCommitException
, so callers should be robust to this scenario.
(Note that such inconsistencies can already occur even in standalone instances due to power failures, restored backups that are slightly out of sync, and so on, so it's a good idea for plugin code to build in such resilience anyway.)
Standby instances generally only have replicas of the primary instance's database and shared home directory, so any
state maintained by a plugin under BITBUCKET_SHARED_HOME
should be available (though perhaps not with 100%
consistency) on standby instances, but anything outside these locations probably won't be unless the customer has taken
special care to replicate it. If a plugin maintains external indexes or other state in such locations it may make sense
to hook into the DisasterRecoveryTriggeredEvent
that is fired on a disaster recovery failover event to check or
rebuild those indexes, if they are not automatically self-healing.
When you list your first Data Center compatible plugin version in
the Marketplace, modify
your atlassian-plugin.xml
descriptor file. This tells the Marketplace
and UPM that your plugin is Data Center compatible. Add the following
parameters inside the plugin-info
section:
```xml
<param name="atlassian-data-center-status">compatible</param>
<param name="atlassian-data-center-compatible">true</param>
```
- The atlassian-data-center-status parameter indicates to Marketplace and UPM that your app has been submitted for technical review according to these Data Center requirements.
- The atlassian-data-center-compatible parameter was previously used to indicate Data Center compatibility and should be included for backward compatibility with older UPM versions.

Here's an example of a generic plugin-info block with these parameters:
```xml
<plugin-info>
    <description>${project.description}</description>
    <version>${project.version}</version>
    <vendor name="${project.organization.name}" url="${project.organization.url}" />
    <param name="atlassian-data-center-status">compatible</param>
    <param name="atlassian-data-center-compatible">true</param>
</plugin-info>
```