Last updated: Oct 12, 2020

Data Center App Performance Toolkit User Guide For Confluence

This document walks you through the process of testing your app on Confluence using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.

In this document, we cover the use of the Data Center App Performance Toolkit in two types of environments:

Development environment: Confluence Data Center environment for a test run of Data Center App Performance Toolkit and development of app-specific actions. We recommend you use the AWS Quick Start for Confluence Data Center with the parameters prescribed here.

  1. Set up a development environment Confluence Data Center on AWS.
  2. Create a dataset for the development environment.
  3. Run toolkit on the development environment locally.
  4. Develop and test app-specific actions locally.

Enterprise-scale environment: Confluence Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the AWS Quick Start for Confluence Data Center with the parameters prescribed below. These parameters provision larger, more powerful infrastructure for your Confluence Data Center.

  1. Set up an enterprise-scale environment Confluence Data Center on AWS.
  2. Load an enterprise-scale dataset on your Confluence Data Center deployment.
  3. Set up an execution environment for the toolkit.
  4. Run the test scenarios from the execution environment against your enterprise-scale Confluence Data Center.

Development environment

Running the tests in a development environment helps familiarize you with the toolkit. It also provides a lightweight, less expensive environment for development. Once you're ready to generate test results for the Marketplace Data Center Apps Approval process, run the toolkit in an enterprise-scale environment.

1. Setting up Confluence Data Center development environment

We recommend that you set up this development environment using the AWS Quick Start for Confluence Data Center. All the instructions on this page are optimized for AWS. If you already have an existing Confluence Data Center environment, you can use that instead (if so, skip to Create a dataset for the development environment).

Using the AWS Quick Start for Confluence

If you are a new user, perform an end-to-end deployment. This involves deploying Confluence into a new ASI.

If you have already deployed the ASI separately by using the ASI Quick Start or by deploying another Atlassian product (Jira, Bitbucket, or Confluence Data Center), deploy Confluence into your existing ASI.

You are responsible for the cost of AWS services used while running this Quick Start reference deployment. There is no additional cost for using the Quick Start itself. See Amazon EC2 pricing for more detail.

To reduce costs, we recommend keeping your deployment up and running only during performance runs.

AWS cost estimation for the development environment

The AWS Confluence Data Center development environment infrastructure costs about $10-15 per working week, depending on factors such as region, instance type, database deployment type, and others.

Quick Start parameters for development environment

All important parameters are listed and described in this section. For all remaining parameters, we recommend using the Quick Start defaults.

Confluence setup

  • Collaborative editing mode: synchrony-local
  • Confluence Version: 6.13.13, 7.0.5, or 7.4.4

Cluster nodes

  • Cluster node instance type: t3.medium (we recommend this instance type for its good balance between price and performance in testing environments)
  • Maximum number of cluster nodes: 1
  • Minimum number of cluster nodes: 1
  • Cluster node instance volume size: 50

Database

  • Database instance class: db.t2.medium
  • RDS Provisioned IOPS: 1000
  • Master (admin) password: Password1!
  • Enable RDS Multi-AZ deployment: false
  • Application user database password: Password1!
  • Database storage: 200

Networking (for new ASI)

  • Trusted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Availability Zones: select two availability zones in your region
  • Permitted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Make instance internet facing: True
  • Key Name: the EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info.

Networking (for existing ASI)

  • Make instance internet facing: True
  • Permitted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Key Name: the EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info.

Running the setup wizard

After successfully deploying Confluence Data Center in AWS, you'll need to configure it:

  1. In the AWS console, go to Services > CloudFormation > Stack > Stack details > Select your stack.
  2. On the Outputs tab, copy the value of the LoadBalancerURL key.
  3. Open LoadBalancerURL in your browser. This will take you to the Confluence setup wizard.
  4. On the Get apps page, do not select additional apps, just click Next.
  5. On the next page, populate the Your License Key field by either:
    • Using your existing license, or
    • Generating an evaluation license, or
    • Contacting Atlassian to be provided two time-bomb licenses for testing. Ask for them in your DCHELP ticket. Click Next.
  6. On the Load Content page, click Empty Site.
  7. On the Configure User Management page, click Manage users and groups within Confluence.
  8. On the Configure System Administrator Account page, populate the following fields:
    • Username: admin (recommended)
    • Name: admin (recommended)
    • Email Address: email address of the admin user
    • Password: admin (recommended)
    • Confirm Password: admin (recommended). Click Next.
  9. On the Setup Successful page, click Start.
  10. After going through the welcome setup, enter any Space name to create an initial space and click Continue.
  11. Enter the first page title and click Publish.

2. Generate dataset for development environment

After creating the development environment Confluence Data Center, generate a test dataset to run the Data Center App Performance Toolkit against:

  • 1 space with 1-5 pages and 1-5 blog posts.
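
You can create this dataset manually through the UI, or script it. The following is a minimal sketch that seeds the dataset over the Confluence REST API; the base URL, space key, and admin/admin credentials are illustrative assumptions, not toolkit requirements.

import requests

BASE_URL = "http://localhost:1990/confluence"  # assumption: your dev instance URL
AUTH = ("admin", "admin")                      # assumption: the wizard-created admin user

# Create one test space
resp = requests.post(f"{BASE_URL}/rest/api/space",
                     json={"key": "DCAPT", "name": "DCAPT Dev Space"},
                     auth=AUTH)
resp.raise_for_status()

# Create a few pages and blog posts in the new space
for i in range(1, 4):
    for content_type in ("page", "blogpost"):
        resp = requests.post(
            f"{BASE_URL}/rest/api/content",
            json={
                "type": content_type,
                "title": f"Test {content_type} {i}",
                "space": {"key": "DCAPT"},
                "body": {"storage": {"value": f"<p>Test {content_type} {i}</p>",
                                     "representation": "storage"}},
            },
            auth=AUTH,
        )
        resp.raise_for_status()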

3. Run toolkit on the development environment locally

  1. Clone Data Center App Performance Toolkit locally.
  2. Follow the README.md instructions to set up the toolkit locally.
  3. Navigate to dc-app-performance-toolkit/app folder.
  4. Open the confluence.yml file and fill in the following variables:

    • application_hostname: your_dc_confluence_instance_hostname without protocol.
    • application_protocol: http or https.
    • application_port: 80 for HTTP, 443 for HTTPS, or your instance-specific port (e.g. 8080, 1990).
    • secure: True or False. Default value is True. Set False to allow insecure connections, e.g. when using self-signed SSL certificate.
    • application_postfix: empty by default; e.g. /confluence for a URL like http://localhost:1990/confluence.
    • admin_login: admin user username.
    • admin_password: admin user password.
    • load_executor: executor for load tests. Valid options are jmeter (default) or locust.
    • concurrency: 2 - number of concurrent JMeter/Locust users.
    • test_duration: 5m - duration of the performance run.
    • ramp-up: 5s - amount of time it will take JMeter or Locust to add all test users to test execution.
    • total_actions_per_hour: 2000 - number of total JMeter/Locust actions per hour.
    • WEBDRIVER_VISIBLE: visibility of the Chrome browser during Selenium execution (False by default).
  5. Run bzt.

    bzt confluence.yml
  6. Review the resulting table in the console log. All JMeter/Locust and Selenium actions should have a 100% success rate.
    If some actions do not have a 100% success rate, refer to the following logs in the dc-app-performance-toolkit/app/results/confluence/YY-MM-DD-hh-mm-ss folder:

    • results_summary.log: detailed run summary
    • results.csv: aggregated .csv file with all actions and timings
    • bzt.log: logs of the Taurus tool execution
    • jmeter.*: logs of the JMeter tool execution
    • locust.*: logs of the Locust tool execution (if you use Locust as load_executor in confluence.yml)
    • pytest.*: logs of Pytest-Selenium execution

Do not proceed to the next step until all actions have a 100% success rate. If the log analysis does not help, ask for support.


4. Develop and test app-specific actions locally

Data Center App Performance Toolkit has its own set of default test actions for Confluence Data Center: JMeter/Locust and Selenium for load and UI tests respectively.

An app-specific action is an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application, not on covering all possible functionality of your app. For example, an application setup screen or other one-time use cases are out of scope for performance testing.

  1. Define the main use cases of your app. Usually, an app has one or two main use cases.
  2. If your app adds new UI elements to Confluence Data Center, a Selenium app-specific action has to be developed.
  3. If your app introduces a new endpoint or extensively calls the existing Confluence Data Center API, JMeter/Locust app-specific actions have to be developed.
    JMeter and Locust actions are interchangeable, so you can select whichever tool you prefer.

We strongly recommend developing your app-specific actions in the development environment to reduce AWS infrastructure costs.

Custom dataset

You can filter your own app-specific pages/blog posts for your app-specific actions.

  1. Create app-specific pages/blog posts that have a specific anchor in the title, e.g. an AppPage anchor with page titles like AppPage1, AppPage2, AppPage3.
  2. Go to the search page of your Confluence Data Center - CONFLUENCE_URL/dosearchsite.action?queryString= (Confluence versions 6.X and below) or just click the search field in the UI (Confluence versions 7.X and higher).
  3. Write a CQL query that filters just your pages or blog posts from step 1, e.g. title ~ 'AppPage*'.
  4. Edit Confluence configuration file dc-app-performance-toolkit/app/confluence.yml:
    • custom_dataset_query: CQL from step 3.

The next time you run the toolkit, custom dataset pages will be stored in dc-app-performance-toolkit/app/datasets/confluence/custom_pages.csv with the columns page_id and space_key.
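
To double-check the CQL outside the UI, you can also query the Confluence search REST API. The following is a minimal sketch; the base URL and credentials are illustrative assumptions.

import requests

BASE_URL = "http://localhost:1990/confluence"  # assumption: your instance URL
cql = "title ~ 'AppPage*'"                     # the CQL from step 3

resp = requests.get(f"{BASE_URL}/rest/api/content/search",
                    params={"cql": cql, "limit": 25},
                    auth=("admin", "admin"))
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["id"], result["title"])  # every hit should be one of your app-specific pages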

Example of app-specific Selenium action development with custom dataset

Suppose you develop an app that adds UI elements to Confluence pages or blog posts. In this case, you should develop a Selenium app-specific action:

  1. Create app-specific Confluence pages with the AppPage anchor in the title: AppPage1, AppPage2, AppPage3, etc.
  2. Go to the search page of your Confluence Data Center - CONFLUENCE_URL/dosearchsite.action?queryString= (Confluence versions 6.X and below) or just click the search field in the UI (Confluence versions 7.X and higher) and check that the CQL is correct: title ~ 'AppPage*'.
  3. Edit dc-app-performance-toolkit/app/confluence.yml configuration file and set custom_dataset_query: "title ~ 'AppPage*'".
  4. Extend the example app-specific action in dc-app-performance-toolkit/app/extension/confluence/extension_ui.py.
    The test has to open a page or blog post with the app-specific UI element and measure the time to load this app-specific page or blog post.
from selenium.webdriver.common.by import By
from selenium_ui.conftest import print_timing
from util.conf import CONFLUENCE_SETTINGS

from selenium_ui.base_page import BasePage


def app_specific_action(webdriver, datasets):
    page = BasePage(webdriver)
    app_specific_page = datasets['custom_pages']
    app_specific_page_id = app_specific_page[0]


    @print_timing("selenium_app_custom_action")
    def measure():

        @print_timing("selenium_app_custom_action:view_page")
        def sub_measure():
            page.go_to_url(f"{CONFLUENCE_SETTINGS.server_url}/pages/viewpage.action?pageId={app_specific_page_id}")
            page.wait_until_visible((By.ID, "title-text"))  # Wait for title field visible
            page.wait_until_visible((By.ID, "ID_OF_YOUR_APP_SPECIFIC_UI_ELEMENT"))  # Wait for you app-specific UI element by ID selector
        sub_measure()
    measure()
  5. In dc-app-performance-toolkit/app/selenium_ui/confluence_ui.py, review and uncomment the following block of code so that the newly created app-specific action is executed:
# def test_1_selenium_custom_action(confluence_webdriver, confluence_datasets, confluence_screen_shots):
#     extension_ui.app_specific_action(confluence_webdriver, confluence_datasets)
  6. Run the toolkit with the bzt confluence.yml command to ensure that all Selenium actions, including app_specific_action, are successful.

Example of app-specific Locust/JMeter action development

Suppose you develop an app that introduces new GET and POST endpoints in Confluence Data Center. In this case, you should develop a Locust or JMeter app-specific action.

Locust app-specific action development example

  1. Extend the example app-specific action in dc-app-performance-toolkit/app/extension/confluence/extension_locust.py so that the test calls your endpoint with a GET request, parses the response, uses that data to call another endpoint with a POST request, and measures the response time.
import re
from locustio.common_utils import init_logger, confluence_measure

logger = init_logger(app_type='confluence')


@confluence_measure
def app_specific_action(locust):
    r = locust.get('/app/get_endpoint')  # call app-specific GET endpoint
    content = r.content.decode('utf-8')   # decode response content

    token_pattern_example = '"token":"(.+?)"'
    id_pattern_example = '"id":"(.+?)"'
    token = re.findall(token_pattern_example, content)  # get TOKEN from response using regexp
    id = re.findall(id_pattern_example, content)    # get ID from response using regexp

    logger.locust_info(f'token: {token}, id: {id}')  # log information for debugging when verbose is true in confluence.yml
    if 'assertion string' not in content:
        logger.error(f"'assertion string' was not found in {content}")
    assert 'assertion string' in content  # assert specific string in response content

    body = {"id": id, "token": token}  # include parsed variables to POST request body
    headers = {'content-type': 'application/json'}
    r = locust.post('/app/post_endpoint', body, headers)  # call app-specific POST endpoint
    content = r.content.decode('utf-8')
    if 'assertion string after successful POST request' not in content:
        logger.error(f"'assertion string after successful POST request' was not found in {content}")
    assert 'assertion string after successful POST request' in content  # assertion after POST request
  2. In dc-app-performance-toolkit/app/confluence.yml, set load_executor: locust to make Locust the load executor.
  3. Locust uses action percentages as relative weights, so if view_page: 54 and standalone_extension: 108, then standalone_extension will be called twice as often.
    Set the standalone_extension weight according to the expected frequency of your app's use case compared with the other base actions (see the sketch after this list).
  4. Run the toolkit with the bzt confluence.yml command to ensure that all Locust actions, including app_specific_action, are successful.
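
To illustrate the weight arithmetic, here is a minimal sketch using the hypothetical values from the step above (they are not toolkit defaults):

# Hypothetical action weights as they would appear in confluence.yml
weights = {"view_page": 54, "standalone_extension": 108}

total = sum(weights.values())
for action, weight in weights.items():
    # Locust picks actions proportionally to their weights:
    # 108 / (54 + 108) = 2/3 of calls go to standalone_extension,
    # i.e. twice as many as view_page
    print(f"{action}: {weight / total:.0%} of calls")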

JMeter app-specific action development example

  1. Navigate to the dc-app-performance-toolkit/app folder and launch JMeter with ~/.bzt/jmeter-taurus/5.2.1/bin/jmeter (it is important to launch it from the app folder), then open dc-app-performance-toolkit/app/jmeter/confluence.jmx.
  2. Open Confluence thread group > actions per login and navigate to standalone_extension.
  3. Add a GET HTTP Request: right-click standalone_extension > Add > Sampler > HTTP Request, choose the GET method, and set the endpoint in Path.
  4. Add a Regular Expression Extractor: right-click the newly created HTTP Request > Add > Post Processors > Regular Expression Extractor.
  5. Add a Response Assertion: right-click the newly created HTTP Request > Add > Assertions > Response Assertion and add an assertion of the Contains, Matches, Equals, etc. type.
  6. Add a POST HTTP Request: right-click standalone_extension > Add > Sampler > HTTP Request, choose the POST method, set the endpoint in Path, and add Parameters or Body Data if needed.
  7. Navigate to Global Variables and modify the default values of the hostname, port, protocol, and postfix variables.
  8. Navigate to load profile and set the perc_standalone_extension default percentage to 100.
  9. Right-click View Results Tree and enable this controller.
  10. Click the Start button and make sure that login_and_view_dashboard and standalone_extension are successful.
  11. Right-click View Results Tree and disable this controller.
  12. Click the Save button.
  13. To make standalone_extension executable during the toolkit run, edit dc-app-performance-toolkit/app/confluence.yml and set the execution percentage of standalone_extension according to your use case frequency.
  14. Run the toolkit to ensure that all JMeter actions, including standalone_extension, are successful.

Using JMeter variables from the base script

Use or access the following variables in your app-specific action if needed; they are inherited from the base script.

  • ${blog_id} - blog post id being viewed or modified (e.g. 23766699)
  • ${blog_space_key} - blog space key (e.g. PFSEK)
  • ${page_id} - page id being viewed or modified (e.g. 360451)
  • ${space_key} - page space key (e.g. TEST)
  • ${file_path} - path of file to upload (e.g. datasets/confluence/static-content/upload/test5.jpg)
  • ${file_type} - type of the file (e.g. image/jpeg)
  • ${file_name} - name of the file (e.g. test5.jpg)
  • ${username} - the logged in username (e.g. admin)
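
For example, setting the Path of a new GET HTTP Request to /rest/api/content/${page_id} (a hypothetical endpoint choice, not part of the base script) would exercise your app against the same page the base script is currently working with.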

App-specific actions are required. Do not proceed to the next step until you have completed app-specific action development and obtained successful results from a toolkit run.


Enterprise-scale environment

After adding your custom app-specific actions, you should now be ready to run the required tests for the Marketplace Data Center Apps Approval process. To do this, you'll need an enterprise-scale environment.

5. Setting up Confluence Data Center enterprise-scale environment

We recommend that you use the AWS Quick Start for Confluence Data Center to deploy a Confluence Data Center testing environment. This Quick Start will allow you to deploy Confluence Data Center with a new Atlassian Standard Infrastructure (ASI) or into an existing one.

The ASI is a Virtual Private Cloud (VPC) consisting of subnets, NAT gateways, security groups, bastion hosts, and other infrastructure components required by all Atlassian applications; the Quick Start then deploys Confluence into this VPC. Deploying Confluence with a new ASI takes around 50 minutes; with an existing one, around 30 minutes.

Using the AWS Quick Start for Confluence

If you are a new user, perform an end-to-end deployment. This involves deploying Confluence into a new ASI.

If you have already deployed the ASI separately by using the ASI Quick Start or by deploying another Atlassian product (Jira, Bitbucket, or Confluence Data Center), deploy Confluence into your existing ASI.

You are responsible for the cost of the AWS services used while running this Quick Start reference deployment. There is no additional price for using this Quick Start. For more information, go to aws.amazon.com/pricing.

To reduce costs, we recommend keeping your deployment up and running only during performance runs.

AWS cost estimation

AWS Pricing Calculator provides an estimate of usage charges for AWS services based on certain information you provide. Monthly charges will be based on your actual usage of AWS services, and may vary from the estimates the Calculator has provided.

The prices below are approximate and may vary depending on factors such as region, instance type, database deployment type, etc.

Estimated hourly cost ($) by stack:

  • One Node Confluence DC: 1.2 - 1.7
  • Two Nodes Confluence DC: 2 - 3
  • Four Nodes Confluence DC: 3.6 - 5.6

Stop Confluence cluster nodes

To reduce AWS infrastructure costs, you can stop Confluence nodes when the cluster is idle.
Nodes can be stopped by using Suspending and Resuming Scaling Processes.

To stop one node within the Confluence cluster, follow these instructions:

  1. Go to EC2 Auto Scaling Groups and open the group to which the node you want to stop belongs.
  2. Press Edit (if you have the New EC2 experience UI mode enabled, press Edit on Advanced configuration) and add HealthCheck to the Suspended Processes. Amazon EC2 Auto Scaling then stops marking instances unhealthy as a result of EC2 and Elastic Load Balancing health checks.
  3. Go to Instances and stop the Confluence node.

To return a Confluence node to a working state, follow these instructions:

  1. Go to Instances and start the Confluence node, then wait a few minutes for the node to become responsive.
  2. Go to EC2 Auto Scaling Groups and open the group to which the node belongs.
  3. Press Edit (if you have the New EC2 experience UI mode enabled, press Edit on Advanced configuration) and remove HealthCheck from the Suspended Processes of the Auto Scaling Group.

Quick Start parameters

All important parameters are listed and described in this section. For all remaining parameters, we recommend using the Quick Start defaults.

Confluence setup

  • Collaborative editing mode: synchrony-local
  • Confluence Version: 6.13.13, 7.0.5, or 7.4.4

The Data Center App Performance Toolkit officially supports the Confluence versions listed above: 6.13.13, 7.0.5, and 7.4.4.

Cluster nodes

  • Cluster node instance type: m5.4xlarge
  • Maximum number of cluster nodes: 1
  • Minimum number of cluster nodes: 1
  • Cluster node instance volume size: 200

We recommend m5.4xlarge to strike a balance between cost and the hardware we see in the field for our enterprise customers. More information can be found in our public recommendations.

The Data Center App Performance Toolkit framework is also set up for concurrency we expect on this instance size. As such, underprovisioning will likely show a larger performance impact than expected.

Database

  • Database instance class: db.m5.xlarge
  • RDS Provisioned IOPS: 1000
  • Master (admin) password: Password1!
  • Enable RDS Multi-AZ deployment: false
  • Application user database password: Password1!
  • Database storage: 200

The Master (admin) password will be used later when restoring the SQL database dataset. If the password is not set to the default value, you'll need to change the DB_PASS value manually in the database restore script (see Preloading your Confluence deployment with an enterprise-scale dataset below).

Networking (for new ASI)

  • Trusted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Availability Zones: select two availability zones in your region
  • Permitted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Make instance internet facing: true
  • Key Name: the EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info.

Networking (for existing ASI)

  • Make instance internet facing: true
  • Permitted IP range: 0.0.0.0/0 (for public access) or your own trusted IP range
  • Key Name: the EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info.

Running the setup wizard

After successfully deploying Confluence Data Center in AWS, you'll need to configure it:

  1. In the AWS console, go to Services > CloudFormation > Stack > Stack details > Select your stack.
  2. On the Outputs tab, copy the value of the LoadBalancerURL key.
  3. Open LoadBalancerURL in your browser. This will take you to the Confluence setup wizard.
  4. On the Get apps page, do not select additional apps, just click Next.
  5. On the next page, populate the Your License Key field by either:
    • Using your existing license, or
    • Generating an evaluation license, or
    • Contacting Atlassian to be provided two time-bomb licenses for testing. Ask for them in your DCHELP ticket. Click Next.
  6. On the Load Content page, click Empty Site.
  7. On the Configure User Management page, click Manage users and groups within Confluence.
  8. On the Configure System Administrator Account page, populate the following fields:
    • Username: admin (recommended)
    • Name: admin (recommended)
    • Email Address: email address of the admin user
    • Password: admin (recommended)
    • Confirm Password: admin (recommended). Click Next.
  9. On the Setup Successful page, click Start.
  10. After going through the welcome setup, enter any Space name to create an initial space and click Continue.
  11. Enter the first page title and click Publish.

After Preloading your Confluence deployment with an enterprise-scale dataset, the admin user will have admin/admin credentials.

6. Preloading your Confluence deployment with an enterprise-scale dataset

Data dimensions and values for an enterprise-scale dataset are listed below.

  • Pages: ~900 000
  • Blog posts: ~100 000
  • Attachments: ~2 300 000
  • Comments: ~6 000 000
  • Spaces: ~5 000
  • Users: ~5 000

All the datasets use the standard admin/admin credentials.

Pre-loading the dataset is a three-step process:

  1. Importing the main dataset. To help you out, we provide an enterprise-scale dataset you can import via the populate_db.sh script.
  2. Restoring attachments. We also provide attachments, which you can pre-load via an upload_attachments.sh script.
  3. Re-indexing Confluence Data Center. For more information, go to Re-indexing Confluence.

The following subsections explain each step in greater detail.

Importing the main dataset

You can load this dataset directly into the database (via a populate_db.sh script).

Loading the dataset via populate_db.sh script (~90 min)

We recommend doing this via the CLI.

To populate the database with SQL:

  1. In the AWS console, go to Services > EC2 > Instances.
  2. On the Description tab, do the following:
    • Copy the Public IP of the Bastion instance.
    • Copy the Private IP of the Confluence node instance.
  3. Using SSH, connect to the Confluence node via the Bastion instance:

    For Windows, use PuTTY to connect to the Confluence node over SSH. For Linux or macOS:

    ssh-add path_to_your_private_key_pem
    export BASTION_IP=bastion_instance_public_ip
    export NODE_IP=node_private_ip
    export SSH_OPTS='-o ServerAliveInterval=60 -o ServerAliveCountMax=30'
    ssh ${SSH_OPTS} -o "proxycommand ssh -W %h:%p ${SSH_OPTS} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}

    For more information, go to Connecting your nodes over SSH.
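
    The proxycommand option uses ssh -W to tunnel the connection through the Bastion host, which is why the node's private IP is reachable even though the node sits in a private subnet.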

  4. Download the populate_db.sh script and make it executable:

    wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/confluence/populate_db.sh && chmod +x populate_db.sh
  5. Review the following Variables section of the script:

    INSTALL_PSQL_CMD="amazon-linux-extras install -y postgresql10"
    DB_CONFIG="/var/atlassian/application-data/confluence/confluence.cfg.xml"
    CONFLUENCE_CURRENT_DIR="/opt/atlassian/confluence/current"
    CONFLUENCE_DB_NAME="confluence"
    CONFLUENCE_DB_USER="postgres"
    CONFLUENCE_DB_PASS="Password1!"
    CONFLUENCE_VERSION_FILE="/media/atl/confluence/shared-home/confluence.version"
    DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/confluence"
  6. Run the script:

    ./populate_db.sh | tee -a populate_db.log

Do not close or interrupt the session. It will take some time to restore the SQL database. When the restore is finished, the admin user will have admin/admin credentials.

In case of a failure, check the Variables section and run the script one more time.

Restoring attachments (~3 hours)

After Importing the main dataset, you'll now have to pre-load an enterprise-scale set of attachments.

  1. Using SSH, connect to the Confluence node via the Bastion instance:

    For Windows, use PuTTY to connect to the Confluence node over SSH. For Linux or macOS:

    ssh-add path_to_your_private_key_pem
    export BASTION_IP=bastion_instance_public_ip
    export NODE_IP=node_private_ip
    export SSH_OPTS='-o ServerAliveInterval=60 -o ServerAliveCountMax=30'
    ssh ${SSH_OPTS} -o "proxycommand ssh -W %h:%p ${SSH_OPTS} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}

    For more information, go to Connecting your nodes over SSH.

  2. Download the upload_attachments.sh script and make it executable:

    wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/confluence/upload_attachments.sh && chmod +x upload_attachments.sh
  3. Review the following Variables section of the script:

    DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/confluence"
    ATTACHMENTS_TAR="attachments.tar.gz"
    ATTACHMENTS_DIR="attachments"
    TMP_DIR="/tmp"
    EFS_DIR="/media/atl/confluence/shared-home"
  4. Run the script:

    ./upload_attachments.sh | tee -a upload_attachments.log

Do not close or interrupt the session. It will take some time to upload the attachments to Elastic File System (EFS).

Re-indexing Confluence Data Center (~2-4 hours)

For more information, go to Re-indexing Confluence.

  1. Log in as a user with the Confluence System Administrators global permission.
  2. Go to cog icon > General Configuration > Content Indexing.
  3. Click Rebuild and wait until re-indexing is completed.

Confluence will be unavailable for some time during the re-indexing process.

Create Index Snapshot (~30 min)

For more information, go to Administer your Data Center search index.

  1. Log in as a user with the Confluence System Administrators global permission.
  2. Create any new page with random content (without a new page, the index snapshot job will not be triggered).
  3. Go to cog icon > General Configuration > Scheduled Jobs.
  4. Find Clean Journal Entries job and click Run.
  5. Make sure that Confluence index snapshot was created. To do that, use SSH to connect to the Confluence node via Bastion (where NODE_IP is the IP of the node):

    ssh-add path_to_your_private_key_pem
    export BASTION_IP=bastion_instance_public_ip
    export NODE_IP=node_private_ip
    export SSH_OPTS='-o ServerAliveInterval=60 -o ServerAliveCountMax=30'
    ssh ${SSH_OPTS} -o "proxycommand ssh -W %h:%p ${SSH_OPTS} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}
  6. Download the index-snapshot.sh file. Then, make it executable and run it:

    wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/confluence/index-snapshot.sh && chmod +x index-snapshot.sh
    ./index-snapshot.sh | tee -a index-snapshot.log

    Index snapshot creation takes about 20-30 minutes. When the index snapshot is successfully created, the following will be displayed in the console output:

    Snapshot was created successfully.

7. Setting up an execution environment

To generate performance results suitable for the Marketplace approval process, use a dedicated execution environment. This is a separate AWS EC2 instance from which to run the toolkit. Running the toolkit from a dedicated instance rather than from a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.

  1. Launch an AWS EC2 instance. Instance type: c5.2xlarge; OS: select Ubuntu Server 18.04 LTS from Quick Start.
  2. Connect to the instance using SSH or the AWS Systems Manager Session Manager.

    ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
  3. Install Docker, then set it up so that you can manage Docker as a non-root user.

  4. Go to GitHub and create a fork of dc-app-performance-toolkit.
  5. Clone the fork locally, then edit the confluence.yml configuration file. Set enterprise-scale Confluence Data Center parameters:
    application_hostname: test_confluence_instance.atlassian.com   # Confluence DC hostname without protocol and port e.g. test-confluence.atlassian.com or localhost
    application_protocol: http      # http or https
    application_port: 80            # 80, 443, 8080, 2990, etc
    secure: True                    # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
    application_postfix:            # e.g. /confluence in case of a URL like http://localhost:1990/confluence
    admin_login: admin
    admin_password: admin
    load_executor: jmeter           # jmeter and locust are supported. jmeter by default.
    concurrency: 200                # number of concurrent virtual users for jmeter or locust scenario
    test_duration: 45m
    ramp-up: 5m                     # time to spin all concurrent users
    total_actions_per_hour: 54500   # number of total JMeter/Locust actions per hour.
  6. Push your changes to the forked repository.
  7. Connect to the AWS EC2 instance and clone the forked repository.

At this stage, app-specific actions are not needed yet. Use code from the master branch with your confluence.yml changes.
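
As a sanity check on the values above: with concurrency: 200 and ramp-up: 5m, JMeter adds roughly one virtual user every 1.5 seconds (300 s / 200 users), and total_actions_per_hour: 54500 is then spread across those users for the 45-minute run.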

You'll need to run the toolkit for each test scenario in the next section.


8. Running the test scenarios from execution environment against enterprise-scale Confluence Data Center

Using the Data Center App Performance Toolkit for performance and scale testing of your Data Center app involves two test scenarios: performance regression and scalability testing.

Each scenario involves multiple test runs. The following subsections explain both in greater detail.

Scenario 1: Performance regression

This scenario helps to identify basic performance issues without the need to spin up a multi-node Confluence DC. Use it to make sure the app does not have any performance impact when it is not exercised.

Run 1 (~50 min)

To receive performance baseline results without an app installed:

  1. Use SSH to connect to the execution environment.
  2. Run the toolkit with docker:

    cd dc-app-performance-toolkit
    docker run --shm-size=4g  -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt confluence.yml
  3. View the following main results of the run in the dc-app-performance-toolkit/app/results/confluence/YY-MM-DD-hh-mm-ss folder:

    • results_summary.log: detailed run summary
    • results.csv: aggregated .csv file with all actions and timings
    • bzt.log: logs of the Taurus tool execution
    • jmeter.*: logs of the JMeter tool execution
    • pytest.*: logs of Pytest-Selenium execution

Review the results_summary.log file in the artifacts directory. Make sure the overall status is OK before moving to the next steps.

Run 2 (~50 min)

To receive performance results with an app installed:

  1. Install the app you want to test.
  2. Run the toolkit with docker:

     cd dc-app-performance-toolkit
     docker run --shm-size=4g  -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt confluence.yml

Review the results_summary.log file in the artifacts directory. Make sure the overall status is OK before moving to the next steps.

Generating a performance regression report

To generate a performance regression report:

  1. Use SSH to connect to the execution environment.
  2. Install the virtualenv as described in dc-app-performance-toolkit/README.md.
  3. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
  4. Edit the performance_profile.yml file:
    • Under runName: "without app", in the fullPath key, insert the full path to results directory of Run 1.
    • Under runName: "with app", in the fullPath key, insert the full path to results directory of Run 2.
  5. Run the following command:

    python csv_chart_generator.py performance_profile.yml
  6. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file, and the performance scenario summary report.

Analyzing report

Once completed, you will be able to review the action timings with and without your app to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta.
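
For example, if view_page takes 1000 ms without your app and 1300 ms with it installed, that 30% delta exceeds the threshold and warrants investigation.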

Scenario 2: Scalability testing

The purpose of scalability testing is to reflect the impact on the customer experience when operating across multiple nodes. For this, you have to run scale testing on your app.

For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in Confluence DC deployment. To demonstrate performance impacts of operating your app at scale, we recommend testing your Confluence DC app in a cluster.

Run 3 (~50 min)

To receive scalability benchmark results for a one-node Confluence DC with app-specific actions:

  1. Apply app-specific code changes to a new branch of the forked repo.
  2. Use SSH to connect to the execution environment.
  3. Pull the branch with app-specific actions on the cloned fork repo.
  4. Run the toolkit with docker:

     cd dc-app-performance-toolkit
     docker run --shm-size=4g  -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt confluence.yml

Review the results_summary.log file in the artifacts directory. Make sure the overall status is OK before moving to the next steps.

Run 4 (~50 min)

To receive scalability benchmark results for two-node Confluence DC with app-specific actions:

  1. In the AWS console, go to CloudFormation > Stack details > Select your stack.
  2. On the Update tab, select Use current template, and then click Next.
  3. Enter 2 in the Maximum number of cluster nodes and the Minimum number of cluster nodes fields.
  4. Click Next > Next > Update stack and wait until the stack is updated.
  5. Make sure that the Confluence index was successfully synchronized to the second node. To do that, use SSH to connect to the second node via the Bastion instance (where NODE_IP is the IP of the second node):

    ssh-add path_to_your_private_key_pem
    export BASTION_IP=bastion_instance_public_ip
    export NODE_IP=node_private_ip
    export SSH_OPTS='-o ServerAliveInterval=60 -o ServerAliveCountMax=30'
    ssh ${SSH_OPTS} -o "proxycommand ssh -W %h:%p ${SSH_OPTS} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}
  6. Once you're in the second node, download the index-sync.sh file. Then, make it executable and run it:

    wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/confluence/index-sync.sh && chmod +x index-sync.sh
    ./index-sync.sh | tee -a index-sync.log

    Index synchronization takes about 10-30 minutes. When it completes successfully, the following lines will be displayed in the console output:

    Log file: /var/atlassian/application-data/confluence/logs/atlassian-confluence.log
    Index recovery is required for main index, starting now
    main index recovered from shared home directory
  7. Run the toolkit with docker:

     cd dc-app-performance-toolkit
     docker run --shm-size=4g  -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt confluence.yml

Review the results_summary.log file in the artifacts directory. Make sure the overall status is OK before moving to the next steps.

Run 5 (~50 min)

To receive scalability benchmark results for four-node Confluence DC with app-specific actions:

  1. Scale your Confluence Data Center deployment to 3 nodes as described in Run 4.
  2. Check that the index is synchronized to the new node in the same way as in Run 4.
  3. Scale your Confluence Data Center deployment to 4 nodes as described in Run 4.
  4. Check that the index is synchronized to the new node in the same way as in Run 4.
  5. Run the toolkit with docker:

     cd dc-app-performance-toolkit
     docker run --shm-size=4g  -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt confluence.yml

Review the results_summary.log file in the artifacts directory. Make sure the overall status is OK before moving to the next steps.

Generating a report for scalability scenario

To generate a scalability report:

  1. Use SSH to connect to the execution environment.
  2. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
  3. Edit the scale_profile.yml file:
    • For runName: "Node 1", in the fullPath key, insert the full path to results directory of Run 3.
    • For runName: "Node 2", in the fullPath key, insert the full path to results directory of Run 4.
    • For runName: "Node 4", in the fullPath key, insert the full path to results directory of Run 5.
  4. Run the following command from the virtualenv:

    python csv_chart_generator.py scale_profile.yml
  5. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file, and the summary report.

Analyzing report

Once completed, you will be able to review action timings on Confluence Data Center with different numbers of nodes. If you see a significant variation in any action timings between configurations, we recommend looking into the app implementation to understand the root cause of this delta.

After completing all your tests, delete your Confluence Data Center stacks.

Attaching testing results to DCHELP ticket

  1. Use the scp command to copy the dc-app-performance-toolkit/app/results folder to your local machine (see the example below).
  2. Make sure you have five run results folders and two reports (remove all unsuccessful attempts).
  3. Zip the dc-app-performance-toolkit/app/results folder and attach the archive to your DCHELP ticket.
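
For step 1, a command along these lines should work, substituting the key path and the public IP of your execution environment: scp -r -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP:dc-app-performance-toolkit/app/results ./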

Support

If you have technical questions, issues, or problems with the DC Apps Performance Toolkit, contact us for support in the community Slack channel #data-center-app-performance-toolkit.
