This document walks you through the process of testing your app on Bitbucket using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit on two types of environments:
Development environment: Bitbucket Data Center environment for a test run of Data Center App Performance Toolkit and development of app-specific actions.
Enterprise-scale environment: Bitbucket Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process.
Running the tests in a development environment helps familiarize you with the toolkit. It'll also provide you with a lightweight and less expensive environment for developing app-specific actions. Once you're ready to generate test results for the Marketplace Data Center Apps Approval process, run the toolkit in an enterprise-scale environment.
If you are in the middle of Bitbucket DC app performance testing with the CloudFormation deployment option, the process can be continued after switching to the 7.1.0 DCAPT version. Check out release 7.1.0 of the dc-app-performance-toolkit repository:
```
git checkout release-7.1.0
```
Use the docker container with the 7.1.0 release tag to run performance tests from docker:
```
cd dc-app-performance-toolkit
docker pull atlassian/dcapt:7.1.0
docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt:7.1.0 bitbucket.yml
```
The corresponding version of the user guide can be found in the dc-app-performance-toolkit/docs folder or by this link.
If a specific version of Bitbucket DC is required, please contact support in the community Slack.
You are responsible for the cost of AWS services used while running this Terraform deployment. See Amazon EC2 pricing for more detail.
To reduce costs, we recommend keeping your deployment up and running only during performance runs. AWS Bitbucket Data Center development environment infrastructure costs about $20-40 per working week, depending on factors such as region, instance type, deployment type of DB, and others.
The Bitbucket Data Center development environment is good for app-specific actions development, but it is not powerful enough for performance testing at scale. See Set up an enterprise-scale environment Bitbucket Data Center on AWS for more details.
The following process describes how to install a low-tier Bitbucket DC with the "small" dataset included:
1. Create access keys for IAM user. Do not use root user credentials for cluster creation. Instead, create an admin user.
2. Navigate to the dc-app-performance-toolkit/app/util/k8s folder.
3. Set the AWS access keys created in step 1 in the aws_envs file:
   - AWS_ACCESS_KEY_ID
   - AWS_SECRET_ACCESS_KEY
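For reference, aws_envs is a plain environment file of KEY=value lines, as consumed by docker's --env-file option; a minimal sketch with placeholder values:

```
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=your-secret-access-key-here
```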
4. Set the required variables in the dcapt-small.tfvars file:
   - environment_name - any name for your environment, e.g. dcapt-bitbucket-small
   - products - bitbucket
   - bitbucket_license - one-liner of a valid bitbucket license without spaces and new-line symbols
   - region - AWS region for deployment. Do not change the default region (us-east-2). If a specific region is required, contact support.
   - instance_types - ["t3.2xlarge"]
A new trial license can be generated on my.atlassian.com. Use the BX02-9YO1-IN86-LO5G Server ID for generation.
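Putting the required values together, a hypothetical dcapt-small.tfvars could look like the following (the license value is a placeholder; the exact set of keys may differ between toolkit versions, so check the comments inside the file):

```
environment_name  = "dcapt-bitbucket-small"
products          = ["bitbucket"]
bitbucket_license = "AAABxxx...one-liner-license...xxx"
region            = "us-east-2"
instance_types    = ["t3.2xlarge"]
```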
Optional variables to override:
   - bitbucket_version_tag - Bitbucket version to deploy. See supported versions in README.md.

5. From a local terminal (Git Bash terminal for Windows) start the installation (~20 min):
```
docker run --env-file aws_envs \
-v "$PWD/dcapt-small.tfvars:/data-center-terraform/config.tfvars" \
-v "$PWD/.terraform:/data-center-terraform/.terraform" \
-v "$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform ./install.sh -c config.tfvars
```
6. Copy the product URL from the console output. The product URL should look like http://a1234-54321.us-east-2.elb.amazonaws.com/bitbucket.
All the datasets use the standard admin/admin credentials.
Make sure English is selected as the default language on the > General configuration > Languages page. Other languages are not supported by the toolkit.
Clone the Data Center App Performance Toolkit repository locally and follow the README.md instructions to set up the toolkit.
Navigate to the dc-app-performance-toolkit/app folder.
Open the bitbucket.yml file and fill in the following variables:
- application_hostname: your_dc_bitbucket_instance_hostname without protocol.
- application_protocol: http or https.
- application_port: for HTTP - 80; for HTTPS - 443, 8080, 7990 or your instance-specific port.
- secure: True or False. Default value is True. Set False to allow insecure connections, e.g. when using a self-signed SSL certificate.
- application_postfix: /bitbucket - default postfix value for a Terraform deployment URL like http://a1234-54321.us-east-2.elb.amazonaws.com/bitbucket
- admin_login: admin user username.
- admin_password: admin user password.
- load_executor: executor for load tests - jmeter
- concurrency: 1 - number of concurrent JMeter users.
- test_duration: 5m - duration of the performance run.
- ramp-up: 1s - amount of time it will take JMeter to add all test users to test execution.
- total_actions_per_hour: 3270 - number of total JMeter actions per hour.
- WEBDRIVER_VISIBLE: visibility of the Chrome browser during selenium execution (False by default).
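Assembled from the values above, the relevant section of bitbucket.yml for a development-environment run might look like this (the hostname is a placeholder for your Terraform deployment URL):

```
application_hostname: a1234-54321.us-east-2.elb.amazonaws.com
application_protocol: http
application_port: 80
secure: True
application_postfix: /bitbucket
admin_login: admin
admin_password: admin
load_executor: jmeter
concurrency: 1
test_duration: 5m
ramp-up: 1s
total_actions_per_hour: 3270
```

Then run bzt: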
```
bzt bitbucket.yml
```
Review the resulting table in the console log. All JMeter and Selenium actions should have a 95+% success rate.
If some actions do not have a 95+% success rate, refer to the following logs in the dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss folder:
- results_summary.log: detailed run summary
- results.csv: aggregated .csv file with all actions and timings
- bzt.log: logs of the Taurus tool execution
- jmeter.*: logs of the JMeter tool execution
- pytest.*: logs of Pytest-Selenium execution

Do not proceed with the next step until all actions have a 95+% success rate. Ask support if the above logs analysis did not help.
The Data Center App Performance Toolkit has its own set of default test actions for Bitbucket Data Center: JMeter and Selenium for load and UI tests respectively.
App-specific action - an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on common usage of your application, not cover all possible functionality of your app. For example, an application setup screen or other one-time use cases are out of scope of performance testing.
We strongly recommend developing your app-specific actions in the development environment to reduce AWS infrastructure costs.
Suppose you develop an app that adds some additional fields to specific types of Bitbucket entities. In this case, you should develop a Selenium app-specific action:

1. Implement your app_specific_action in dc-app-performance-toolkit/app/extension/bitbucket/extension_ui.py. To run app_specific_action as a specific user, uncomment the app_specific_user_login function in the code example. Note that in this case test_1_selenium_custom_action should follow just before the test_2_selenium_z_log_out action.
2. In dc-app-performance-toolkit/app/selenium_ui/bitbucket_ui.py, review and uncomment the following block of code to make the newly created app-specific actions executed:

```
# def test_1_selenium_custom_action(webdriver, datasets, screen_shots):
#     app_specific_action(webdriver, datasets)
```

3. Run the bzt bitbucket.yml command to ensure that all Selenium actions, including app_specific_action, are successful. A minimal sketch of such an action is shown below.
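For illustration only, a minimal app_specific_action in extension_ui.py could follow the pattern of the toolkit's bundled example; the servlet URL and element ID below are hypothetical placeholders for your app's own page and locator:

```
from selenium.webdriver.common.by import By

from selenium_ui.base_page import BasePage
from selenium_ui.conftest import print_timing
from util.conf import BITBUCKET_SETTINGS


def app_specific_action(webdriver, datasets):
    page = BasePage(webdriver)

    @print_timing("selenium_app_custom_action")
    def measure():
        # Open a page your app serves and wait for an app element to render.
        page.go_to_url(f"{BITBUCKET_SETTINGS.server_url}/plugins/servlet/some-app/reporter")
        page.wait_until_visible((By.ID, "report_app_element_id"))
    measure()
```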
After adding your custom app-specific actions, you should now be ready to run the required tests for the Marketplace Data Center Apps Approval process. To do this, you'll need an enterprise-scale environment.
It is recommended to terminate a development environment before creating an enterprise-scale environment. Follow Terminate development environment instructions.
The installation of a 4-node Bitbucket DC requires 48 CPU cores. Make sure that the current EC2 CPU limit is set to a higher number of CPU cores. The AWS Service Quotas service shows the limit for All Standard Spot Instance Requests. The applied quota value is the current CPU limit in the specific region.

The limit can be increased by creating an AWS Support ticket. To request the limit increase, fill in the Amazon EC2 Limit increase request form:
Parameter | Value |
---|---|
Limit type | EC2 Instances |
Severity | Urgent business impacting question |
Region | US East (Ohio) or your specific region the product is going to be deployed in |
Primary Instance Type | All Standard (A, C, D, H, I, M, R, T, Z) instances |
Limit | Instance Limit |
New limit value | The needed limit of CPU Cores |
Case description | Give a small description of your case |

Select the Contact Option and click the Submit button.
AWS Pricing Calculator provides an estimate of usage charges for AWS services based on certain information you provide. Monthly charges will be based on your actual usage of AWS services, and may vary from the estimates the Calculator has provided.
*The prices below are approximate and may vary depending on factors such as region, instance type, deployment type of DB, etc.
Stack | Estimated hourly cost ($) |
---|---|
One Node Bitbucket DC | 1.4 - 2.0 |
Two Nodes Bitbucket DC | 1.7 - 2.5 |
Four Nodes Bitbucket DC | 2.4 - 3.6 |
Data dimensions and values for an enterprise-scale dataset are listed and described in the following table.
Data dimensions | Value for an enterprise-scale dataset |
---|---|
Projects | ~25 000 |
Repositories | ~52 000 |
Users | ~25 000 |
Pull Requests | ~1 000 000 |
Total files number | ~750 000 |
The following process describes how to install an enterprise-scale Bitbucket DC with the "large" dataset included:
1. Create access keys for IAM user. Do not use root user credentials for cluster creation. Instead, create an admin user.
2. Navigate to the dc-app-performance-toolkit/app/util/k8s folder.
3. Set the AWS access keys created in step 1 in the aws_envs file:
   - AWS_ACCESS_KEY_ID
   - AWS_SECRET_ACCESS_KEY
4. Set the required variables in the dcapt.tfvars file:
   - environment_name - any name for your environment, e.g. dcapt-bitbucket-large
   - products - bitbucket
   - bitbucket_license - one-liner of a valid bitbucket license without spaces and new-line symbols
   - region - AWS region for deployment. Do not change the default region (us-east-2). If a specific region is required, contact support.
   - instance_types - ["m5.4xlarge"]
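By analogy with the development environment, a hypothetical dcapt.tfvars could contain (the license value is a placeholder; check the comments inside the file for the exact keys of your toolkit version):

```
environment_name  = "dcapt-bitbucket-large"
products          = ["bitbucket"]
bitbucket_license = "AAABxxx...one-liner-license...xxx"
region            = "us-east-2"
instance_types    = ["m5.4xlarge"]
```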
Optional variables to override:
   - bitbucket_version_tag - Bitbucket version to deploy. See supported versions in README.md.

5. From a local terminal (Git Bash terminal for Windows) start the installation (~40 min):
```
docker run --env-file aws_envs \
-v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
-v "$PWD/.terraform:/data-center-terraform/.terraform" \
-v "$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform ./install.sh -c config.tfvars
```
6. Copy the product URL from the console output. The product URL should look like http://a1234-54321.us-east-2.elb.amazonaws.com/bitbucket.
A new trial license can be generated on my.atlassian.com. Use the BX02-9YO1-IN86-LO5G Server ID for generation.
All the datasets use the standard admin/admin credentials.
It's recommended to change the default password from the UI account page for security reasons.
Terminate the cluster when it is not being used for performance results generation.
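Termination follows the same docker pattern as the installation; a sketch by analogy with the install command above (verify the exact invocation against the toolkit's terminate instructions):

```
docker run --env-file aws_envs \
-v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
-v "$PWD/.terraform:/data-center-terraform/.terraform" \
-v "$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform ./uninstall.sh -c config.tfvars
```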
For generating performance results suitable for the Marketplace approval process, use a dedicated execution environment. This is a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance rather than a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.
Clone the forked repository locally and edit the bitbucket.yml configuration file. Set the enterprise-scale Bitbucket Data Center parameters below. For security reasons, do not push the real application_hostname, admin_login and admin_password values to the fork; instead, set those values directly in the .yml file on the execution environment instance.
```
application_hostname: test_bitbucket_instance.atlassian.com   # Bitbucket DC hostname without protocol and port e.g. test-bitbucket.atlassian.com or localhost
application_protocol: http      # http or https
application_port: 80            # 80, 443, 8080, 7990 etc
secure: True                    # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
application_postfix: /bitbucket # e.g. /bitbucket for Terraform deployment url like `http://a1234-54321.us-east-2.elb.amazonaws.com/bitbucket`. Leave this value blank for url without postfix.
admin_login: admin
admin_password: admin
load_executor: jmeter           # only jmeter executor is supported
concurrency: 20                 # number of concurrent virtual users for jmeter scenario
test_duration: 50m
ramp-up: 10m                    # time to spin all concurrent users
total_actions_per_hour: 32700   # number of total JMeter actions per hour
```
Push your changes to the forked repository.
Launch an AWS EC2 instance for the execution environment with the following parameters:
- OS: Ubuntu Server 22.04 LTS
- Instance type: c5.2xlarge
- Storage: 30 GiB

Connect to the instance using SSH or the AWS Systems Manager Session Manager.
```
ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
```
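If you prefer the Session Manager over SSH, a session can also be started from the AWS CLI (the instance ID is a placeholder, and the Session Manager plugin must be installed locally):

```
aws ssm start-session --target i-0123456789abcdef0
```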
Install Docker and set up Docker management as a non-root user, as sketched below.
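One common way to do this on Ubuntu, sketched here with Docker's convenience script (see the official Docker documentation for the currently recommended steps):

```
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# allow the default ubuntu user to run docker without sudo
sudo usermod -aG docker ubuntu
```

Log out and back in for the group change to take effect.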
Clone the forked repository. At this stage app-specific actions are not needed yet. Use code from the master branch with your bitbucket.yml changes.
You'll need to run the toolkit for each test scenario in the next section.
Using the Data Center App Performance Toolkit for performance and scale testing of your Data Center app involves two test scenarios: performance regression and scalability testing. Each scenario will involve multiple test runs. The following subsections explain both in greater detail.
This scenario helps to identify basic performance issues without the need to spin up a multi-node Bitbucket DC. Make sure the app does not have any performance impact when it is not exercised.
To receive performance baseline results without an app installed:
Use SSH to connect to the execution environment.
Run the toolkit with docker from the execution environment instance:
```
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
```
View the following main results of the run in the dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss folder:
- results_summary.log: detailed run summary
- results.csv: aggregated .csv file with all actions and timings
- bzt.log: logs of the Taurus tool execution
- jmeter.*: logs of the JMeter tool execution
- pytest.*: logs of Pytest-Selenium execution

Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To receive performance results with an app installed:
Install the app you want to test.
Set up the app license.
Run the toolkit with docker from the execution environment instance:
```
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To generate a performance regression report:
1. Install and activate the virtualenv as described in dc-app-performance-toolkit/README.md.
2. Allow the current user (for the execution environment the default user is ubuntu) to access Docker generated reports:
```
sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
```
3. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
4. Fill in the performance_profile.yml file:
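A hypothetical performance_profile.yml, assuming it follows the runName/fullPath layout used by the toolkit's report generation (both paths are placeholders for your actual result folders):

```
# Pairs a baseline run (without app) with a run with the app installed
runs:
  - runName: "without app"
    fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss"
  - runName: "with app"
    fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss"
```

Then generate the report: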
```
python csv_chart_generator.py performance_profile.yml
```
5. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file and the performance scenario summary report.
6. Use the scp command to copy report artifacts from the execution environment to a local drive:
```
export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
```
In the ./reports folder you will be able to review the action timings with and without your app to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend taking a look into the app implementation to understand the root cause of this delta.

The purpose of scalability testing is to reflect the impact on the customer experience when operating across multiple nodes. For this, you have to run scale testing on your app.
For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in a Bitbucket DC deployment. To demonstrate the performance impact of operating your app at scale, we recommend testing your Bitbucket DC app in a cluster.
To receive scalability benchmark results for one-node Bitbucket DC with app-specific actions:
Apply app-specific code changes to a new branch of the forked repo.
Use SSH to connect to the execution environment.
Pull the cloned fork repo branch with app-specific actions.
Run the toolkit with docker from the execution environment instance:
```
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
Before scaling your DC, make sure that the AWS vCPU limit is not lower than the needed number. Use the AWS Service Quotas service to see the current limit. The EC2 CPU Limit section has instructions on how to increase the limit if needed.
To receive scalability benchmark results for two-node Bitbucket DC with app-specific actions:
1. Navigate to the dc-app-performance-toolkit/app/util/k8s folder.
2. Open the dcapt.tfvars file and set the bitbucket_replica_count value to 2.
3. From a local terminal (Git Bash terminal for Windows) rerun the installation to scale the cluster:

```
docker run --pull=always --env-file aws_envs \
-v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
-v "$PWD/.terraform:/data-center-terraform/.terraform" \
-v "$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform ./install.sh -c config.tfvars
```

4. Run the toolkit with docker from the execution environment instance:
```
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
Before scaling your DC, make sure that the AWS vCPU limit is not lower than the needed number. Use the AWS Service Quotas service to see the current limit. The EC2 CPU Limit section has instructions on how to increase the limit if needed.
To receive scalability benchmark results for four-node Bitbucket DC with app-specific actions:
Scale your Bitbucket Data Center deployment to 4 nodes as described in Run 4.
Run the toolkit with docker from the execution environment instance:
```
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To generate a scalability report:
1. Allow the current user (for the execution environment the default user is ubuntu) to access Docker generated reports:
```
sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
```
2. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
3. Fill in the scale_profile.yml file:
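A hypothetical scale_profile.yml, again assuming the runName/fullPath layout (paths are placeholders for the one-, two- and four-node run results):

```
runs:
  - runName: "1 Node"
    fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss"
  - runName: "2 Nodes"
    fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss"
  - runName: "4 Nodes"
    fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss"
```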
4. Run the following command from the virtualenv (as described in dc-app-performance-toolkit/README.md):
```
python csv_chart_generator.py scale_profile.yml
```
5. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file and the summary report.
6. Use the scp command to copy report artifacts from the execution environment to a local drive:
```
export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
```
In the ./reports folder, you will be able to review action timings on Bitbucket Data Center with different numbers of nodes. If you see a significant variation in any action timings between configurations, we recommend taking a look into the app implementation to understand the root cause of this delta.

It is recommended to terminate an enterprise-scale environment after completing all tests. Follow the Terminate development environment instructions.
Do not forget to attach performance testing results to your ECOHELP ticket: profile.csv, profile.png, profile_summary.log and profile run result archives. Archives should contain all raw data created during the run: bzt.log, selenium/jmeter/locust logs, .csv and .yml files, etc.

For Terraform deploy related questions, see the Troubleshooting tips page.
If the installation script fails on installing the Helm release or for any other reason, collect the logs, zip them, and share them in the community Slack #data-center-app-performance-toolkit channel. For instructions on how to collect detailed logs, see Collect detailed k8s logs.
In case of the above problem, or any other technical questions or issues with the DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.