This document walks you through the process of testing your app on Bamboo using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit on an enterprise-scale environment.
Enterprise-scale environment: a Bamboo Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the recommended parameters below.
The installation of Bamboo requires 16 CPU cores. Make sure that the current EC2 CPU limit is set to a higher number of CPU cores. The AWS Service Quotas service shows the limit for All Standard Spot Instance Requests; the Applied quota value is the current CPU limit in the specific region.
The limit can be increased by creating an AWS Support ticket. To request the limit increase, fill in the Amazon EC2 limit increase request form:
Parameter | Value |
---|---|
Limit type | EC2 Instances |
Severity | Urgent business impacting question |
Region | US East (Ohio) or your specific region the product is going to be deployed in |
Primary Instance Type | All Standard (A, C, D, H, I, M, R, T, Z) instances |
Limit | Instance Limit |
New limit value | The needed limit of CPU Cores |
Case description | Give a small description of your case |
Select the contact option and click the Submit button.
The process below describes how to install Bamboo DC with an enterprise-scale dataset included. This configuration was created specifically for performance testing during the DC app review process.
Create access keys for the IAM user. Do not use root user credentials for cluster creation. Instead, create an admin user.
Navigate to the dc-app-performance-toolkit/app/util/k8s folder.
Set the AWS access keys created in step 1 in the aws_envs file:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
Set the required variables in the dcapt.tfvars file:
- environment_name - any name for your environment, e.g. dcapt-bamboo
- products - bamboo
- bamboo_license - one-liner of a valid Bamboo license without spaces and new-line symbols
- region - do not change the default region (us-east-2). If a specific region is required, contact support.
A new trial license can be generated on my.atlassian.com. Use the BX02-9YO1-IN86-LO5G Server ID for generation.
From a local terminal (Git Bash terminal for Windows), start the installation (~40 min):
docker run --pull=always --env-file aws_envs \
-v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
-v "$PWD/.terraform:/data-center-terraform/.terraform" \
-v "$PWD/logs:/data-center-terraform/logs" \
-it atlassianlabs/terraform ./install.sh -c config.tfvars
Copy the product URL from the console output. The product URL should look like http://a1234-54321.us-east-2.elb.amazonaws.com/bamboo.
Wait for all remote agents to be started and connected. It can take up to 10 minutes. Agents can be checked in Settings > Agents.
All the datasets use the standard admin/admin credentials.
Data dimensions and values for the default enterprise-scale dataset are listed and described in the following table.
Data dimensions | Value for an enterprise-scale dataset |
---|---|
Users | 2000 |
Projects | 100 |
Plans | 2000 |
Remote agents | 50 |
Follow Terminate development environment instructions.
You are responsible for the cost of the AWS services running during the reference deployment. For more information, go to aws.amazon.com/pricing.
To reduce costs, we recommend keeping your deployment up and running only during the performance runs.
Data Center App Performance Toolkit has its own set of default test actions:
App-specific action - an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application and not cover all possible functionality of your app. For example, the application setup screen or other one-time use cases are out of scope of performance testing.
If your app introduces new functionality for Bamboo entities, for example a new task, it is important to extend the base dataset with your app-specific functionality.
Follow the installation instructions described in the bamboo dataset generator README.md.
Open app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/Main.java and set:
- BAMBOO_SERVER_URL: URL of the Bamboo stack
- ADMIN_USER_NAME: username of the admin user (default is admin)
Log in as ADMIN_USER_NAME, go to Profile > Personal access tokens and create a new token with the same permissions as the admin user.
Run the following command:
export BAMBOO_TOKEN=newly_generated_token # for macOS and Linux
or
set BAMBOO_TOKEN=newly_generated_token # for Windows
Open the app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/service/generator/plan/PlanGenerator.java file and modify the plan template according to your app, e.g. add a new task.
Navigate to app/util/bamboo/bamboo_dataset_generator and start generation:
./run.sh # for macOS and Linux
or
run # for Windows
Log in to the Bamboo UI and make sure that the plan configurations were updated.
The default duration of the plan is 60 seconds. Measure the plan duration with the new app-specific functionality and modify the default_dataset_plan_duration value accordingly in the bamboo.yml file.
For example, if the plan duration with the app-specific task became 70 seconds, then default_dataset_plan_duration should be set to 70 seconds in the bamboo.yml file, as in the excerpt below.
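A minimal excerpt (the key name matches the bamboo.yml configuration quoted later in this document; 70 is the example duration from above):

```yaml
# bamboo.yml excerpt: plan with the app-specific task takes ~70 seconds
default_dataset_plan_duration: 70   # expected plan execution duration, seconds
```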
For example, you develop an app that adds some additional UI elements to the view plan summary page. In this case, you should develop a Selenium app-specific action:
Extend the example of an app-specific action in dc-app-performance-toolkit/app/extension/bamboo/extension_ui.py (see the code example in that file).
So, our test has to open the plan summary page and measure the time to load this new app-specific element on the page; a hedged sketch follows below.
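Below is a minimal sketch of such an action, assuming the helper imports used by the bundled example in extension_ui.py (print_timing, BasePage, BAMBOO_SETTINGS). The plan key PROJ-PLAN and the element ID my-app-summary-panel are hypothetical placeholders for a plan from your dataset and the element your app adds.

```python
from selenium.webdriver.common.by import By

from selenium_ui.base_page import BasePage
from selenium_ui.conftest import print_timing
from util.conf import BAMBOO_SETTINGS


def app_specific_action(webdriver, datasets):
    page = BasePage(webdriver)

    @print_timing("selenium_app_custom_action")
    def measure():
        # Open a plan summary page (hypothetical plan key) and wait for the
        # UI element your app adds to it (hypothetical element ID).
        page.go_to_url(f"{BAMBOO_SETTINGS.server_url}/browse/PROJ-PLAN")
        page.wait_until_visible((By.ID, "my-app-summary-panel"))

    measure()
```

The timing recorded under selenium_app_custom_action is what appears in the results; replace the placeholders with real values before running.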
If you need to run app_specific_action as a specific user, uncomment the app_specific_user_login function in the code example. Note that in this case test_1_selenium_custom_action should follow just before the test_2_selenium_z_log_out action.
In dc-app-performance-toolkit/app/selenium_ui/bamboo_ui.py, review and uncomment the following block of code so that the newly created app-specific action is executed:
# def test_1_selenium_custom_action(webdriver, datasets, screen_shots):
#     app_specific_action(webdriver, datasets)
Run the toolkit with the bzt bamboo.yml command to ensure that all Selenium actions, including app_specific_action, are successful.
Check that the bamboo.yml file has correct settings of application_hostname, application_protocol, application_port, application_postfix, etc.
Set the desired execution percentage for standalone_extension. The default value is 0, which means that the standalone_extension action will not be executed. For example, for app-specific action development you could set the percentage of standalone_extension to 100 and of all other actions to 0 - this way only the login_and_view_all_builds and standalone_extension actions would be executed (see the excerpt below).
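For instance, a hypothetical bamboo.yml excerpt for app-specific action development (only the standalone_extension key is quoted in this guide; keep the other per-action percentage keys in the same section at 0):

```yaml
# bamboo.yml excerpt: run only the app-specific JMeter action during development
standalone_extension: 100   # execution percentage of the app-specific action
```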
Navigate to the dc-app-performance-toolkit/app folder and run from virtualenv (as described in dc-app-performance-toolkit/README.md):
python util/jmeter/start_jmeter_ui.py --app bamboo
Open the Bamboo thread group > actions per login and navigate to standalone_extension.
Review the existing stubs of jmeter_app_specific_action: app_id and app_token.
Modify the examples or add new controllers according to your app's main use case.
Right-click on View Results Tree and enable this controller.
Click the Start button and make sure that login_and_view_dashboard and standalone_extension are executed.
Right-click on View Results Tree and disable this controller. It is important to disable the View Results Tree controller before full-scale results generation.
Click the Save button.
To make standalone_extension executable during the toolkit run, edit dc-app-performance-toolkit/app/bamboo.yml and set the execution percentage of standalone_extension according to your use case frequency.
App-specific tests can be run (if needed) as a specific user. In standalone_extension, uncomment the login_as_specific_user controller. Navigate to the username:password config element and update the values of the app_specific_username and app_specific_password names with your specific user credentials. Also make sure that you place your app-specific tests between the login_as_specific_user and login_as_default_user_if_specific_user_was_loggedin controllers.
Run the toolkit to ensure that all JMeter actions, including standalone_extension, are successful.
Extend the example of an app-specific action in dc-app-performance-toolkit/app/extension/bamboo/extension_locust.py, so that the test calls an endpoint with a GET request, parses the response, uses these data to call another endpoint with a POST request, and measures the response time (see the sketch below).
In dc-app-performance-toolkit/app/bamboo.yml, uncomment scenario: locust_app_specific in the execution section to enable Locust app-specific test execution.
In dc-app-performance-toolkit/app/bamboo.yml, set standalone_extension_locust to 1 - the app-specific action will be executed by every virtual user of the locust_app_specific scenario. The default value is 0, which means that the standalone_extension_locust action will not be executed.
App-specific tests can be run (if needed) as a specific user. Use the @run_as_specific_user(username='specific_user_username', password='specific_user_password') decorator for that.
Run the toolkit with the bzt bamboo.yml command to ensure that all Locust actions, including locust_app_specific_action, are successful.
Note that locust_app_specific_action execution will start some time after the full ramp-up period is finished (in 5-6 min).
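Below is a minimal sketch of such a Locust action. The decorator and client wrapper names are assumed to follow the bundled example in extension_locust.py; the REST endpoints and JSON field names are hypothetical placeholders for your app's API.

```python
import json

# Names and call signatures below are assumed to match the bundled extension_locust.py example.
from locustio.common_utils import init_logger, bamboo_measure

logger = init_logger(app_type='bamboo')


@bamboo_measure('locust_app_specific_action')
def app_specific_action(locust):
    # GET a hypothetical endpoint exposed by your app and parse the response.
    r = locust.get('/rest/my-app/1.0/items', catch_response=True)
    item_id = json.loads(r.content)['items'][0]['id']  # hypothetical response shape

    # Use the parsed data to call another hypothetical endpoint with a POST request;
    # the measure decorator records the response time of the whole action.
    body = {'itemId': item_id}
    headers = {'content-type': 'application/json'}
    locust.post('/rest/my-app/1.0/process', body, headers, catch_response=True)
```

If the action must run as a specific user, wrap it with the @run_as_specific_user(...) decorator mentioned above.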
For generating performance results suitable for the Marketplace approval process, use a dedicated execution environment. This is a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance rather than from a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.
In your fork of the dc-app-performance-toolkit repository, edit the bamboo.yml configuration file and set enterprise-scale Bamboo Data Center parameters:
For security reasons, do not push real application_hostname, admin_login and admin_password values to the fork. Instead, set those values directly in the .yml file on the execution environment instance.
application_hostname: bamboo_host_name or public_ip   # Bamboo DC hostname without protocol and port e.g. test-bamboo.atlassian.com or localhost
application_protocol: http      # http or https
application_port: 80            # 80, 443, 8080, 8085, etc
secure: True                    # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
application_postfix: /bamboo    # e.g. /bamboo in case of url like http://localhost:8085/bamboo
admin_login: admin
admin_password: admin
load_executor: jmeter
concurrency: 200                # number of concurrent threads to authenticate random users
test_duration: 45m
ramp-up: 3m
total_actions_per_hour: 2000    # number of total JMeter actions per hour
number_of_agents: 50            # number of available remote agents
parallel_plans_count: 40        # number of parallel plans execution
start_plan_timeout: 60          # maximum timeout of plan to start
default_dataset_plan_duration: 60   # expected plan execution duration
Push your changes to the forked repository.
Create an execution environment AWS EC2 instance with the following parameters:
- OS: Ubuntu Server 22.04 LTS
- Instance type: c5.2xlarge
- Storage: 30 GiB
Connect to the instance using SSH or the AWS Systems Manager Session Manager.
ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
Install Docker. Set up Docker to be managed as a non-root user.
Clone the forked repository.
You'll need to run the toolkit for each test scenario in the next section.
This scenario helps to identify basic performance issues.
To receive performance baseline results without an app installed and without app-specific actions (use code from the master branch):
Use SSH to connect to the execution environment.
Run the toolkit with docker from the execution environment instance:
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
View the following main results of the run in the dc-app-performance-toolkit/app/results/bamboo/YY-MM-DD-hh-mm-ss folder:
- results_summary.log: detailed run summary
- results.csv: aggregated .csv file with all actions and timings
- bzt.log: logs of the Taurus tool execution
- jmeter.*: logs of the JMeter tool execution
- locust.*: logs of the Locust tool execution
Review results_summary.log file under artifacts dir location. Make sure that overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
Performance results generation with the app installed (still use master branch):
Run the toolkit with docker from the execution environment instance:
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
Review results_summary.log file under artifacts dir location. Make sure that overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To receive results for Bamboo DC with the app and with app-specific actions:
Apply app-specific code changes to a new branch of the forked repo.
Use SSH to connect to the execution environment.
Pull the cloned fork repo branch with app-specific actions.
Run the toolkit with docker from the execution environment instance:
cd dc-app-performance-toolkit
docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
Review results_summary.log file under artifacts dir location. Make sure that overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To generate a performance regression report:
Activate the virtualenv as described in dc-app-performance-toolkit/README.md.
Allow the current user (for the execution environment the default user is ubuntu) to access Docker generated reports:
sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
Edit the bamboo_profile.yml file:
- Under runName: "without app", in the fullPath key, insert the full path to the results directory of Run 1.
- Under runName: "with app", in the fullPath key, insert the full path to the results directory of Run 2.
- Under runName: "with app and app-specific actions", in the fullPath key, insert the full path to the results directory of Run 3.
Run the following command:
python csv_chart_generator.py bamboo_profile.yml
In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file and the bamboo performance scenario summary report.
Use the scp command to copy report artifacts from the execution environment to a local drive:
From the local machine terminal (Git Bash terminal for Windows), run the command:
export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
Once completed, in the ./reports folder you will be able to review the action timings with and without your app to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend taking a look into the app implementation to understand the root cause of this delta.
It is recommended to terminate an enterprise-scale environment after completing all tests. Follow Terminate development environment instructions.
Do not forget to attach performance testing results to your ECOHELP ticket: profile.csv, profile.png, profile_summary.log and the profile run result archives. Archives should contain all raw data created during the run: bzt.log, selenium/jmeter/locust logs, .csv and .yml files, etc.
For Terraform deploy related questions, see the Troubleshooting tips page.
If the installation script fails on installing the Helm release or for any other reason, collect the logs, zip them, and share them in the community Slack #data-center-app-performance-toolkit channel. For instructions on how to collect detailed logs, see Collect detailed k8s logs.
In case of the above problem or any other technical questions or issues with the DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.