This document walks you through the process of testing your app on Crowd using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit in an enterprise-scale environment.
Enterprise-scale environment: Crowd Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the parameters prescribed below.
The installation of a 4-pod DC environment and an execution pod requires at least 24 vCPU cores. Newly created AWS accounts often have the vCPU limit set to a low number, such as 5 vCPUs per region. Check your account's current vCPU limit for On-Demand Standard instances by visiting the AWS Service Quotas page. The applied quota value is the current vCPU limit in the specific region.
Make sure the current region limit is large enough to deploy a new cluster. The limit can be increased with the Request increase at account-level button: choose a region, set the quota value to the number of vCPU cores required for the installation, and press the Request button. The recommended limit is 30.
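If you prefer the command line, the same check can be done with the AWS CLI. This is an optional sketch; it assumes L-1216C47A is still the quota code for Running On-Demand Standard instances, so verify the code in the Service Quotas console before relying on it.

```bash
# Show the applied On-Demand Standard instances vCPU quota for a region
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --region us-east-2 \
  --query "Quota.Value"
```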
The process below describes how to install Crowd DC with an enterprise-scale dataset included. This configuration was created specifically for performance testing during the DC app review process.
Create Access keys for AWS CLI:
Example Option 1 with Admin user:
- Create a user with the AdministratorAccess policy attached.
- Use its Access key and Secret access key in the aws_envs file.

Example Option 2 with granular Policies:
Go to AWS Console -> IAM service -> Policies
Create policy1 with json content of the policy1 file
Important: change all occurrences of 123456789012 to your real AWS Account ID.
Create policy2 with json content of the policy2 file
Important: change all occurrences of 123456789012 to your real AWS Account ID.
Go to Users -> Create user -> Attach policies directly -> Attach policy1 and policy2 -> Click on Create user button
Open newly created user -> Security credentials tab -> Access keys -> Create access key -> Command Line Interface (CLI) -> Create access key
Use Access key and Secret access key in aws_envs file
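For reference, aws_envs is a plain environment file read by docker --env-file; a minimal sketch with placeholder values (never commit real credentials):

```
# aws_envs - placeholder values, replace with your own keys
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# AWS_SESSION_TOKEN=...    # only for temporary credentials
```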
Clone Data Center App Performance Toolkit locally.
For annual review, always get the latest version of the DCAPT code from the master branch.
DCAPT supported versions: three latest minor version releases.
Navigate to dc-app-performance-toolkit/app/util/k8s folder.
Set the AWS access keys created in step 1 in the aws_envs file:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (only for temporary creds)

Set the required variables in the dcapt.tfvars file:
environment_name - any name for your environment, e.g. dcapt-crowd
products - crowd
crowd_license - a valid Crowd license as a one-liner, without spaces or new line symbols
region - do not change the default region (us-east-2). If a specific region is required, contact support.
A new trial license can be generated on my.atlassian.com.
Use the BX02-9YO1-IN86-LO5G Server ID for generation.
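Taken together, a minimal sketch of the relevant dcapt.tfvars entries could look like this (values are placeholders; products is assumed to be a list, as in the default file):

```hcl
# dcapt.tfvars - illustrative values only
environment_name = "dcapt-crowd"
products         = ["crowd"]
crowd_license    = "AAAB...one-line-license..."   # placeholder; paste your real license as a single line
region           = "us-east-2"                    # keep the default unless support advises otherwise
```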
From a local terminal (Git Bash for Windows users), start the installation (~40 min):
```bash
docker run --pull=always --env-file aws_envs \
  -v "/$PWD/dcapt.tfvars:/data-center-terraform/conf.tfvars" \
  -v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
  -v "/$PWD/logs:/data-center-terraform/logs" \
  -it atlassianlabs/terraform:2.9.10 ./install.sh -c conf.tfvars
```
Copy the product URL from the console output. The product URL should look like http://a1234-54321.us-east-2.elb.amazonaws.com/crowd.
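Optionally, you can verify that the deployed instance responds before proceeding; a quick sketch using the example URL above (replace it with your actual product URL):

```bash
# Expect an HTTP 200 or a redirect status line from the Crowd base URL
curl -sI "http://a1234-54321.us-east-2.elb.amazonaws.com/crowd" | head -n 1
```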
Data dimensions and values for an enterprise-scale dataset are listed and described in the following table.
| Data dimensions | Value for an enterprise-scale dataset |
|---|---|
| Users | ~100 000 |
| Groups | ~15 |
All the datasets use the standard admin/admin credentials.
You are responsible for the cost of the AWS services running during the reference deployment. For more information, go to aws.amazon.com/pricing.
To reduce costs, we recommend keeping your deployment up and running only during the performance runs.
Data Center App Performance Toolkit has its own set of default JMeter test actions for Crowd Data Center.
App-specific action - an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application and does not need to cover all possible functionality of your app. For example, the application setup screen or other one-time use cases are out of scope of performance testing.
JMeter app-specific actions development
Set up the local environment for the toolkit using the README.
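If you need a reminder of what that typically involves, here is a minimal sketch of a local toolkit environment; the exact steps (Python version, virtualenv tooling) are defined by the README, and the requirements.txt path is assumed to sit at the repository root:

```bash
# Sketch only - follow the toolkit README for the authoritative setup steps
cd dc-app-performance-toolkit
python3 -m venv venv                 # create an isolated Python environment (assumed venv workflow)
source venv/bin/activate
pip install -r requirements.txt      # install toolkit dependencies (path assumed from repo root)
```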
Check that the crowd.yml file has correct settings for application_hostname, application_protocol, application_port, application_postfix, etc.
Navigate to the dc-app-performance-toolkit/app folder and follow the start JMeter UI README:
```bash
python util/jmeter/start_jmeter_ui.py --app crowd
```
Open the Crowd thread group and add a new transaction controller.
Open the newly added transaction controller and add new HTTP requests (based on your app use cases) into it.
Run the toolkit locally from the dc-app-performance-toolkit/app folder with the command:

```bash
bzt crowd.yml
```
Make sure that execution is successful.
The default TerraForm deployment configuration already has a dedicated execution environment pod to run tests from. For more details, see the Execution Environment Settings section in the dcapt.tfvars file.
Check the crowd.yml configuration file. If load configuration settings were changed for dev runs, make sure the parameters were changed back to the defaults:
```yaml
application_hostname: test_crowd_instance.atlassian.com   # Crowd DC hostname without protocol and port e.g. test-crowd.atlassian.com or localhost
application_protocol: http        # http or https
application_port: 80              # 80, 443, 8080, 4990, etc
secure: True                      # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
application_postfix: /crowd       # Default postfix value for TerraForm deployment url like `http://a1234-54321.us-east-2.elb.amazonaws.com/crowd`
admin_login: admin
admin_password: admin
application_name: crowd
application_password: 1111
load_executor: jmeter
concurrency: 1000                 # number of concurrent threads to authenticate random users
test_duration: 45m
```
You'll need to run the toolkit for each test scenario in the next section.
Using the Data Center App Performance Toolkit for performance and scale testing of your Data Center app involves two test scenarios: performance regression and scalability testing.
Each scenario will involve multiple test runs. The following subsections explain both in greater detail.
This scenario helps to identify basic performance issues without a need to spin up a multi-node Crowd DC. Make sure the app does not have any performance impact when it is not exercised.
To receive performance baseline results without an app installed and without app-specific actions (use code from master branch):
Before run:
- Make sure crowd.yml and the toolkit code base have the default configuration from the master branch. No app-specific actions code applied.
- Check the correctness of application_hostname, application_protocol, application_port and application_postfix in the .yml file.
- Check that the AWS credentials are set in the ./dc-app-performance-toolkit/app/util/k8s/aws_envs file:
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_SESSION_TOKEN (only for temporary creds)

Navigate to the dc-app-performance-toolkit folder and start the tests execution:
```bash
export ENVIRONMENT_NAME=your_environment_name
```
```bash
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.10 bash bzt_on_pod.sh crowd.yml
```
View the following main results of the run in the dc-app-performance-toolkit/app/results/crowd/YY-MM-DD-hh-mm-ss folder:
- results_summary.log: detailed run summary
- results.csv: aggregated .csv file with all actions and timings
- bzt.log: logs of the Taurus tool execution
- jmeter.*: logs of the JMeter tool execution

Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
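Optionally, a quick way to check the verdict without opening the file is to grep the summary; this is just a convenience sketch and assumes the summary prints status and success-rate lines:

```bash
# Print status / success-rate lines from the run summary (results folder name is a placeholder)
grep -iE "status|success" app/results/crowd/YY-MM-DD-hh-mm-ss/results_summary.log
```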
To receive performance results with an app installed (still use master branch):
Install the app you want to test.
Set up the app license.
Navigate to the dc-app-performance-toolkit folder and start the tests execution:
```bash
export ENVIRONMENT_NAME=your_environment_name
```
```bash
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.10 bash bzt_on_pod.sh crowd.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To generate a performance regression report:
Edit the ./app/reports_generation/performance_profile.yml file: insert the relative paths to the results directories of Run 1 (without the app) and Run 2 (with the app installed) into the corresponding run entries.
Navigate to the dc-app-performance-toolkit folder and run the following command from a local terminal (Git Bash for Windows users) to generate reports:
```bash
docker run --pull=always \
  -v "/$PWD:/dc-app-performance-toolkit" \
  --workdir="//dc-app-performance-toolkit/app/reports_generation" \
  --entrypoint="python" \
  -it atlassian/dcapt csv_chart_generator.py performance_profile.yml
```
In the ./app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file and the performance scenario summary report.
If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta.

The purpose of scalability testing is to reflect the impact on the customer experience when operating across multiple nodes. For this, you have to run scale testing on your app.
For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in Crowd DC deployment. To demonstrate performance impacts of operating your app at scale, we recommend testing your Crowd DC app in a cluster.
To receive scalability benchmark results for one-node Crowd DC with app-specific actions:
Before run:
- Make sure crowd.yml and the toolkit code base include your developed app-specific actions.
- Check the correctness of application_hostname, application_protocol, application_port and application_postfix in the .yml file.
- Check that the AWS credentials are set in the ./dc-app-performance-toolkit/app/util/k8s/aws_envs file:
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_SESSION_TOKEN (only for temporary creds)

Navigate to the dc-app-performance-toolkit folder and start the tests execution:
```bash
export ENVIRONMENT_NAME=your_environment_name
```
```bash
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.10 bash bzt_on_pod.sh crowd.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
Before scaling your DC, make sure the AWS vCPU limit is not lower than the required number. The minimum recommended value is 30.
Use the AWS Service Quotas service to see the current limit for the us-east-2 region.
The EC2 CPU Limit section has instructions on how to increase the limit if needed.
To receive scalability benchmark results for two-node Crowd DC with app-specific actions:
Navigate to the dc-app-performance-toolkit/app/util/k8s folder.
Open the dcapt.tfvars file and set the crowd_replica_count value to 2.
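This is a one-line change; for example:

```hcl
# dcapt.tfvars - run Crowd on two nodes
crowd_replica_count = 2
```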
From a local terminal (Git Bash for Windows users), start scaling (~20 min):
```bash
docker run --pull=always --env-file aws_envs \
  -v "/$PWD/dcapt.tfvars:/data-center-terraform/conf.tfvars" \
  -v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
  -v "/$PWD/logs:/data-center-terraform/logs" \
  -it atlassianlabs/terraform:2.9.10 ./install.sh -c conf.tfvars
```
Edit the run parameters for the 2-node run. To do it, leave only the 2 nodes scenario parameters uncommented in the crowd.yml file.
```yaml
# 1 node scenario parameters
# ramp-up: 20s                    # time to spin all concurrent threads
# total_actions_per_hour: 180000  # number of total JMeter actions per hour

# 2 nodes scenario parameters
ramp-up: 10s                      # time to spin all concurrent threads
total_actions_per_hour: 360000    # number of total JMeter actions per hour

# 4 nodes scenario parameters
# ramp-up: 5s                     # time to spin all concurrent threads
# total_actions_per_hour: 720000  # number of total JMeter actions per hour
```
Navigate to the dc-app-performance-toolkit folder and start the tests execution:
```bash
export ENVIRONMENT_NAME=your_environment_name
```
```bash
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.10 bash bzt_on_pod.sh crowd.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
Before scaling your DC, make sure the AWS vCPU limit is not lower than the required number. The minimum recommended value is 30.
Use the AWS Service Quotas service to see the current limit for the us-east-2 region.
The EC2 CPU Limit section has instructions on how to increase the limit if needed.
To receive scalability benchmark results for four-node Crowd DC with app-specific actions:
Scale your Crowd Data Center deployment to 4 nodes as described in Run 4.
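As in Run 4, this is a one-line change in dcapt.tfvars followed by re-running the same install command:

```hcl
# dcapt.tfvars - run Crowd on four nodes, then re-run the install command from Run 4
crowd_replica_count = 4
```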
Edit the run parameters for the 4-node run. To do it, leave only the 4 nodes scenario parameters uncommented in the crowd.yml file.
```yaml
# 1 node scenario parameters
# ramp-up: 20s                    # time to spin all concurrent threads
# total_actions_per_hour: 180000  # number of total JMeter actions per hour

# 2 nodes scenario parameters
# ramp-up: 10s                    # time to spin all concurrent threads
# total_actions_per_hour: 360000  # number of total JMeter actions per hour

# 4 nodes scenario parameters
ramp-up: 5s                       # time to spin all concurrent threads
total_actions_per_hour: 720000    # number of total JMeter actions per hour
```
Navigate to the dc-app-performance-toolkit folder and start the tests execution:
```bash
export ENVIRONMENT_NAME=your_environment_name
```
```bash
docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.10 bash bzt_on_pod.sh crowd.yml
```
Review the results_summary.log file under the artifacts dir location. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
To generate a scalability report:
Edit the ./app/reports_generation/scale_profile.yml file:
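For orientation, here is a minimal sketch of the run entries this file is assumed to contain (only the runName and relativePath keys referenced below are shown; leave any other settings in the real file as they are):

```yaml
# Illustrative structure only - replace the placeholder paths with your real results directories
runs:
  - runName: "1 Node"
    relativePath: "app/results/crowd/YY-MM-DD-hh-mm-ss"   # Run 3 results
  - runName: "2 Nodes"
    relativePath: "app/results/crowd/YY-MM-DD-hh-mm-ss"   # Run 4 results
  - runName: "4 Nodes"
    relativePath: "app/results/crowd/YY-MM-DD-hh-mm-ss"   # Run 5 results
```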
- For runName: "1 Node", in the relativePath key, insert the relative path to the results directory of Run 3.
- For runName: "2 Nodes", in the relativePath key, insert the relative path to the results directory of Run 4.
- For runName: "4 Nodes", in the relativePath key, insert the relative path to the results directory of Run 5.

Navigate to the dc-app-performance-toolkit folder and run the following command from a local terminal (Git Bash for Windows users) to generate reports:
```bash
docker run --pull=always \
  -v "/$PWD:/dc-app-performance-toolkit" \
  --workdir="//dc-app-performance-toolkit/app/reports_generation" \
  --entrypoint="python" \
  -it atlassian/dcapt csv_chart_generator.py scale_profile.yml
```
In the ./app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file and the performance scenario summary report.
If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta.

It is recommended to terminate the enterprise-scale environment after completing all tests. Follow the Terminate enterprise-scale environment instructions. In case of any problems with the uninstall, use the Force terminate command.
Do not forget to attach performance testing results to your ECOHELP ticket.
Attach profile.csv, profile.png, profile_summary.log and the profile run result archives. Archives should contain all raw data created during the run: bzt.log, selenium/jmeter/locust logs, .csv and .yml files, etc.

If the installation script fails on installing the Helm release or for any other reason, collect the logs, zip them and share them in the community Slack #data-center-app-performance-toolkit channel. For instructions on how to collect detailed logs, see Collect detailed k8s logs. For a failed cluster uninstall, use the Force terminate command.
In case of any technical questions or issues with DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.