We're renaming the Export metrics API to App metrics API by 19 December 2023. See the Forge changelog for more details.
App metrics, which can be viewed in the developer console, show you how your Forge app is currently performing across all sites.
You can also use our Export metrics API to export these app metrics to several monitoring tools, including SignalFx and Datadog. These tools offer capabilities such as grouping and filtering metrics by different attributes and integrating with incident response tools.
The Export metrics API is an Atlassian GraphQL metrics API that provides metrics in the OTLP protobuf JSON format, which is the format used in the OpenTelemetry framework.
The following app metrics can be exported to monitoring tools via the Export metrics API:
Exporting app metrics involves the following steps:
Check out this repository for example code and resources for configuring monitoring tools to consume Forge app metrics.
To authenticate, follow the instructions in Authenticate with the Atlassian GraphQL Gateway.
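For illustration, the `Authorization` value in the sample headers further below is a Base64-encoded `email:token` pair. Here is a minimal TypeScript sketch of building those headers; the `ATLASSIAN_EMAIL` and `ATLASSIAN_API_TOKEN` environment variable names are placeholders, not official names:

```typescript
// Sketch: build the headers for calling the Atlassian GraphQL Gateway.
// ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN are placeholder variable names.
const email = process.env.ATLASSIAN_EMAIL ?? "";
const apiToken = process.env.ATLASSIAN_API_TOKEN ?? "";

// HTTP Basic authentication: Base64-encode "email:token".
const basicAuth = Buffer.from(`${email}:${apiToken}`).toString("base64");

export const authHeaders = {
  Authorization: `Basic ${basicAuth}`,
  "User-Agent": "ForgeMetricsExportServer/1.0.0",
  "X-ExperimentalApi": "ForgeMetricsQuery",
};
```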
You can use the sample queries below to try the Export metrics API against the GraphQL Gateway for your Forge app. Make sure to supply the corresponding properties in your queries.
We're renaming the Export metrics API to App metrics API by 19 December 2023. See the Forge changelog for more details. In the sample queries below, both `appMetrics` and `exportMetrics` will work until then; from 19 December 2023, only `appMetrics` will work.
The sample queries return metrics in the OTLP protobuf JSON format, which is the format used in the OpenTelemetry framework.
```graphql
query Ecosystem($appId: ID!, $query: ForgeMetricsOtlpQueryInput!) {
  ecosystem {
    forgeMetrics(appId: $appId) {
      appMetrics(query: $query) {
        ... on ForgeMetricsOtlpData {
          resourceMetrics
        }
        ... on QueryError {
          message
          identifier
          extensions {
            statusCode
            errorType
          }
        }
      }
    }
  }
}
```
1 2{ "appId": "ari:cloud:ecosystem::app/8ce114f4-d82c-45e2-b4fb-c6a0751d7d57", "query": { "filters": { "environments": ["8cb293d5-be08-47ae-a75c-95b89da5ad1d"], "interval": { "start": "2023-06-18T02:55:00.000Z", "end": "2023-06-18T02:57:00.000Z" }, "metrics": ["FORGE_API_REQUEST_COUNT", "FORGE_API_REQUEST_LATENCY", "FORGE_BACKEND_INVOCATION_LATENCY", "FORGE_BACKEND_INVOCATION_COUNT", "FORGE_BACKEND_INVOCATION_ERRORS"] } } }
1 2{ "Authorization": "Basic base64<email:token>", "User-Agent": "ForgeMetricsExportServer/1.0.0", "X-ExperimentalApi": "ForgeMetricsQuery" }
1 2{ "data": { "ecosystem": { "forgeMetrics": { "appMetrics": { "resourceMetrics": [ { "resource": {}, "schemaUrl": "https://opentelemetry.io/schemas/1.9.0", "scopeMetrics": [ { "metrics": [ { "name": "forge_api_request_count", "description": "", "sum": { "aggregationTemporality": 1, "dataPoints": [ { "asInt": 8, "attributes": [ { "key": "appId", "value": { "stringValue": "a11dfa0b-cf2c-44d1-9080-5c3944961223" } }, { "key": "contextAri", "value": { "stringValue": "ari:cloud:compass::site/04c5a385-0899-4edc-93a8-ada653b7c534" } }, { "key": "environmentId", "value": { "stringValue": "6f5f56e9-55c0-4551-9247-ee1484340f64" } }, { "key": "provider", "value": { "stringValue": "app" } }, { "key": "remote", "value": { "stringValue": "stargate" } }, { "key": "status", "value": { "stringValue": "2xx" } }, { "key": "url", "value": { "stringValue": "/forge/entities/graphql" } } ], "startTimeUnixNano": "1698720840000000000", "timeUnixNano": "1698720900000000000" } ] }, "unit": "s" }, { "name": "forge_backend_invocation_count", "description": "", "sum": { "aggregationTemporality": 1, "dataPoints": [ { "asInt": 70, "attributes": [ { "key": "appId", "value": { "stringValue": "8ce114f4-d82c-45e2-b4fb-c6a0751d7d57" } }, { "key": "appVersion", "value": { "stringValue": "4.64.0" } }, { "key": "contextAri", "value": { "stringValue": "ari:cloud:confluence::site/13095d29-407d-47ec-aa57-76764a470f36" } }, { "key": "environmentId", "value": { "stringValue": "8cb293d5-be08-47ae-a75c-95b89da5ad1d" } }, { "key": "functionKey", "value": { "stringValue": "updateStatusTitle" } } ], "startTimeUnixNano": "1687497375656000000", "timeUnixNano": "1687497375662000000" } ] }, "unit": "s" }, { "name": "forge_backend_invocation_errors", "description": "", "sum": { "aggregationTemporality": 1, "dataPoints": [ { "asInt": 0, "attributes": [ { "key": "appId", "value": { "stringValue": "8ce114f4-d82c-45e2-b4fb-c6a0751d7d57" } }, { "key": "appVersion", "value": { "stringValue": "5.1.0" } }, { "key": "contextAri", "value": { "stringValue": "ari:cloud:compass::site/6a9ea14f-759d-4f4a-b3ac-11395d8bf519" } }, { "key": "environmentId", "value": { "stringValue": "8cb293d5-be08-47ae-a75c-95b89da5ad1d" } }, { "key": "errorType", "value": { "stringValue": "UNHANDLED_EXCEPTION" } }, { "key": "functionKey", "value": { "stringValue": "process-app-event" } }, { "key": "moduleKey", "value": { "stringValue": "app-event-webtrigger" } } ], "startTimeUnixNano": "1687488960000000000", "timeUnixNano": "1687489020000000000" } ] }, "unit": "s" } ] } ] } ] } } } } }
Property | Type | Required | Description |
---|---|---|---|
appId | string | Yes | A unique identifier for your Forge app, which can be found in the app's manifest. |
filters | Filters | Yes | Filters to fetch metrics as required. See Filters. |
Property | Type | Required | Description |
---|---|---|---|
environments | Array<string> | Yes | A list of environment UUIDs for which metrics need to be fetched. |
interval | Interval | Yes | The time range for which metrics need to be fetched. See Interval. |
metrics | Array<enum> | Yes | A list of metric enums to be fetched, for example, FORGE_API_REQUEST_COUNT and FORGE_BACKEND_INVOCATION_COUNT (see the sample query variables above). |
Each API call retrieves at most 15 minutes of metrics, and you can query up to 14 days into the past. This limit keeps the number of data points in each API response manageable.
We recommend fetching data periodically, for example, every three or five minutes. A rate limit of five calls per minute per user is enforced.
Property | Type | Required | Description |
---|---|---|---|
start | string | Yes | Start time in ISO-8601 format. |
end | string | Yes | End time in ISO-8601 format. |
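Because each call can cover at most 15 minutes and data older than 14 days is unavailable, a polling service typically derives the interval from the current time. A minimal TypeScript sketch, assuming a three-minute polling window (the helper name and window size are illustrative):

```typescript
// Sketch: build a query interval for the most recent few minutes, staying within
// the 15-minutes-per-call and 14-day lookback limits described above.
const POLL_WINDOW_MINUTES = 3; // illustrative; must not exceed 15 minutes

export function buildInterval(now: Date = new Date()): { start: string; end: string } {
  const end = now;
  const start = new Date(end.getTime() - POLL_WINDOW_MINUTES * 60 * 1000);

  // Guard against querying outside the 14-day lookback window.
  const oldestAllowed = new Date(end.getTime() - 14 * 24 * 60 * 60 * 1000);
  if (start < oldestAllowed) {
    throw new Error("Requested interval is older than the 14-day lookback limit");
  }

  return { start: start.toISOString(), end: end.toISOString() };
}
```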
To consume the Atlassian GraphQL API and ingest metrics into the monitoring tool in real time, we recommend having the following components in your infrastructure:
The CronJob service periodically polls the exposed GraphQL endpoint for the required metrics. The AGG endpoint returns a response in the OTLP protobuf JSON standard format, which is then pushed as-is to the OTEL sidecar running alongside the cron service.
When setting up the service, you can use either a serverless framework or a server framework.
If using Amazon Web Services (AWS) infrastructure, you can configure Lambda to be executed every “x” minutes or so. You can also use a similar configuration for Google Cloud Platform (GCP) or Microsoft Azure infrastructure.
A sample Lambda configuration should look like the following:
```yaml
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: MyLambdaFunction
    Runtime: nodejs14.x
    Handler: index.handler
    Code:
      S3Bucket: my-function-bucket
      S3Key: my-function-package.zip
    Layers:
      - !Ref OTelLambdaLayer
    Environment:
      Variables:
        OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/config.yml
MyScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    Description: My scheduled rule
    ScheduleExpression: rate(3 minutes)
    State: ENABLED
    Targets:
      - Arn: !GetAtt MyLambdaFunction.Arn
        Id: MyLambdaTarget
```
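The template above points the scheduled function at `index.handler`, which is not defined in this documentation. As a rough illustration only, a handler that polls the gateway and pushes the OTLP payload to the collector could look like the TypeScript sketch below; the `fetchAppMetrics` and `buildInterval` helpers are the hypothetical ones sketched earlier, the environment variable names are placeholders, and a runtime with a global `fetch` (Node.js 18+, unlike the `nodejs14.x` shown above) is assumed.

```typescript
// Sketch of an index.handler that polls the Export metrics API and forwards the
// OTLP protobuf JSON payload to the OTEL collector's HTTP receiver.
// fetchAppMetrics, buildInterval, and the environment variables are illustrative.
import { fetchAppMetrics, buildInterval } from "./metrics";

const OTEL_COLLECTOR_URL = "http://localhost:4318/v1/metrics"; // default OTLP HTTP port

export const handler = async (): Promise<void> => {
  const appId = process.env.FORGE_APP_ID ?? "";
  const environmentId = process.env.FORGE_ENVIRONMENT_ID ?? "";
  const basicAuth = process.env.ATLASSIAN_BASIC_AUTH ?? ""; // pre-encoded email:token

  const appMetrics = await fetchAppMetrics(
    appId,
    {
      filters: {
        environments: [environmentId],
        interval: buildInterval(),
        metrics: ["FORGE_BACKEND_INVOCATION_COUNT", "FORGE_BACKEND_INVOCATION_ERRORS"],
      },
    },
    basicAuth,
  );

  // Push the payload as-is to the collector running alongside the function.
  await fetch(OTEL_COLLECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(appMetrics),
  });
};
```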
If using AWS infrastructure, you can set up a dedicated EC2 resource running a server that polls the AGG API every “x” minutes or so. This can be a virtual machine (VM) if running an on-premises data center.
Next, run an OTEL Collector configured with three components: receivers, processors, and exporters (see the sample configuration below).
When setting up the service, you can use either a serverless framework or a server framework.
If using AWS infrastructure, you can leverage the OTEL Lambda layer. You can also use a similar configuration for GCP or Microsoft Azure infrastructure.
A sample configuration should look like the following:
```yaml
Resources:
  OTelLambdaLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: OTelLambdaLayer
      Description: My OTEL Lambda layer
      Content:
        S3Bucket: my-layer-bucket
        S3Key: my-layer-package.zip
      CompatibleRuntimes:
        - nodejs14.x
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: MyLambdaFunction
      Runtime: nodejs14.x
      Handler: index.handler
      Code:
        S3Bucket: my-function-bucket
        S3Key: my-function-package.zip
      Layers:
        - !Ref OTelLambdaLayer
      Environment:
        Variables:
          OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/config.yml
  MyScheduledRule:
    Type: AWS::Events::Rule
    Properties:
      Description: My scheduled rule
      ScheduleExpression: rate(3 minutes)
      State: ENABLED
      Targets:
        - Arn: !GetAtt MyLambdaFunction.Arn
          Id: MyLambdaTarget
```
We recommend you run the OTEL Collector as a sidecar Docker container on the same VM/EC2 server responsible for cron scheduling.
To set up a server framework:
Create a sample `otel-collector-config.yaml` file in the repository as needed. The config file should look similar to the following (we're using SignalFx as an example third-party monitoring tool here):
```yaml
receivers:
  otlp:
    protocols:
      http:
exporters:
  signalfx:
    # Access token to send data to SignalFx.
    access_token: <access_token>
    # SignalFx realm where the data will be received.
    realm: us1
    # Timeout for the send operations.
    timeout: 30s
processors:
  batch:
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signalfx]
```
Create a Docker image based on the open source OTEL Collector Docker image using: `docker build . -t otel-sidecar:v1`
```dockerfile
FROM otel/opentelemetry-collector-contrib:latest

# Copy the collector configuration file into the container
COPY otel-collector-config.yaml /etc/otel-collector-config.yaml

# Start the collector with the specified configuration file
CMD ["--config=/etc/otel-collector-config.yaml"]
```
Run the above Docker image: `docker run -p 4318:4318 otel-sidecar:v1`. This will spin up the OTEL sidecar at `http://localhost:4318`.
Make an HTTP POST request with the response of the above AGG API endpoint, for example, `response.data.ecosystem.forgeMetrics.appMetrics`, to the sidecar running at `http://localhost:4318/v1/metrics` on the same server.
```bash
curl --location --request POST 'localhost:4318/v1/metrics' \
  --header 'Content-Type: application/json' \
  --data-raw '<response.data.ecosystem.forgeMetrics.appMetrics>'
```
App metrics should now be visible in your configured monitoring tool.