App logs, which can be viewed in the developer console, help in tracking down and troubleshooting issues that app users may be experiencing. Forge app owners and app contributors can view app logs.
You can also use the App logs API to export app logs to several observability tools, including Splunk, Datadog, Dynatrace, New Relic, and more. Such tools offer advanced capabilities for analyzing and managing logs.
The App logs API is a REST API that provides logs in OTLP log data model format, which is the format used in the OpenTelemetry framework.
Exporting app logs involves the following steps: generating an API token to authenticate with the App logs API, fetching logs from the API, and setting up infrastructure that forwards those logs to your observability tool.
Check out this repository for example code and resources for configuring observability tools to consume Forge app logs.
To consume the API and export app logs to a tool of your choice, you must first authenticate with the Atlassian Gateway. To do this, generate API tokens for accessing the App logs API.
Only the owner of the Forge app, or an app contributor with access to logs, can make the request. We recommend using a non-human (bot) account that has access to the app logs, rather than an admin account.
To generate the API tokens:

1. Log in to https://id.atlassian.com/manage-profile/security/api-tokens.
2. Select Create API token.
3. Enter a label for the token and select Create.
4. Copy the token and store it securely; you won't be able to view it again.
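The App logs API uses HTTP Basic authentication with your Atlassian account email and the API token, as the full example further below shows. A minimal sketch of building the `Authorization` header in Node.js, with placeholder values:

```javascript
// Placeholder values; replace with your Atlassian account email and API token.
const email = "<email>";
const apiToken = "<api_token>";

// Basic auth: base64-encode "email:api_token" and prefix with "Basic ".
const authHeader = `Basic ${Buffer.from(`${email}:${apiToken}`).toString("base64")}`;
console.log(authHeader); // "Basic <base64 of email:api_token>"
```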
You can use the API spec below to try the App logs API with your Forge app. The API returns logs in OTLP format, which is the format used in the OpenTelemetry framework.
```
GET https://api.atlassian.com/v1/app/logs/${appId}?environmentId=${envId}&startDate=${startDate}&endDate=${endDate}&cursor=${cursor}
```
- `appId`: string, required. ID of the Forge app.
To get the app ID, check the `app.id` field in your app's `manifest.yml` file (the app ID is the UUID at the end of the ARI), or open your app in the developer console.
- `environmentId`: string, required. Environment ID of the Forge app.
- `startDate`: string (ISO format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'), required. Start date and time for the logs in UTC.
- `endDate`: string (ISO format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'), required. End date and time for the logs in UTC.
- `level`: string, optional. The log level (TRACE, DEBUG, INFO, WARN, ERROR, FATAL).
- `cursor`: string, optional. The marker retrieved from the previous request, used to fetch the next set of logs.
To get the environment ID, open your app in the developer console and check the details of the relevant environment.
- `200 OK`: Successful response. Returns paginated logs.
- `400 Bad Request`: Request failed with status code 400.
- `401 Unauthorized`: Unauthorized.
- `404 Not Found`: Request failed with status code 404.
- `429 Too many requests`: Request has been rate limited.
- `500 Internal Server Error`: Request failed with status code 500.
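Because requests can be rate limited (`429 Too many requests`), a polling client should back off and retry. A minimal sketch, assuming a Node.js 18+ runtime with global `fetch` (the wrapper name is illustrative, not part of the API):

```javascript
// Retry a fetch call with exponential backoff when the API returns 429.
const fetchWithBackoff = async (url, options, retries = 3) => {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;
    // Honor a Retry-After header if present; otherwise back off exponentially.
    const waitSeconds = Number(response.headers.get("Retry-After")) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error("Rate limited: retries exhausted");
};
```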
Note the following limits when calling the API:

- The maximum time range between `startDate` and `endDate` is 1 hour. This means the `endDate` must not exceed 1 hour after the `startDate`.
- `startDate` and `endDate` must be within the last 14 days from the current date and time. This means any date-time specified that is more than 14 days in the past will not be accepted.
- Rate limiting per `appId` is enforced.

```javascript
// Please replace `email`, `appId`, `envId` and `<api_token>` with your actual values.
// This code will fetch data for the last 5 minutes and
// if the response contains a cursor, it will do a subsequent fetch with the new cursor.
// This will continue until no more cursors are returned.

// Define necessary variables
const email = "<email>";
const appId = "<appId>";
const envId = "<envId>";
const api_token = "<api_token>";

// Get current date/time and derive a 5-minute window
const now = new Date();
const endDate = new Date(now.getTime() - 1 * 60000); // 1 minute ago
const startDate = new Date(now.getTime() - 6 * 60000); // 6 minutes ago

let cursor = null;

// Function to fetch logs
const fetchLogs = async (startDate, endDate, cursor) => {
  try {
    const url =
      `https://api.atlassian.com/v1/app/logs/${appId}` +
      `?environmentId=${envId}` +
      `&startDate=${startDate.toISOString()}` +
      `&endDate=${endDate.toISOString()}` +
      `&level=INFO&level=ERROR` +
      `${cursor ? `&cursor=${cursor}` : ""}`;

    const response = await fetch(url, {
      method: "GET",
      headers: {
        Authorization: `Basic ${Buffer.from(`${email}:${api_token}`).toString("base64")}`,
        Accept: "application/json",
      },
    });

    console.log(`Response: ${response.status} ${response.statusText}`);
    const data = await response.json();

    // Export your logs to the external monitoring tool
    console.log(data);

    // If data.cursor exists, fetch the next page
    if (data.cursor) {
      await fetchLogs(startDate, endDate, data.cursor);
    }
  } catch (err) {
    console.error(err);
  }
};

// Call fetchLogs
fetchLogs(startDate, endDate, cursor);
```
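Because of the 1-hour window limit described above, fetching a longer period means splitting it into chunks of at most 1 hour. A minimal sketch that reuses the `fetchLogs` function from the example above:

```javascript
// Fetch logs for an arbitrary period by walking it in 1-hour chunks.
// Assumes the fetchLogs(startDate, endDate, cursor) function defined above.
const ONE_HOUR_MS = 60 * 60 * 1000;

const fetchRange = async (startDate, endDate) => {
  for (let from = startDate.getTime(); from < endDate.getTime(); from += ONE_HOUR_MS) {
    // Each chunk ends at most 1 hour after it starts, clamped to the overall endDate.
    const to = new Date(Math.min(from + ONE_HOUR_MS, endDate.getTime()));
    await fetchLogs(new Date(from), to, null);
  }
};

// Example: fetch the last 6 hours (the range must be within the last 14 days).
fetchRange(new Date(Date.now() - 6 * ONE_HOUR_MS), new Date());
```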
1 2{ "appLogs": [ { "timeUnixNano": "1707821444939000000", "severityNumber": 30, "severityText": "INFO", "body": { "stringValue": "This is simple log message" }, "traceId": "3e1c350520934cbeb20b7d54d56bee2c", "spanId": "6c7f6ad7436700c1", "attributes": [ { "key": "appId", "value": { "stringValue": "yibeb59-d217-58d3-a3a7-0a888b3bc5ef" } }, { "key": "environmentId", "value": { "stringValue": "0129990-850f-1a19-a013-12cdefe2fa19" } }, { "key": "invocationId", "value": { "stringValue": "e1f88a1e-1b59-1511-adfb-e080972d5d89" } }, { "key": "installationContext", "value": { "stringValue": "ari:cloud:confluence::site/089a1455-4ea0-122a-b70c-5b17360f047d" } }, { "key": "appVersion", "value": { "stringValue": "1.206.0" } }, { "key": "functionKey", "value": { "stringValue": "updateStatusTitle" } }, { "key": "moduleType", "value": { "stringValue": "core:function" } }, { "key": "arguments", "value": { "stringValue": "[{\"randomData\":0.6341547823420093}]" } } ] } ], "cursor": "someString" }
To use the App logs API and ingest logs into observability tools, we recommend fetching logs in OTLP format from the API and running the following components in your infrastructure: a CronJob service that periodically polls the API, and an OTEL Collector running as a sidecar that forwards logs to your tool.

The CronJob service periodically polls the exposed REST endpoint for the required logs. The API returns logs in OTLP format as a response. Logs are then pushed as-is to the OTEL sidecar, which runs alongside this cron service.
When setting up the service, you can use either a serverless framework or a server framework.
If using Amazon Web Services (AWS) infrastructure, you can configure a Lambda to be executed every "x" minutes or so. You can also use a similar configuration for Google Cloud Platform (GCP), Microsoft Azure, or any other cloud provider.
A sample Lambda configuration should look like the following:
```yaml
Resources:
  # IAM Role for Lambda execution
  MyLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: S3AccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: arn:aws:s3:::my-s3-bucket/*
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*

  # Lambda Function
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: MyLambdaFunction
      Runtime: nodejs16.x
      Handler: index.handler
      Role: !GetAtt MyLambdaExecutionRole.Arn
      Code:
        S3Bucket: my-s3-bucket
        S3Key: my-function-package.zip
      Layers:
        - !Ref OTelLambdaLayer
      Timeout: 60 # Timeout set to 1 minute
      Environment:
        Variables:
          OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/otel-collector-config.yaml

  # Event Rule for Lambda Invocation
  MyLambdaInvocationRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: Invoke Lambda every 5 minutes
      ScheduleExpression: "rate(5 minutes)"
      State: ENABLED
      Targets:
        - Arn: !GetAtt MyLambdaFunction.Arn
          Id: MyLambdaInvoke

  # Permissions for Lambda Invocation
  PermissionForEventsToInvokeLambda:
    Type: "AWS::Lambda::Permission"
    Properties:
      FunctionName: !GetAtt MyLambdaFunction.Arn
      Action: "lambda:InvokeFunction"
      Principal: events.amazonaws.com
      SourceArn: !GetAtt MyLambdaInvocationRule.Arn
```
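The function code itself (`my-function-package.zip` in the template) is not shown above. A minimal handler sketch might look like the following; this is illustrative only, assumes a runtime with global `fetch` (Node.js 18+, unlike the `nodejs16.x` shown in the sample), and reads credentials from hypothetical environment variables:

```javascript
// index.js — illustrative Lambda handler that fetches the last 5 minutes of logs.
// EMAIL, API_TOKEN, APP_ID, and ENV_ID are assumed to be Lambda environment variables.
exports.handler = async () => {
  const endDate = new Date(Date.now() - 60_000);           // 1 minute ago
  const startDate = new Date(endDate.getTime() - 300_000); // 5-minute window

  const url =
    `https://api.atlassian.com/v1/app/logs/${process.env.APP_ID}` +
    `?environmentId=${process.env.ENV_ID}` +
    `&startDate=${startDate.toISOString()}&endDate=${endDate.toISOString()}`;

  const response = await fetch(url, {
    headers: {
      Authorization: `Basic ${Buffer.from(
        `${process.env.EMAIL}:${process.env.API_TOKEN}`
      ).toString("base64")}`,
      Accept: "application/json",
    },
  });

  const data = await response.json();
  // Forward data.appLogs to the OTEL collector (see the sidecar POST example later).
  return { statusCode: response.status, count: (data.appLogs ?? []).length };
};
```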
If using AWS infrastructure, you can set up a dedicated EC2 instance running a server that polls the REST API every "x" minutes or so. This can be a virtual machine (VM) if you run an on-premises data center.
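On the server path, the poller can be a simple scheduled loop. A minimal sketch that reuses the `fetchLogs` function from earlier (the 5-minute interval is illustrative):

```javascript
// Poll the App logs API on a fixed interval, keeping the window aligned
// with the schedule so consecutive polls don't leave gaps or overlap.
const POLL_INTERVAL_MS = 5 * 60 * 1000;

setInterval(() => {
  const endDate = new Date(Date.now() - 60_000);                    // 1 minute ago
  const startDate = new Date(endDate.getTime() - POLL_INTERVAL_MS); // previous window
  fetchLogs(startDate, endDate, null).catch(console.error);         // fetchLogs as defined earlier
}, POLL_INTERVAL_MS);
```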
Next, run an OTEL Collector/sidecar using a configuration made up of three components: receivers, exporters, and service pipelines.
If using AWS infrastructure, you can leverage the OTEL Lambda layer. You can also use a similar configuration for GCP or Microsoft Azure infrastructure.
A sample configuration should look like the following:
```yaml
Resources:
  # IAM Role for Lambda execution
  MyLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: S3AccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: arn:aws:s3:::my-s3-bucket/*
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*

  # Lambda Layer
  OTelLambdaLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: OTelLambdaLayer
      Description: My OTEL Lambda layer
      Content:
        S3Bucket: my-s3-bucket
        S3Key: my-layer-package.zip
      CompatibleRuntimes:
        - nodejs16.x

  # Lambda Function
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: MyLambdaFunction
      Runtime: nodejs16.x
      Handler: index.handler
      Role: !GetAtt MyLambdaExecutionRole.Arn
      Code:
        S3Bucket: my-s3-bucket
        S3Key: my-function-package.zip
      Layers:
        - !Ref OTelLambdaLayer
      Timeout: 60 # Timeout set to 1 minute
      Environment:
        Variables:
          OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/otel-collector-config.yaml

  # Event Rule for Lambda Invocation
  MyLambdaInvocationRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: Invoke Lambda every 5 minutes
      ScheduleExpression: "rate(5 minutes)"
      State: ENABLED
      Targets:
        - Arn: !GetAtt MyLambdaFunction.Arn
          Id: MyLambdaInvoke

  # Permissions for Lambda Invocation
  PermissionForEventsToInvokeLambda:
    Type: "AWS::Lambda::Permission"
    Properties:
      FunctionName: !GetAtt MyLambdaFunction.Arn
      Action: "lambda:InvokeFunction"
      Principal: events.amazonaws.com
      SourceArn: !GetAtt MyLambdaInvocationRule.Arn
```
We recommend running the OTEL Collector as a sidecar Docker container on the same VM/EC2 server responsible for cron scheduling.
To set up a server framework:
Create a sample otel-collector-config.yaml file in the repository as needed. The config file should look similar to this (we're using Datadog as an example third-party monitoring tool here):
```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  datadog:
    api:
      key: "<API key>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [datadog]
```
Create a Docker image based on the open source OTEL Collector image, and build it using: `docker build . -t otel-sidecar:v1`
```dockerfile
FROM otel/opentelemetry-collector-contrib:latest

# Copy the collector configuration file into the container
COPY otel-collector-config.yaml /etc/otel-collector-config.yaml

# Start the collector with the specified configuration file
CMD ["--config=/etc/otel-collector-config.yaml"]
```
Run the above Docker image: `docker run -p 4318:4318 otel-sidecar:v1`

This will spin up the OTEL sidecar at `http://localhost:4318`.
Make an HTTP POST request with the response of the REST API to the sidecar running at `http://localhost:4318/v1/logs` on the same server.
You need to create a JSON object using the format below. Add the array of `appLogs` to the `logRecords` field, as shown in this example:
```javascript
const logs = {
  resourceLogs: [
    {
      resource: {
        attributes: [
          {
            key: "service.name",
            value: { stringValue: "my.service" },
          },
        ],
      },
      scopeLogs: [
        {
          scope: {
            name: "my.library",
            version: "1.0.0",
            attributes: [
              {
                key: "my.scope.attribute",
                value: { stringValue: "some scope attribute" },
              },
            ],
          },
          logRecords: "<Place appLogs received from REST API Call>",
        },
      ],
    },
  ],
};
```
After creating the JSON object, you can use an HTTP POST call to send the logs to your tool of choice.
```bash
curl --location --request POST 'localhost:4318/v1/logs' \
--header 'Content-Type: application/json' \
--data '<logs>'
```
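Alternatively, you can assemble the envelope and post it programmatically. A minimal sketch (the helper names are illustrative):

```javascript
// Wrap the App logs API response in the OTLP envelope shown above.
const buildOtlpPayload = (apiResponse) => ({
  resourceLogs: [
    {
      resource: {
        attributes: [{ key: "service.name", value: { stringValue: "my.service" } }],
      },
      scopeLogs: [
        {
          scope: { name: "my.library", version: "1.0.0" },
          logRecords: apiResponse.appLogs, // the array returned by the App logs API
        },
      ],
    },
  ],
});

// POST the payload to the OTEL sidecar's HTTP logs endpoint.
const postToSidecar = (apiResponse) =>
  fetch("http://localhost:4318/v1/logs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOtlpPayload(apiResponse)),
  });
```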
App logs should now be visible in your configured monitoring tool.