This guide describes how app developers can comply with user privacy requirements, as detailed by the General Data Protection Regulation (GDPR). On this page, you'll find information on your responsibilities as an app developer for Atlassian and instructions on how to meet these responsibilities.
In addition to this guide, you should read Data privacy guidelines for general guidelines on user privacy and Marketplace apps.
The GDPR governs the processing of personal data of individuals by an individual, company, or organization. As an app developer, you must ensure that your apps comply with the GDPR when handling users' personal data, including how that data is reported and stored.
In order to comply with these requirements, we recommend that your apps do not store any user personal data and always retrieve current user data at the time of use using Atlassian APIs. This is the simplest and most reliable solution, as you don't need to worry about managing and reporting user personal data for your apps. If you choose this approach, you don't need to read the rest of this guide.
However, if you choose to store user personal data with your apps, Atlassian has built capabilities to help you comply with the GDPR.
Read the following sections on reporting data and storing data to learn how to use these capabilities.
As an app developer, you are required to periodically report the user personal data that your apps are storing. You must report each accountId, which is a shorthand reference to an Atlassian Account ID. An accountId uniquely identifies a user across all Atlassian products. An accountId is 1-128 characters long and may contain alphanumeric characters, as well as - and : characters. You must use accountIds to report personal data usage, even if the API permits other identifiers.
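As an illustration only, a basic check of this accountId format might look like the following sketch. The pattern and function name are arbitrary and not part of any Atlassian API.

Example (TypeScript):

// Illustrative sanity check for the accountId format described above:
// 1-128 characters, alphanumeric plus "-" and ":".
const ACCOUNT_ID_PATTERN = /^[A-Za-z0-9:-]{1,128}$/;

function isValidAccountId(accountId: string): boolean {
  return ACCOUNT_ID_PATTERN.test(accountId);
}

isValidAccountId('5be24ad8b1653240376955d2'); // true
isValidAccountId('not a valid id!');          // false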
At a high level, reporting works as follows: your app periodically sends the accountIds it stores to the personal data reporting API, then erases or refreshes the data for any accounts flagged in the response, respecting the cycle period described below.
The cycle period defines the required period of time between sending reports for a given accountId. You can think of it as the maximum allowable staleness of the reported data that is stored by Atlassian. By default, the cycle period is 7 days. However, the polling resource may return a different cycle period (in the Cycle-Period header) that you must follow instead. You should not send reports more frequently than the cycle period for each accountId.
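For example, an app might record when each accountId was last reported and honor any Cycle-Period header it receives. The sketch below is illustrative only: the helper names are our own, and the assumption that the header value is expressed in days is not guaranteed by the API.

Example (TypeScript):

// App-side bookkeeping for the reporting cadence; not part of the Atlassian API.
const DEFAULT_CYCLE_PERIOD_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

const lastReportedAt = new Map<string, number>(); // accountId -> epoch millis
let cyclePeriodMs = DEFAULT_CYCLE_PERIOD_MS;

// Return only the accountIds whose last report is older than the cycle period.
function accountsDueForReporting(accountIds: string[], now = Date.now()): string[] {
  return accountIds.filter((id) => {
    const last = lastReportedAt.get(id);
    return last === undefined || now - last >= cyclePeriodMs;
  });
}

// After each report, record the time and honor a Cycle-Period header if present
// (assumed here to be expressed in days).
function recordReport(accountIds: string[], cyclePeriodHeader?: string | null): void {
  const now = Date.now();
  accountIds.forEach((id) => lastReportedAt.set(id, now));
  if (cyclePeriodHeader) {
    cyclePeriodMs = Number(cyclePeriodHeader) * 24 * 60 * 60 * 1000;
  }
}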
When setting up reporting for your apps, also consider the following recommendations:
In addition to reporting the user personal data that your apps store, you must also ensure that you are storing it correctly.
The following accountIds should be used by partners for testing:
5be24ad8b1653240376955d2
5be24ba3f91c106033269289
There is no fixed accountId that can be used to test for the updated case.
Atlassian will be monitoring correct usage of the API detailed in this guide. For example, our systems will detect apps repeatedly checking the status of a closed account beyond a reasonable time frame. For this reason, repeated or regression testing of the closed account should only be done using the closed test account provided above, since this accountId has been excluded from our anomaly detection logic.
The personal data reporting API is a RESTful API that allows apps to report the user accounts for which they are storing personal data. For flexibility and efficiency, the API allows multiple accounts to be reported on in a single request.
POST https://api.atlassian.com/app/report-accounts/
This endpoint is used by apps to report a list of user accounts, and returns information on whether the personal data for each account needs to be updated or erased.
Add the scope report:personal-data to your app manifest to access the personal data reporting API. Learn more about permissions.
This operation has no parameters.
Content type: application/json
Each request allows up to 90 accounts to be reported on. For each account, the accountId and the time that the personal data was retrieved must be provided. The time format is defined by RFC 3339, section 5.6.
Example request (application/json):
1 2{ "accounts": [{ "accountId": "account-id-a", "updatedAt": "2018-10-25T23:08:51.382Z" }, { "accountId": "account-id-b", "updatedAt": "2018-10-25T23:14:44.231Z" }, { "accountId": "account-id-c", "updatedAt": "2018-12-01T02:44:21.020Z" }] }
200 (application/json): The request was successful and one or more personal data erasure or update actions are required. The information is contained in an accounts array, where each entry contains the accountId and a status of either closed or updated. In the case of the latter, the app is permitted to request personal data again.
Example response (application/json):
1 2{ "accounts": [{ "accountId": "account-id-a", "status": "closed" }, { "accountId": "account-id-c", "status": "updated" }] }
204: The request was successful and the app has no action to take for the accounts sent in the request.
400: The request was malformed in some way. The response body contains an error message.
Example response (application/json):
1 2{ "errorType": "string", "errorMessage": "string" }
429: Rate limiting applies. Delay by the time period specified in the Retry-After header (in seconds) before making the API call again.
500: An internal server error occurred. The response body contains an error message.
Example response (application/json):
1 2{ "errorType": "string", "errorMessage": "string" }
503: The service is unavailable.
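Tying the response codes together, the following sketch shows one way an app might call the endpoint and branch on the documented statuses. It is not a reference implementation: the bearer token placeholder and the error-handling choices are assumptions, and how your app authenticates depends on its framework.

Example (TypeScript):

interface ReportedAccount {
  accountId: string;
  updatedAt: string; // RFC 3339 timestamp of when the personal data was retrieved
}

interface AccountAction {
  accountId: string;
  status: 'closed' | 'updated';
}

// Report up to 90 accounts and return the ones that need erasure or a refresh.
async function reportAccounts(
  accounts: ReportedAccount[],
  accessToken: string, // placeholder; use whatever auth applies to your app
): Promise<AccountAction[]> {
  const response = await fetch('https://api.atlassian.com/app/report-accounts/', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({ accounts }),
  });

  switch (response.status) {
    case 200: {
      const body = (await response.json()) as { accounts: AccountAction[] };
      return body.accounts; // accounts requiring erasure or a data refresh
    }
    case 204:
      return []; // no action required for these accounts
    case 429: {
      const retryAfter = response.headers.get('Retry-After');
      throw new Error(`Rate limited; retry after ${retryAfter} seconds`);
    }
    default: {
      const message = await response.text();
      throw new Error(`Reporting failed (${response.status}): ${message}`);
    }
  }
}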
The Forge modules and APIs available to you today can be combined to provide GDPR capabilities on top of the Forge platform. How these pieces are composed will likely depend on your app's situation with regard to infrastructure and where data is being stored.
We include a sample implementation here.
This targets the use case where personal data is only being stored in Forge storage or product entity properties.
We use Forge scheduled triggers and Forge storage.
We assume data is being stored in Forge storage with the following shape:
interface Account {
  references: string[];
  accountId: string;
  displayName: string;
  emailAddress: string;
  updatedAt?: string;
}
This data is stored with an account:${accountId} key format, allowing us to retrieve all accounts by finding all of the keys that start with the account: prefix.
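For example, assuming the standard Forge storage query API, all stored accounts could be retrieved with a sketch like the following, paging through results with the query cursor.

Example (TypeScript):

import { storage, startsWith } from '@forge/api';

// Account is the interface shown above.
async function getAllAccounts(): Promise<Account[]> {
  const accounts: Account[] = [];
  let cursor: string | undefined;

  do {
    let query = storage.query().where('key', startsWith('account:')).limit(20);
    if (cursor) {
      query = query.cursor(cursor);
    }
    const { results, nextCursor } = await query.getMany();
    results.forEach(({ value }) => accounts.push(value as Account));
    cursor = nextCursor;
  } while (cursor);

  return accounts;
}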
Forge includes the ability to schedule work at regular intervals using scheduled triggers. In the example above, we combine both a weekly and an hourly schedule to implement the flow.
The weekly schedule essentially performs an extract, transform, and load operation.
On a weekly cadence, the trigger fires and searches for all accounts present in Forge storage. It then processes the accounts using the polling API to determine which ones to delete and which to update. These accounts are then put into an account processing "queue" built on top of Forge's storage mechanism.
We provide a sample queue implementation here.
In the future, this should be replaced by a Forge solution to background processing, which is being tracked on the FRGE-242 ticket.
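Putting these pieces together, a weekly trigger handler might look roughly like the sketch below. The getAllAccounts and enqueue helpers are assumptions standing in for the storage query sketch above and a storage-backed queue, and authentication for the reporting API call is omitted.

Example (TypeScript):

import { fetch } from '@forge/api';

// Assumed helpers for this sketch: getAllAccounts is the storage query sketch
// above; enqueue is a storage-backed queue such as the sample linked above.
declare function getAllAccounts(): Promise<Account[]>;
declare function enqueue(action: { accountId: string; status: string }): Promise<void>;

// Weekly scheduled trigger handler: find stored accounts, poll the reporting
// API, and queue any required erasure or update work.
export async function weeklyReportHandler(): Promise<void> {
  const accounts = await getAllAccounts();

  // The reporting API accepts up to 90 accounts per request.
  for (let i = 0; i < accounts.length; i += 90) {
    const batch = accounts.slice(i, i + 90).map((account) => ({
      accountId: account.accountId,
      updatedAt: account.updatedAt ?? '1970-01-01T00:00:00Z', // fallback if never recorded
    }));

    // Authentication is omitted here; add whatever your app requires.
    const response = await fetch('https://api.atlassian.com/app/report-accounts/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ accounts: batch }),
    });

    if (response.status === 200) {
      const { accounts: actions } = await response.json();
      // Each action is { accountId, status }, with status "closed" or "updated".
      for (const action of actions) {
        await enqueue(action);
      }
    }
    // A 204 means no action is required for this batch.
  }
}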
The hourly schedule is used to perform account update or deletion operations, based on the results of the weekly schedule.
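A matching hourly handler might look like the following sketch; dequeue, eraseAccountData, and refreshAccountData are placeholders for your app's own logic, not Forge or Atlassian APIs.

Example (TypeScript):

// Placeholder helpers assumed for this sketch.
declare function dequeue(): Promise<{ accountId: string; status: 'closed' | 'updated' } | undefined>;
declare function eraseAccountData(accountId: string): Promise<void>;
declare function refreshAccountData(accountId: string): Promise<void>;

// Hourly scheduled trigger handler: drain a bounded number of queued actions
// per run and apply them.
export async function hourlyProcessingHandler(): Promise<void> {
  for (let i = 0; i < 10; i++) {
    const action = await dequeue();
    if (!action) {
      return; // queue is empty
    }
    if (action.status === 'closed') {
      await eraseAccountData(action.accountId); // remove stored personal data
    } else {
      await refreshAccountData(action.accountId); // re-fetch current data from Atlassian APIs
    }
  }
}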
The implementation above combines the extraction of Atlassian accounts and their processing into one call. You may find this is infeasible for your app, either due to the particular data shape in storage or due to the nature of your app. These operations can be split apart - for example, by maintaining a list of all the accountIds you store in another storage entry that is processed by the weekly schedule against the polling API.
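Such a split might keep a single index entry listing every stored accountId, as in the sketch below; the account-index key name is an arbitrary choice for this illustration.

Example (TypeScript):

import { storage } from '@forge/api';

// Illustrative only: a single storage entry listing every accountId the app
// stores, so the weekly schedule can poll without scanning all data.
const INDEX_KEY = 'account-index';

async function addToIndex(accountId: string): Promise<void> {
  const index: string[] = (await storage.get(INDEX_KEY)) ?? [];
  if (!index.includes(accountId)) {
    index.push(accountId);
    await storage.set(INDEX_KEY, index);
  }
}

async function getIndexedAccountIds(): Promise<string[]> {
  return (await storage.get(INDEX_KEY)) ?? [];
}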
If you are storing or processing data outside of Atlassian, we assume you have external infrastructure available to help implement this flow (or can provision it as required).
While the examples above extract the list of accounts from Forge storage, the same sort of extraction could be performed on external data stores using tooling such as a cron job. This could be used to generate a list of accounts that is fetched by the weekly schedule and then processed within the Forge app.