A Building Blocks Exporter moves data out of your app and sends it to Atlassian, so that it can be forwarded to a compatible Importer. The primary use case for building a custom Exporter is Cloud to Cloud migrations, where you need to export data from one cloud site and import it into another.
The future vision is that, as long as you produce data in a compatible format (export contract), your Exporter will be compatible with any Importer that supports the same contract. We recommend following the DC Export Format wherever possible, as this will maximise compatibility — including with any Atlassian-provided Importer pre-defined behaviours in the future.
In the future, you will be able to declare that your exporter exists and provide its details in your Forge manifest. However, until such a feature is developed, the registration of Building Blocks importers and exporters is managed manually by Atlassian staff. If we have not already given you a contact in relation to your implementation of Building Blocks, please reach out via ecosystem partner support, who will be able to arrange for the App Migrations Platform team to support you.
The following information will be required:
Please be aware that in the future, you will be required to move the above information into your Forge Manifest.
When your app is registered as having a Forge Exporter, you will receive a new event,
avi:ecosystem.migration:started:forge_app_export, each time an App Data Movement is initiated that requires your Exporter to run.
You will need to register to receive the event in your Forge Manifest using the trigger module.
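For reference, the trigger registration in your manifest might look like the following sketch. The module keys and handler name are placeholders; the event name is the one above.

```yaml
modules:
  trigger:
    - key: app-export-trigger        # placeholder key
      function: export-handler
      events:
        - avi:ecosystem.migration:started:forge_app_export
  function:
    - key: export-handler
      handler: index.run             # placeholder handler reference
```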
When you receive the event, it will contain a movementDetails object with an appDataMovementId.
You will use this ID to identify the App Data Movement when calling the Exporter API endpoints described below.
The movementDetails object also contains sourceLocation and destinationLocation fields with details about where the data is moving from and to.
The sourceLocation and destinationLocation are objects with a type field. A type value of cloud means the location is a cloud site, and the object includes a
cloudUrl identifying that site. You should only use the destinationLocation for logging purposes (and be aware
there may be future type values) to ensure your Exporter is compatible with future Importers.
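A minimal trigger-handler sketch is shown below. The interfaces reflect the fields described in this section; treat the exact nesting as an assumption and verify it against the payload you receive.

```typescript
// Location objects carry a type field; cloudUrl is present when type === "cloud".
interface MovementLocation {
  type: string;      // e.g. "cloud"; other values may be added in the future
  cloudUrl?: string;
}

interface MovementDetails {
  appDataMovementId: string;
  sourceLocation: MovementLocation;
  destinationLocation: MovementLocation;
}

interface ExportTriggerEvent {
  movementDetails: MovementDetails;
}

// Extract the App Data Movement ID and log the destination. The destination is
// used for logging only, so the exporter stays compatible with future location
// types it does not recognise.
function handleExportTrigger(event: ExportTriggerEvent): string {
  const { appDataMovementId, destinationLocation } = event.movementDetails;
  console.log(
    `Export requested for movement ${appDataMovementId}, ` +
      `destination type: ${destinationLocation.type}`
  );
  return appDataMovementId;
}
```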
For each App Data Movement, your Exporter should follow this workflow:
a. Initialize the Event File upload to obtain a fileId and uploadId.
b. Upload the Event File content in one or more chunks using pre-signed upload URLs.
c. Finalize the Event File upload to commit it.
d. If the Event File references Binary Files, upload each Binary File using the same initialize / upload / finalize pattern.

All API calls use your Forge app's authentication. The base URL for all Exporter endpoints is:

```
https://api.atlassian.com/app/migration/forge/v1/buildingBlocks
```
Make a POST request to /eventFile/initialize with the following JSON body:
| Field | Type | Required | Description |
|---|---|---|---|
| transferId | string | Yes | The transfer ID from the trigger event. Identifies the App Data Movement. |
| fullName | string | Yes | A descriptive name for the Event File (e.g. following the DC format naming conventions). Used by the Importer to identify the file. |
| incremental | boolean | Yes | false if this file is a full snapshot; true if it is a diff that may include updates and deletions. |
| createdAt | string | Yes | ISO 8601 timestamp for when the file was created (e.g. "2026-04-17T10:30:00.000Z"). |
| exporterARI | string | Yes | An ARI identifying your Exporter. Atlassian will provide this value when registering your Exporter. |
| partExporterARI | string | No | An ARI identifying the specific sub-exporter (File Group) that produced this file. Atlassian will provide this value if applicable. |
| properties | object | Yes | A JSON object of string key-value pairs containing any metadata about the file that will help the Importer decide whether to process it without downloading it. Pass an empty object {} if no metadata is needed. |
Example request body:
```json
{
  "transferId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "fullName": "AO_E5321B_STORY_POINTS_full_0.ndjson",
  "incremental": false,
  "createdAt": "2026-04-17T10:30:00.000Z",
  "exporterARI": "ari:cloud:ecosystem::app/my-exporter-ari",
  "partExporterARI": null,
  "properties": {}
}
```
The response will contain a fileId and an uploadId that you will use in subsequent calls:
```json
{
  "fileId": "evt-file-uuid",
  "uploadId": "multipart-upload-id"
}
```
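Putting the initialize call together, a minimal sketch follows. The HTTP client is injected because how your runtime attaches the app's credentials is outside this snippet's scope (fetchFn stands in for whatever authenticated client you use), and error handling is deliberately elementary.

```typescript
const BASE_URL =
  "https://api.atlassian.com/app/migration/forge/v1/buildingBlocks";

// Narrow fetch-like type so the sketch does not depend on a specific runtime.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

interface InitializeEventFileBody {
  transferId: string;
  fullName: string;
  incremental: boolean;
  createdAt: string;
  exporterARI: string;
  partExporterARI: string | null;
  properties: Record<string, string>;
}

interface InitializeEventFileResponse {
  fileId: string;
  uploadId: string;
}

// POST the initialize body and return the fileId / uploadId pair.
async function initializeEventFile(
  body: InitializeEventFileBody,
  fetchFn: FetchLike
): Promise<InitializeEventFileResponse> {
  const res = await fetchFn(`${BASE_URL}/eventFile/initialize`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`initialize failed: HTTP ${res.status}`);
  }
  return (await res.json()) as InitializeEventFileResponse;
}
```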
The platform uses multi-part uploads. For each chunk of the Event File content, make a POST request to /eventFile/upload/url:
| Field | Type | Description |
|---|---|---|
| fileId | string | The fileId from the initialize response. |
| uploadId | string | The uploadId from the initialize response. |
| chunkIndex | string | The zero-based index of this chunk (e.g. "0" for the first chunk). |
| contentLength | string | The size of this chunk in bytes, as a string. |
| contentSHA256 | string | The SHA-256 hex digest of this chunk's content. |
Example request body:
```json
{
  "fileId": "evt-file-uuid",
  "uploadId": "multipart-upload-id",
  "chunkIndex": "0",
  "contentLength": "102400",
  "contentSHA256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
```
The response will contain a pre-signed URL:
```json
{
  "url": "https://example.org/some-path?upload-parameters"
}
```
Use an HTTP PUT request to upload the chunk content directly to this URL. Store the ETag response header value and the SHA-256 checksum
for each chunk — you will need them to finalize the upload.
For small files that fit in a single chunk, you will call this endpoint once with chunkIndex set to "0". For larger files, call it
once per chunk in order, incrementing chunkIndex each time.
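The chunking step can be sketched as below. It splits the file content into fixed-size chunks and computes, for each chunk, the string-valued fields that /eventFile/upload/url expects. The chunk size is illustrative; pick one that fits your platform's limits.

```typescript
import { createHash } from "crypto";

// Per-chunk metadata for the upload/url request (all values as strings),
// plus the raw bytes to PUT to the pre-signed URL.
interface ChunkMetadata {
  chunkIndex: string;
  contentLength: string;
  contentSHA256: string;
  content: Buffer;
}

// Split content into chunkSize-byte chunks with zero-based indices.
function chunkContent(content: Buffer, chunkSize: number): ChunkMetadata[] {
  const chunks: ChunkMetadata[] = [];
  for (let offset = 0, i = 0; offset < content.length; offset += chunkSize, i++) {
    const slice = content.subarray(offset, offset + chunkSize);
    chunks.push({
      chunkIndex: String(i),
      contentLength: String(slice.length),
      contentSHA256: createHash("sha256").update(slice).digest("hex"),
      content: slice,
    });
  }
  return chunks;
}
```

After requesting a pre-signed URL for each chunk's metadata, PUT the chunk's content to that URL and record the returned ETag alongside contentSHA256 for the finalize call.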
Once all chunks have been uploaded, make a POST request to /eventFile/finalize:
| Field | Type | Description |
|---|---|---|
| fileId | string | The fileId from the initialize response. |
| uploadId | string | The uploadId from the initialize response. |
| eTags | array of strings | The ETag values returned when uploading each chunk, in order. |
| sha256CheckSumValues | array of strings | The SHA-256 hex digest of each chunk, in order. |
Example request body:
```json
{
  "fileId": "evt-file-uuid",
  "uploadId": "multipart-upload-id",
  "eTags": ["\"abc123\""],
  "sha256CheckSumValues": ["e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"]
}
```
A successful response returns HTTP 200 with an empty body.
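Because both arrays must list chunks in order, it can help to build the finalize body from per-chunk upload results, sorting by chunk index first. A small sketch, with a hypothetical UploadedChunk record:

```typescript
// What you recorded for each chunk: the index sent to upload/url, the ETag
// header from the PUT response, and the SHA-256 digest of the chunk.
interface UploadedChunk {
  chunkIndex: string;
  eTag: string;
  sha256: string;
}

// Assemble the /eventFile/finalize body with eTags and checksums aligned
// in ascending chunk order.
function buildFinalizeBody(
  fileId: string,
  uploadId: string,
  chunks: UploadedChunk[]
) {
  const ordered = [...chunks].sort(
    (a, b) => Number(a.chunkIndex) - Number(b.chunkIndex)
  );
  return {
    fileId,
    uploadId,
    eTags: ordered.map((c) => c.eTag),
    sha256CheckSumValues: ordered.map((c) => c.sha256),
  };
}
```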
If your Event File references Binary Files (for example, to represent attachments or on-disk files), you must upload each Binary File
separately. Binary Files are linked to their parent Event File by the parentEventFileId.
Binary Files follow the same initialize / upload / finalize pattern as Event Files, using these endpoints:
POST /binaryFile/initialize — body: {"parentEventFileId": "evt-file-uuid"}
POST /binaryFile/upload/url — same fields as /eventFile/upload/url
POST /binaryFile/finalize — same fields as /eventFile/finalize

The initialize response returns a fileId for the Binary File. Your Event File data rows should reference this fileId so that Importers
can download the binary content.
You must finalize all Binary Files for an Event File before you signal that the File Group is complete (see below).
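To illustrate the linkage, a hypothetical Event File data row that points at an uploaded Binary File might look like the following. The field names here are illustrative only (consult the DC Export Format for the actual row schema); the binaryFileId value is the fileId returned by /binaryFile/initialize.

```json
{"entityType": "ATTACHMENT", "name": "diagram.png", "binaryFileId": "bin-file-uuid"}
```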
Once all Event Files (and their Binary Files) for a given File Group Label have been uploaded and finalized, notify the platform by making
a POST request to /subExporter/ready:
| Field | Type | Description |
|---|---|---|
| appDataMovementId | string | The transferId from the trigger event. |
| partExporterType | string | The File Group Label that is now complete. |
Example request body:
```json
{
  "appDataMovementId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "partExporterType": "DATABASE"
}
```
A successful response returns HTTP 202 with an empty body.
You must call this endpoint once for each File Group Label that your Exporter produces (as registered with Atlassian). The platform will not notify Importers of a File Group until it receives this signal. Importers can begin processing one File Group while you continue uploading Event Files for another.