# Building Blocks

Exporters and importers must agree on the shape of the app data they exchange in order to be compatible; we call this shared language the Export Contract.
Atlassian provides a generic exporter for Data Center (DC) apps. However, using this exporter does not by itself guarantee that an importer designed for it will be compatible: the Export Contract is also influenced by the type of app data present in the DC instance being exported.
This page describes how the generic DC exporter translates the data it finds in a DC instance into the Event Files it exports. This is useful in two ways: first, you can build your importer to support exports from DC; second, we recommend following the same format when you build your own exporter (if possible), so that it remains compatible with your importer. This will also make it easier to leverage future configurable Atlassian functionality, such as Atlassian-supported import / export from Forge Storage.
Most SQL tables in the DC instance, including your app-specific AO tables, are included in the export. Importers should handle unexpected additional tables gracefully, ignoring them rather than failing.
One or more Event Files are exported for each SQL table, but each Event File only contains data for a single SQL table. SQL data is exported by first taking a snapshot, and then replaying change events captured since the snapshot. To ensure correct processing, observe the ordering of the data: process `full` Event Files before `incremental` ones, and apply data rows in `txnSequence` order (described below).
SQL Event Files are named using the pattern:
```
{tableName}_{phase}_{chunk}.ndjson
```
Where:
- `tableName` is the SQL table name (e.g. `AO_E5321B_STORY_POINTS`).
- `phase` is either `full` (snapshot) or `incremental` (change events captured after the snapshot).
- `chunk` is a zero-based integer index. A single table's data may be split across multiple Event Files (chunks) if the data is large.

For example: `AO_E5321B_STORY_POINTS_full_0.ndjson`, `AO_E5321B_STORY_POINTS_incremental_0.ndjson`.
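As an illustration, the naming pattern above can be parsed with a small helper. The function name and regular expression here are our own, not part of the contract:

```python
import re

# Pattern from the docs: {tableName}_{phase}_{chunk}.ndjson
# The greedy table group tolerates underscores in the table name itself.
SQL_EVENT_FILE = re.compile(r"^(?P<table>.+)_(?P<phase>full|incremental)_(?P<chunk>\d+)\.ndjson$")

def parse_sql_event_file_name(name: str) -> dict:
    """Split a SQL Event File name into its table, phase, and chunk parts."""
    match = SQL_EVENT_FILE.match(name)
    if match is None:
        raise ValueError(f"not a SQL Event File name: {name}")
    return {
        "table": match.group("table"),
        "phase": match.group("phase"),
        "chunk": int(match.group("chunk")),
    }

print(parse_sql_event_file_name("AO_E5321B_STORY_POINTS_full_0.ndjson"))
# {'table': 'AO_E5321B_STORY_POINTS', 'phase': 'full', 'chunk': 0}
```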
You can identify the table an Event File belongs to by inspecting the `table` field in the header row (described below), or by using the `fullName` field in the File Listing.
SQL Event Files are JSONL (newline-delimited JSON) files. The first line is a header row describing the schema of the table. Each subsequent line is a data row representing an upsert or deletion of a single SQL row.
The header row has "TYPE": "METADATA" and contains the following fields:
| Field | Type | Description |
|---|---|---|
| TYPE | string | Always "METADATA" for the header row. |
| table | string | The SQL table name. |
| columns | array | An array of column descriptor objects (see below). |
| primaryKeys | array of strings | The names of the primary key column(s) for this table. |
| rowCount | number | The number of data rows in this Event File. |
Each object in the columns array has the following fields:
| Field | Type | Description |
|---|---|---|
| name | string | The column name. |
| type | string | The column's data type. One of "number", "varchar", "bool", or "blob". |
| size | number | The size or precision of the column. -1 indicates no fixed length (e.g. for text columns). |
| nullable | boolean | Whether the column allows null values. |
Example header row:
```json
{"TYPE":"METADATA","table":"AO_E5321B_STORY_POINTS","columns":[{"name":"ID","type":"number","size":64,"nullable":false},{"name":"ISSUE_ID","type":"number","size":64,"nullable":true},{"name":"NAME","type":"varchar","size":-1,"nullable":true}],"primaryKeys":["ID"],"rowCount":2}
```
Each data row has "TYPE": "DATA" and contains all column values for the row, plus two additional control fields:
| Field | Type | Description |
|---|---|---|
| TYPE | string | Always "DATA" for data rows. |
| deleted | boolean | true if this row represents a deletion; false for an upsert. |
| txnSequence | number | A monotonically increasing sequence number representing the order of this change. Snapshot rows always have txnSequence set to 0. Change event rows have a positive value derived from the database transaction timestamp. |
| (column values) | varies | The values for each column in the table. For delete rows, only the primary key column(s) are present; non-key columns are omitted. |
Example upsert row:
```json
{"TYPE":"DATA","ID":42,"ISSUE_ID":1001,"NAME":"my story","deleted":false,"txnSequence":0}
```
Example delete row (only primary key columns are included):
```json
{"TYPE":"DATA","ID":42,"deleted":true,"txnSequence":1713354044123456789}
```
A complete two-row SQL Event File (header + one upsert + one delete) looks like this:
```json
{"TYPE":"METADATA","table":"AO_E5321B_STORY_POINTS","columns":[{"name":"ID","type":"number","size":64,"nullable":false},{"name":"NAME","type":"varchar","size":-1,"nullable":true}],"primaryKeys":["ID"],"rowCount":2}
{"TYPE":"DATA","ID":42,"NAME":"my story","deleted":false,"txnSequence":0}
{"TYPE":"DATA","ID":7,"deleted":true,"txnSequence":1713354044123456789}
```
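The replay rules described above (data rows applied in txnSequence order, delete rows carrying only the primary key columns) can be sketched as follows. The helper name and return shape are our own, shown for illustration:

```python
import json

def replay_sql_event_file(lines):
    """Apply the data rows of one SQL Event File in txnSequence order and
    return the surviving rows, keyed by primary key tuple."""
    header = json.loads(lines[0])
    assert header["TYPE"] == "METADATA"
    pk_cols = header["primaryKeys"]
    # Sort by txnSequence so snapshot rows (0) are applied before change events.
    data_rows = sorted((json.loads(line) for line in lines[1:]),
                       key=lambda row: row["txnSequence"])
    state = {}
    for row in data_rows:
        key = tuple(row[col] for col in pk_cols)
        if row["deleted"]:
            state.pop(key, None)  # delete rows carry only the primary key columns
        else:
            state[key] = {k: v for k, v in row.items()
                          if k not in ("TYPE", "deleted", "txnSequence")}
    return state
```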
SQL Event Files do not currently set any custom properties in the File List entry. The properties field will be an empty JSON object:
```json
{}
```
Most files in the Jira / Confluence Home Directory, including your app-specific data, are included in the export. Importers should handle unexpected additional files gracefully, ignoring them rather than failing.
For each export run, a single Event File is produced that lists all the home directory files that were exported. It is named using the pattern:
```
{serverId}_{phase}_{chunk}.ndjson
```
Where:
- `serverId` is the unique identifier of the source DC server.
- `phase` is either `full` (snapshot) or `incremental` (changed files since the previous export).
- `chunk` is a zero-based integer index. The listing may be split across multiple Event Files if the number of files is large.

Filesystem Event Files use the same JSONL structure as SQL Event Files: the first line is a header row, and each subsequent line is a data row referencing a single file from the home directory.
The actual contents of each file are uploaded separately as Binary Files (see the Building Blocks overview). Data rows in the Event File reference the associated Binary File by ID so that you can download the file's contents.
The header row has "TYPE": "METADATA" and describes the schema of the data rows, using the same structure as SQL Event Files (see above).
The table field identifies the export source (typically the DC server ID). The columns array describes the fields present in each data
row, with each column having name, type, size, and nullable fields. The primaryKeys and rowCount fields follow the same
conventions as SQL Event Files.
Each data row has "TYPE": "DATA" and contains the following fields:
| Field | Type | Description |
|---|---|---|
| TYPE | string | Always "DATA". |
| deleted | boolean | true if this row represents a file that has been removed; false for an upsert. |
| txnSequence | number | Ordering sequence number. Snapshot rows are 0; incremental rows have a positive value. |
| (path field) | string | The path of the file relative to the Jira/Confluence home directory (e.g. data/attachments/10000/my-file.pdf). The exact field name is defined in the header row's columns array, and this field is the primary key. |
| (binary file reference) | string | The ID of the associated Binary File. Use this ID to download the file's contents via the Binary Files API. This field is null for delete rows. The exact field name is defined in the header row's columns array. |
The exact column names for the path and binary file reference fields are described in the columns array of the header row. Always use the
header row as the authoritative source of the schema rather than assuming fixed field names, as these may evolve over time.
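Following that advice, a sketch of reading the field names from the header row rather than hardcoding them might look like this. Taking the primary key as the path field matches the description above; treating the remaining non-control column as the Binary File reference is our own assumption for illustration:

```python
import json

def filesystem_rows(lines):
    """Yield (path, binary_file_id, deleted) triples from a Filesystem Event File.

    Field names are discovered from the header row: the path field is the
    primary key, and (as an assumption for this sketch) the Binary File
    reference is whichever other column is not a control field.
    """
    header = json.loads(lines[0])
    path_col = header["primaryKeys"][0]
    ref_col = next(c["name"] for c in header["columns"]
                   if c["name"] not in (path_col, "TYPE", "deleted", "txnSequence"))
    for line in lines[1:]:
        row = json.loads(line)
        yield row[path_col], row.get(ref_col), row["deleted"]
```

The column names used when calling this helper come from the file itself, so the code keeps working if the exact field names evolve.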
Filesystem Event Files do not currently set any custom properties in the File List entry.
Mappings describe the correspondence between identifiers in the source data and identifiers at the destination. Each mapping has a namespace describing the type of data; you can find the list of namespaces at https://developer.atlassian.com/platform/app-migration/mappings/#mappings-namespaces-and-entities
Each Mappings Event File covers a single namespace. Files are named using the pattern:
```
id_mapping_{namespace}_large_{count}_{timestamp}.json
```
Where:
- `namespace` is the mapping namespace (e.g. `jira:issue`).
- `count` is the number of mapping records in the file.
- `timestamp` is the ISO 8601 creation timestamp.

For example: `id_mapping_jira:issue_large_1250_2026-04-17T10:30:00.000Z.json`.
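A minimal parser for this naming pattern (the helper and regular expression are our own, shown for illustration):

```python
import re

# Pattern from the docs: id_mapping_{namespace}_large_{count}_{timestamp}.json
# The timestamp group excludes underscores, which an ISO 8601 value never contains.
MAPPING_FILE = re.compile(r"^id_mapping_(?P<ns>.+)_large_(?P<count>\d+)_(?P<ts>[^_]+)\.json$")

def parse_mapping_file_name(name: str):
    """Split a Mappings Event File name into (namespace, count, timestamp)."""
    match = MAPPING_FILE.match(name)
    if match is None:
        raise ValueError(f"not a Mappings Event File name: {name}")
    return match.group("ns"), int(match.group("count")), match.group("ts")
```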
Mappings Event Files are JSONL files. The first line is a schema row describing the structure of the file. Each subsequent line is a data row containing a single source-to-destination ID mapping.
The schema row contains the following fields:
| Field | Type | Description |
|---|---|---|
| metadata | boolean | Always true for the schema row. |
| table | string | The mapping namespace (e.g. jira:issue). |
| columns | array | An array of column descriptor objects describing the data row fields (see below). |
| primaryKeys | array of strings | Always ["sourceId", "namespace"]. |
| rowCount | number | The number of data rows in this file. |
| exportTime | string | ISO 8601 timestamp indicating when the file was created. |
The columns array describes the fixed schema of every Mappings Event File:
| Column name | Type | Nullable | Description |
|---|---|---|---|
| sourceId | varchar(255) | No | The entity identifier in the source DC instance. |
| destinationId | varchar(255) | No | The entity identifier at the cloud destination. |
| namespace | varchar(255) | No | The mapping namespace (e.g. jira:issue). |
| migrationScopeId | varchar(255) | Yes | An identifier scoping this mapping to a specific migration. |
| createdAtTimestamp | timestamp | No | Unix epoch milliseconds when the mapping was created. |
| DELETED | bool | No | Always false for mappings (mappings are never deleted via this file). |
| TXN_SEQUENCE | bigint | No | Monotonically increasing sequence number for ordering. |
Example schema row:
```json
{"metadata":true,"table":"jira:issue","columns":[{"name":"sourceId","type":"varchar","size":255,"nullable":false},{"name":"destinationId","type":"varchar","size":255,"nullable":false},{"name":"namespace","type":"varchar","size":255,"nullable":false},{"name":"migrationScopeId","type":"varchar","size":255,"nullable":true},{"name":"createdAtTimestamp","type":"timestamp","size":null,"nullable":false},{"name":"DELETED","type":"bool","size":1,"nullable":false},{"name":"TXN_SEQUENCE","type":"bigint","size":19,"nullable":false}],"primaryKeys":["sourceId","namespace"],"rowCount":3,"exportTime":"2026-04-17T10:30:00.000Z"}
```
Each data row contains the following fields:
| Field | Type | Description |
|---|---|---|
| sourceId | string | The entity identifier in the source DC instance. |
| destinationId | string | The entity identifier at the cloud destination. |
| namespace | string | The mapping namespace (e.g. jira:issue). |
| migrationScopeId | string | An identifier scoping this mapping to a specific migration. |
| createdAtTimestamp | number | Unix epoch milliseconds when the mapping was created. |
| DELETED | boolean | Always false for mappings. |
| TXN_SEQUENCE | number | Monotonically increasing sequence number, used to order records across files. |
Example data row:
```json
{"sourceId":"10001","destinationId":"ari:cloud:jira::issue/abc123","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600000,"DELETED":false,"TXN_SEQUENCE":0}
```
A complete Mappings Event File with two records looks like this:
```json
{"metadata":true,"table":"jira:issue","columns":[{"name":"sourceId","type":"varchar","size":255,"nullable":false},{"name":"destinationId","type":"varchar","size":255,"nullable":false},{"name":"namespace","type":"varchar","size":255,"nullable":false},{"name":"migrationScopeId","type":"varchar","size":255,"nullable":true},{"name":"createdAtTimestamp","type":"timestamp","size":null,"nullable":false},{"name":"DELETED","type":"bool","size":1,"nullable":false},{"name":"TXN_SEQUENCE","type":"bigint","size":19,"nullable":false}],"primaryKeys":["sourceId","namespace"],"rowCount":2,"exportTime":"2026-04-17T10:30:00.000Z"}
{"sourceId":"10001","destinationId":"ari:cloud:jira::issue/abc123","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600000,"DELETED":false,"TXN_SEQUENCE":0}
{"sourceId":"10002","destinationId":"ari:cloud:jira::issue/def456","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600001,"DELETED":false,"TXN_SEQUENCE":1}
```
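A sketch of loading such a file into a sourceId-to-destinationId lookup (the helper name is our own):

```python
import json

def load_mappings(lines):
    """Build a sourceId -> destinationId dict from one Mappings Event File.

    The first line is the schema row; every subsequent line is one mapping record.
    """
    schema = json.loads(lines[0])
    assert schema.get("metadata") is True  # schema row sanity check
    mapping = {}
    for line in lines[1:]:
        record = json.loads(line)
        mapping[record["sourceId"]] = record["destinationId"]
    return mapping
```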
Each Mappings Event File sets the following properties in its File List entry:
| Property | Type | Description |
|---|---|---|
| entityType | string | The mapping namespace this file covers (e.g. jira:issue). This is the same as the table field in the schema row. You can use this to determine which namespace a file covers without downloading it. |
| migrationId | string | A UUID identifying the specific migration that produced this file. |
| migrationScopeId | string | A UUID identifying the migration scope (the combination of source and destination) for this file. |
Example properties object for a Mappings Event File:
```json
{"entityType":"jira:issue","migrationId":"9dfc31b8-51e9-42f5-af21-e8ba6d6bc934","migrationScopeId":"a1b2c3d4-e5f6-7890-abcd-ef1234567890"}
```
This allows your Importer to filter Mappings Event Files by namespace without downloading each file: check the entityType property in the File List to determine which files are relevant to your app.
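For example, such a filter might look like this. The entry shape shown ("fullName" and "properties" keys) is a simplified assumption based on the File Listing description above, not an exact API response:

```python
def mapping_files_for(file_list, namespace):
    """Return the File List entries whose entityType property matches a namespace.

    Entries without an entityType property (e.g. SQL or Filesystem Event Files,
    whose properties object is empty) are skipped.
    """
    return [
        entry for entry in file_list
        if entry.get("properties", {}).get("entityType") == namespace
    ]
```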