Last updated Apr 10, 2026

Experiment: App Migration Building Blocks - DC Format

Building Blocks Exporters and Importers need to expect the same type of app data in order to be compatible; we call this shared language the Export Contract.

Atlassian provides a generic exporter for Data Center (DC) apps. Just because this exporter is in use doesn't mean that an importer designed for the generic exporter will be compatible - the Export Contract is also influenced by the type of app data present in the DC instance being exported.

Here, we describe how the generic DC exporter translates the data it finds in the DC instance into the Event Files it exports. This is useful in two ways: first, you can build your importer to support exports from DC; second, we recommend following the same format when you build your own exporter (where possible), so that it is compatible with your importer. This will also make it easier to leverage future configurable Atlassian functionality, such as Atlassian-supported import / export from Forge Storage.

SQL Export

Most SQL tables in the DC, including your app-specific AO tables, are included in the export. Importers should be robust to unexpected additional tables, and should ignore them without failing.

One or more Event Files are exported for each SQL table, but each Event File only contains data for a single SQL table. SQL data is exported by first taking a snapshot and then replaying change events since the snapshot. To ensure correct processing, observe the following ordering of data:

  • Event Files listed later in the File Listing (for the same SQL table) are chronologically after Event Files listed earlier, and may override data in earlier Event Files. For example, if the first Event File in the File Listing has a row that creates an SQL row, and the tenth Event File has a row that updates that same SQL row, the updated values from the tenth file apply.
  • Event File rows listed later in the same Event File are chronologically after earlier rows in the same file, and may override earlier data. For example, if the hundredth row of an Event File created a new SQL row, but the two hundredth row deleted it, then the row should end up being deleted (unless created again in a later row).
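The ordering rules above can be sketched in Python (a hypothetical illustration, not Atlassian-provided code; `event_files` stands for the already-parsed data rows of each Event File, in File Listing order):

```python
# Sketch of the ordering rules: later Event Files and later rows
# within a file override earlier ones.
def replay(event_files, primary_keys):
    """event_files: parsed data rows of each Event File for one table,
    in File Listing order. Returns final state keyed by primary key."""
    state = {}
    for rows in event_files:                 # files in File Listing order
        for row in rows:                     # rows in file order
            key = tuple(row[k] for k in primary_keys)
            if row.get("deleted"):
                state.pop(key, None)         # a later delete removes the row
            else:
                state[key] = row             # a later upsert overrides earlier data
    return state
```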

SQL Event File naming

SQL Event Files are named using the pattern:

{tableName}_{phase}_{chunk}.ndjson

Where:

  • tableName is the SQL table name (e.g. AO_E5321B_STORY_POINTS).
  • phase is either full (snapshot) or incremental (change events captured after the snapshot).
  • chunk is a zero-based integer index. A single table's data may be split across multiple Event Files (chunks) if the data is large.

For example: AO_E5321B_STORY_POINTS_full_0.ndjson, AO_E5321B_STORY_POINTS_incremental_0.ndjson.
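For illustration, this naming pattern could be parsed as follows (a sketch; it anchors the phase and chunk at the end of the name, since table names may themselves contain underscores):

```python
import re

# Sketch: parse {tableName}_{phase}_{chunk}.ndjson file names.
# Table names can contain underscores, so phase and chunk are
# anchored at the end of the name.
SQL_EVENT_FILE = re.compile(
    r"^(?P<table>.+)_(?P<phase>full|incremental)_(?P<chunk>\d+)\.ndjson$"
)

def parse_sql_event_filename(name):
    match = SQL_EVENT_FILE.match(name)
    if match is None:
        return None  # not an SQL Event File name; skip rather than fail
    return match.group("table"), match.group("phase"), int(match.group("chunk"))
```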

You can identify the table an Event File belongs to by inspecting the table field in the header row (described below), or by using the fullName field in the File Listing.

SQL Event File format

SQL Event Files are JSONL files. The first line is a header row describing the schema of the table. Each subsequent line is a data row representing an upsert or deletion of a single SQL row.

Header row

The header row has "TYPE": "METADATA" and contains the following fields:

  • TYPE (string): Always "METADATA" for the header row.
  • table (string): The SQL table name.
  • columns (array): An array of column descriptor objects (see below).
  • primaryKeys (array of strings): The names of the primary key column(s) for this table.
  • rowCount (number): The number of data rows in this Event File.

Each object in the columns array has the following fields:

  • name (string): The column name.
  • type (string): The column's data type. One of "number", "varchar", "bool", or "blob".
  • size (number): The size or precision of the column. -1 indicates no fixed length (e.g. for text columns).
  • nullable (boolean): Whether the column allows null values.

Example header row:

{"TYPE":"METADATA","table":"AO_E5321B_STORY_POINTS","columns":[{"name":"ID","type":"number","size":64,"nullable":false},{"name":"ISSUE_ID","type":"number","size":64,"nullable":true},{"name":"NAME","type":"varchar","size":-1,"nullable":true}],"primaryKeys":["ID"],"rowCount":2}

Data rows

Each data row has "TYPE": "DATA" and contains all column values for the row, plus two additional control fields:

  • TYPE (string): Always "DATA" for data rows.
  • deleted (boolean): true if this row represents a deletion; false for an upsert.
  • txnSequence (number): A monotonically increasing sequence number representing the order of this change. Snapshot rows always have txnSequence set to 0. Change event rows have a positive value derived from the database transaction timestamp.
  • (column values) (varies): The values for each column in the table. For delete rows, only the primary key column(s) are present; non-key columns are omitted.

Example upsert row:

{"TYPE":"DATA","ID":42,"ISSUE_ID":1001,"NAME":"my story","deleted":false,"txnSequence":0}

Example delete row (only primary key columns are included):

{"TYPE":"DATA","ID":42,"deleted":true,"txnSequence":1713354044123456789}

A complete two-row SQL Event File (header + one upsert + one delete) looks like this:

{"TYPE":"METADATA","table":"AO_E5321B_STORY_POINTS","columns":[{"name":"ID","type":"number","size":64,"nullable":false},{"name":"NAME","type":"varchar","size":-1,"nullable":true}],"primaryKeys":["ID"],"rowCount":2}
{"TYPE":"DATA","ID":42,"NAME":"my story","deleted":false,"txnSequence":0}
{"TYPE":"DATA","ID":7,"deleted":true,"txnSequence":1713354044123456789}
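Putting the pieces together, a minimal reader for a single SQL Event File might look like this (a sketch, assuming the file contents are already available as a string):

```python
import json

def read_sql_event_file(contents):
    """Sketch: parse one SQL Event File (header + data rows) and apply
    the rows in order. Returns (header, state), where state maps
    primary-key tuples to the latest column values."""
    lines = contents.splitlines()
    header = json.loads(lines[0])
    assert header["TYPE"] == "METADATA"
    primary_keys = header["primaryKeys"]
    column_names = [c["name"] for c in header["columns"]]
    state = {}
    for line in lines[1:]:
        row = json.loads(line)
        assert row["TYPE"] == "DATA"
        key = tuple(row[k] for k in primary_keys)
        if row["deleted"]:
            state.pop(key, None)   # delete rows carry only the primary key(s)
        else:
            state[key] = {name: row.get(name) for name in column_names}
    return header, state
```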

SQL Event File properties

SQL Event Files do not currently set any custom properties in the File List entry. The properties field will be an empty JSON object:

{}

Home directory export

Most files in the Jira / Confluence Home Directory, including your app-specific data, are included in the export. Importers should be robust to unexpected additional files, and should ignore them without failing.

Filesystem Event File naming

For each export run, a single Event File is produced that lists all the home directory files that were exported. It is named using the pattern:

{serverId}_{phase}_{chunk}.ndjson

Where:

  • serverId is the unique identifier of the source DC server.
  • phase is either full (snapshot) or incremental (changed files since the previous export).
  • chunk is a zero-based integer index. The listing may be split across multiple Event Files if the number of files is large.

Filesystem Event File format

Filesystem Event Files use the same JSONL structure as SQL Event Files: the first line is a header row, and each subsequent line is a data row referencing a single file from the home directory.

The actual contents of each file are uploaded separately as Binary Files (see the Building Blocks overview). Data rows in the Event File reference the associated Binary File by ID so that you can download the file's contents.

Header row

The header row has "TYPE": "METADATA" and describes the schema of the data rows, using the same structure as SQL Event Files (see above). The table field identifies the export source (typically the DC server ID). The columns array describes the fields present in each data row, with each column having name, type, size, and nullable fields. The primaryKeys and rowCount fields follow the same conventions as SQL Event Files.

Data rows

Each data row has "TYPE": "DATA" and contains the following fields:

  • TYPE (string): Always "DATA".
  • deleted (boolean): true if this row represents a file that has been removed; false for an upsert.
  • txnSequence (number): Ordering sequence number. Snapshot rows are 0; incremental rows have a positive value.
  • (path field) (string): The path of the file relative to the Jira/Confluence home directory (e.g. data/attachments/10000/my-file.pdf). The exact field name is defined in the header row's columns array and the primary key.
  • (binary file reference) (string): The ID of the associated Binary File. Use this ID to download the file's contents via the Binary Files API. This field is null for delete rows. The exact field name is defined in the header row's columns array.

The exact column names for the path and binary file reference fields are described in the columns array of the header row. Always use the header row as the authoritative source of the schema rather than assuming fixed field names, as these may evolve over time.
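As a sketch of that advice, the field names can be derived from the header instead of being hard-coded. The rule used here for picking the binary reference column (the remaining non-control column) is an assumption for illustration, and the field names PATH and BINARY_FILE_ID in the usage below are hypothetical:

```python
import json

# Control fields that appear in every data row.
CONTROL_FIELDS = {"TYPE", "deleted", "txnSequence"}

def filesystem_schema(header):
    """Sketch: derive the path and binary-reference field names from the
    header row. The path column is the primary key (per the docs above);
    treating the remaining non-control column as the Binary File
    reference is an illustrative assumption."""
    path_field = header["primaryKeys"][0]
    others = [c["name"] for c in header["columns"]
              if c["name"] not in CONTROL_FIELDS and c["name"] != path_field]
    return path_field, (others[0] if others else None)

def iter_home_files(lines):
    """Yield (path, binary_file_id, deleted) for each data row."""
    header = json.loads(lines[0])
    path_field, binary_field = filesystem_schema(header)
    for line in lines[1:]:
        row = json.loads(line)
        yield row[path_field], row.get(binary_field), row["deleted"]
```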

Filesystem Event File properties

Filesystem Event Files do not currently set any custom properties in the File List entry.

Mappings export

Mappings describe the correspondence between identifiers in the source data and identifiers at the destination. Each mapping has a namespace describing the type of data; the full list is available at https://developer.atlassian.com/platform/app-migration/mappings/#mappings-namespaces-and-entities

Mappings Event File naming

Each Mappings Event File covers a single namespace. Files are named using the pattern:

id_mapping_{namespace}_large_{count}_{timestamp}.json

Where:

  • namespace is the mapping namespace (e.g. jira:issue).
  • count is the number of mapping records in the file.
  • timestamp is the ISO 8601 creation timestamp.

For example: id_mapping_jira:issue_large_1250_2026-04-17T10:30:00.000Z.json.
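This pattern could be parsed as follows (a sketch; it assumes the literal _large_ marker appears exactly once in the name):

```python
import re

# Sketch: parse id_mapping_{namespace}_large_{count}_{timestamp}.json names.
# Namespaces contain ":" and timestamps contain ":" and "-", so both are
# matched loosely around the literal "_large_" marker.
MAPPINGS_EVENT_FILE = re.compile(
    r"^id_mapping_(?P<namespace>.+)_large_(?P<count>\d+)_(?P<timestamp>.+)\.json$"
)

def parse_mappings_filename(name):
    match = MAPPINGS_EVENT_FILE.match(name)
    if match is None:
        return None  # not a Mappings Event File name
    return (match.group("namespace"), int(match.group("count")),
            match.group("timestamp"))
```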

Mappings Event File format

Mappings Event Files are JSONL files. The first line is a schema row describing the structure of the file. Each subsequent line is a data row containing a single source-to-destination ID mapping.

Schema row

The schema row contains the following fields:

  • metadata (boolean): Always true for the schema row.
  • table (string): The mapping namespace (e.g. jira:issue).
  • columns (array): An array of column descriptor objects describing the data row fields (see below).
  • primaryKeys (array of strings): Always ["sourceId", "namespace"].
  • rowCount (number): The number of data rows in this file.
  • exportTime (string): ISO 8601 timestamp indicating when the file was created.

The columns array describes the fixed schema of every Mappings Event File:

  • sourceId (varchar(255), not nullable): The entity identifier in the source DC instance.
  • destinationId (varchar(255), not nullable): The entity identifier at the cloud destination.
  • namespace (varchar(255), not nullable): The mapping namespace (e.g. jira:issue).
  • migrationScopeId (varchar(255), nullable): An identifier scoping this mapping to a specific migration.
  • createdAtTimestamp (timestamp, not nullable): Unix epoch milliseconds when the mapping was created.
  • DELETED (bool, not nullable): Always false for mappings (mappings are never deleted via this file).
  • TXN_SEQUENCE (bigint, not nullable): Monotonically increasing sequence number for ordering.

Example schema row:

{"metadata":true,"table":"jira:issue","columns":[{"name":"sourceId","type":"varchar","size":255,"nullable":false},{"name":"destinationId","type":"varchar","size":255,"nullable":false},{"name":"namespace","type":"varchar","size":255,"nullable":false},{"name":"migrationScopeId","type":"varchar","size":255,"nullable":true},{"name":"createdAtTimestamp","type":"timestamp","size":null,"nullable":false},{"name":"DELETED","type":"bool","size":1,"nullable":false},{"name":"TXN_SEQUENCE","type":"bigint","size":19,"nullable":false}],"primaryKeys":["sourceId","namespace"],"rowCount":3,"exportTime":"2026-04-17T10:30:00.000Z"}

Data rows

Each data row contains the following fields:

  • sourceId (string): The entity identifier in the source DC instance.
  • destinationId (string): The entity identifier at the cloud destination.
  • namespace (string): The mapping namespace (e.g. jira:issue).
  • migrationScopeId (string): An identifier scoping this mapping to a specific migration.
  • createdAtTimestamp (number): Unix epoch milliseconds when the mapping was created.
  • DELETED (boolean): Always false for mappings.
  • TXN_SEQUENCE (number): Monotonically increasing sequence number, used to order records across files.

Example data row:

{"sourceId":"10001","destinationId":"ari:cloud:jira::issue/abc123","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600000,"DELETED":false,"TXN_SEQUENCE":0}

A complete Mappings Event File with two records looks like this:

{"metadata":true,"table":"jira:issue","columns":[{"name":"sourceId","type":"varchar","size":255,"nullable":false},{"name":"destinationId","type":"varchar","size":255,"nullable":false},{"name":"namespace","type":"varchar","size":255,"nullable":false},{"name":"migrationScopeId","type":"varchar","size":255,"nullable":true},{"name":"createdAtTimestamp","type":"timestamp","size":null,"nullable":false},{"name":"DELETED","type":"bool","size":1,"nullable":false},{"name":"TXN_SEQUENCE","type":"bigint","size":19,"nullable":false}],"primaryKeys":["sourceId","namespace"],"rowCount":2,"exportTime":"2026-04-17T10:30:00.000Z"}
{"sourceId":"10001","destinationId":"ari:cloud:jira::issue/abc123","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600000,"DELETED":false,"TXN_SEQUENCE":0}
{"sourceId":"10002","destinationId":"ari:cloud:jira::issue/def456","namespace":"jira:issue","migrationScopeId":"migration-scope-uuid","createdAtTimestamp":1713348600001,"DELETED":false,"TXN_SEQUENCE":1}
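A minimal sketch that loads a Mappings Event File into a source-to-destination lookup:

```python
import json

def read_mappings_file(contents):
    """Sketch: parse one Mappings Event File. Returns the namespace and a
    {sourceId: destinationId} dict for that namespace."""
    lines = contents.splitlines()
    schema = json.loads(lines[0])
    assert schema["metadata"] is True
    mapping = {}
    for line in lines[1:]:
        row = json.loads(line)
        mapping[row["sourceId"]] = row["destinationId"]
    return schema["table"], mapping
```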

Mappings Event File properties

Each Mappings Event File sets the following properties in its File List entry:

  • entityType (string): The mapping namespace this file covers (e.g. jira:issue). This is the same as the table field in the schema row. You can use this to determine which namespace a file covers without downloading it.
  • migrationId (string): A UUID identifying the specific migration that produced this file.
  • migrationScopeId (string): A UUID identifying the migration scope (the combination of source and destination) for this file.

Example properties object for a Mappings Event File:

{"entityType":"jira:issue","migrationId":"9dfc31b8-51e9-42f5-af21-e8ba6d6bc934","migrationScopeId":"a1b2c3d4-e5f6-7890-abcd-ef1234567890"}

This allows your Importer to filter Mappings Event Files by namespace without downloading each file: check the entityType property in the File List to determine which files are relevant to your app.
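For example, an Importer interested in only one namespace could filter File List entries like this (a sketch; the shape of the file_list entries, dicts with a properties field, is assumed for illustration):

```python
def relevant_mapping_files(file_list, namespace):
    """Sketch: select Mappings Event Files for a single namespace using
    the entityType property, without downloading the files themselves.
    Assumes each entry is a dict with a "properties" field."""
    return [entry for entry in file_list
            if entry.get("properties", {}).get("entityType") == namespace]
```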
