This page walks you through the Imports workflow: a custom workflow (built on a HATEOAS framework) designed to help you quickly and easily import data into Assets.
The workflow includes the following steps:

- Create an import source in Assets and generate a container token for it
- Validate the token from the external app and retrieve the workflow links
- Submit the object schema and mapping configuration
- Start an import execution
- Report progress, submit data chunks, and signal data completion
- Check the status of the import execution (and cancel it if needed)
We recommend building integrations with JSM Assets using Atlassian Forge. For more information on how to integrate with Assets using Forge, see: Importing Third Party data into Assets using Forge.
Connect apps should integrate with JSM Assets using the Connect JWT token provided by the Connect framework and platform, instead of a container token. The integration process is the same as the process detailed below, except that steps 1 and 2 are replaced by obtaining a Connect JWT token. Refer to Authorization for more details on the different authorization models supported by JSM Assets.
In this step, you will create an import source in Assets and generate a container token for it. You will then provide the token to the external app. Learn more about how to create a container token.
In this step, the external app will verify that the container token you have supplied is valid and was generated for an Assets import source. The external application can do this by performing a GET request to the imports info endpoint in Assets:
```
GET https://api.atlassian.com/jsm/assets/v1/imports/info
Accept: application/json
Authorization: Bearer EXAMPLE_TOKEN
```
Learn more about security when using the Imports REST API.
If the token is valid and can be used, Assets will respond with a set of links that can be used to continue with the workflow:
1 2{ "links": { "getStatus": "https://api.atlassian.com/jsm/assets/workspace/...", "start": "https://api.atlassian.com/jsm/assets/workspace/...", "mapping": "https://api.atlassian.com/jsm/assets/workspace/..." } }
These three links give you the full set of URLs you need in order to:

- getStatus (check the current status of the import source)
- start (start a new import execution)
- mapping (submit or update the object schema and mapping configuration)

Note that the URLs returned by this endpoint are to be used exclusively with the token that was used to obtain them, and they shouldn't be stored or manipulated.
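A minimal sketch of this validation step, assuming Python with the requests library (the environment variable name is illustrative):

```python
import os
import requests

# Hypothetical: the container token supplied by the Assets admin in step 1.
token = os.environ["ASSETS_TOKEN"]

headers = {
    "Accept": "application/json",
    "Authorization": f"Bearer {token}",
}

# Validate the token against the imports info endpoint.
response = requests.get(
    "https://api.atlassian.com/jsm/assets/v1/imports/info",
    headers=headers,
)
response.raise_for_status()

# Keep the returned links for the rest of the workflow; don't store or manipulate them.
links = response.json()["links"]
print(links["getStatus"], links["start"], links["mapping"])
```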
External apps can perform a GET request to the getStatus endpoint (with the URL obtained in step 2). This endpoint returns a JSON object that represents the current status of the import source:
1 2{ "status": "IDLE" }
The import source could be in any of these states:

- IDLE (the import source is ready, and an import can be started)
- RUNNING (an import is currently running)
- DISABLED (the customer has disabled this import source, so data can't be imported)
- MISSING_MAPPING (no object schema and mapping configuration has been submitted yet, so data can't be imported)

Third party tools must submit their schema and mapping configuration prior to running imports. External import sources will fail to start, and will report the MISSING_MAPPING status until this happens.
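A minimal status check sketch, assuming Python with the requests library and the links captured in step 2 (variable names are illustrative):

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
get_status_url = "https://api.atlassian.com/jsm/assets/workspace/..."  # "getStatus" link from step 2

headers = {"Accept": "application/json", "Authorization": f"Bearer {token}"}

# Check whether the import source is ready to accept an import.
status = requests.get(get_status_url, headers=headers).json()["status"]
if status == "MISSING_MAPPING":
    print("Submit the object schema and mapping before starting an import.")
elif status == "IDLE":
    print("The import source is ready; an import can be started.")
```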
The URL to submit the mapping was obtained in step 2, as the mapping link, and can be invoked with a PUT:
```
PUT https://api.atlassian.com/jsm/assets/workspace/...
Accept: application/json
Authorization: Bearer EXAMPLE_TOKEN
Content-Type: application/json

{ ... schema and mapping ... }
```
The schema and mapping can be modified later on by invoking the same URL with a PATCH call.
See the object schema and mapping guide for more information on how to write an object schema and mapping configuration for external imports.
If you only want to update data without updating the structure, you can use the mapping without identifying the schema, as long as the objectType is properly identified in your object schema configuration file.
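A minimal sketch of submitting the configuration, assuming Python with the requests library; the payload is only a placeholder (see the object schema and mapping guide for the real structure):

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
mapping_url = "https://api.atlassian.com/jsm/assets/workspace/..."  # "mapping" link from step 2

headers = {
    "Accept": "application/json",
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Placeholder: replace with a real object schema and mapping configuration.
payload = {}

# The first submission uses PUT; later modifications can use PATCH on the same URL.
requests.put(mapping_url, headers=headers, json=payload).raise_for_status()
```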
When ready to start an import, the third-party app should perform a POST request to the URL obtained in step 2 as start:
```
POST https://api.atlassian.com/jsm/assets/workspace/...
Accept: application/json
Authorization: Bearer EXAMPLE_TOKEN
```
If the import source was in the IDLE status, Assets will create a new import execution. The response will be:
1 2{ "result": "success", "links": { "submitProgress": "https://api.atlassian.com/...", "submitResults": "https://api.atlassian.com/...", "getExecutionStatus": "https://api.atlassian.com/...", "cancel": "https://api.atlassian.com/..." } }
This set of links can be used to:

- submitProgress (report progress while the data is being generated)
- submitResults (submit data chunks and signal data completion)
- getExecutionStatus (check the status of the import execution)
- cancel (cancel the import execution)

Please note that an import execution expires after 24 hours. All data sets must be submitted within that timeframe.
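A minimal sketch of starting an execution and capturing the returned links, assuming Python with the requests library (variable names are illustrative):

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
start_url = "https://api.atlassian.com/jsm/assets/workspace/..."  # "start" link from step 2

headers = {"Accept": "application/json", "Authorization": f"Bearer {token}"}

# Start a new import execution; this only succeeds while the import source is IDLE.
response = requests.post(start_url, headers=headers)
response.raise_for_status()

# Keep the execution links for this run; the execution expires after 24 hours.
execution_links = response.json()["links"]
submit_progress_url = execution_links["submitProgress"]
submit_results_url = execution_links["submitResults"]
get_execution_status_url = execution_links["getExecutionStatus"]
cancel_url = execution_links["cancel"]
```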
As the external tool is now generating data from its internal or external sources, it can report back to Assets with progress information. This provides a better user experience, especially if the customer doesn't have access to the external app's UI but can access Assets.
This operation can be invoked repeatedly as long as the import hasn't transitioned to PROCESSING, and each call overwrites the previous ones. To report progress, invoke the submitProgress URL obtained in step 4 with a PUT:
```
PUT https://api.atlassian.com/...
Accept: application/json
Authorization: Bearer EXAMPLE_TOKEN
Content-Type: application/json

{
  "steps": {
    "total": 3,
    "current": 2,
    "description": "Gathering data"
  },
  "objects": {
    "total": 500,
    "processed": 125
  }
}
```
At least one of steps and objects must be present.
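A minimal progress-report sketch, assuming Python with the requests library and the submitProgress URL captured from the start response (the step and object counts are illustrative):

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
submit_progress_url = "https://api.atlassian.com/..."  # "submitProgress" link from the start response

headers = {
    "Accept": "application/json",
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Each call overwrites the previously reported progress.
progress = {
    "steps": {"total": 3, "current": 2, "description": "Gathering data"},
    "objects": {"total": 500, "processed": 125},
}
requests.put(submit_progress_url, headers=headers, json=progress).raise_for_status()
```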
The data to import is submitted by the external app to Assets in a series of data sets, or chunks. The app can submit as many chunks as it needs; they are stored as they arrive and processed only once the end of data is signaled.
Please remember that payloads are case-sensitive. For example, using "Data" instead of "data" in the example below will produce an error.
Data chunks can be submitted by performing a POST request to the submitResults link obtained in step 4. The request body can optionally contain a field clientGeneratedId. If present, Assets will ignore the request if a chunk with the same identifier has already been submitted. You can therefore ensure idempotent calls to Assets by always assigning the same clientGeneratedId to the same chunks of data, even when you retry a request due to a network error, for example.
1 2{ "data": { ... your JSON import data ... }, "clientGeneratedId": "a-unique-id" }
This operation will create the chunk and store it - data won't be processed yet. The external application can continue to submit chunks and report progress, until data completion is signaled.
Individual chunks cannot be larger than 250MB, and the total size of all chunks cannot be larger than 250MB.
You can see an example of schema and mapping configuration along with a matching data chunk in the object schema and mapping guide.
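A minimal chunk-submission sketch, assuming Python with the requests library; the chunk contents and the way the clientGeneratedId values are derived are illustrative only:

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
submit_results_url = "https://api.atlassian.com/..."  # "submitResults" link from the start response

headers = {
    "Accept": "application/json",
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Illustrative chunks; the structure inside "data" is up to your app and your mapping.
chunks = [
    {"hardware": {"drives": [{"name": "Drive A"}]}},
    {"hardware": {"drives": [{"name": "Drive B"}]}},
]

for index, chunk in enumerate(chunks):
    payload = {
        "data": chunk,
        # Reusing the same id for the same chunk keeps retries idempotent.
        "clientGeneratedId": f"my-import-run-1-chunk-{index}",
    }
    requests.post(submit_results_url, headers=headers, json=payload).raise_for_status()
```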
In order to signal data completion, the external application must perform a request to the submitResults endpoint with a completed field set to true:
1 2{ "completed": true }
Assets will now start processing the submitted data chunks. No further data chunks or progress reports can be provided for this import execution, and no other import execution can be created for the same import source until this process is finished.
Customers can track the status of their import using Assets' user interface. Once data processing is complete, the import source will return to the IDLE status, so a new import can subsequently be started by repeating step 4.
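A minimal completion sketch, assuming Python with the requests library and the same submitResults URL used for the data chunks:

```python
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
submit_results_url = "https://api.atlassian.com/..."  # "submitResults" link from the start response

headers = {
    "Accept": "application/json",
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Signal that all chunks have been submitted; Assets will start processing them.
requests.post(submit_results_url, headers=headers, json={"completed": True}).raise_for_status()
```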
Note: You can combine a chunk with the use of the completed field, by submitting data and marking the import as completed all at once:
1 2{ "data": { ... import data ... }, "clientGeneratedId": "a-unique-id", "completed": true }
This is equivalent to invoking the endpoint twice, first with the data chunk and then with the data completion signal.
Please note that an import execution expires after 24 hours. All data chunks must be submitted, and data submission completion must be signalled within that timeframe.
Apps can use their own structure in the data field of their chunks. Assets will strip the root data field and store its whole value as its own JSON file. It will then be processed the same way a JSON import would process such a file. Therefore, your selectors and mapping shouldn't include the data prefix.
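For example, assuming a hypothetical chunk like the following (object and attribute names invented for illustration):

```
{
  "data": {
    "hardware": {
      "drives": [
        { "name": "Drive A", "size": "512GB" }
      ]
    }
  }
}
```

The selectors in the mapping for this data would start at hardware.drives, not data.hardware.drives.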
Once the import is marked as started by step 5b, the third-party app can perform a GET request to the getExecutionStatus URL obtained in 5b.
```
GET https://api.atlassian.com/jsm/assets/workspace/...
Accept: application/json
Authorization: Bearer EXAMPLE_TOKEN
```
An example of the returned payload:
1 2{ "status": "DONE", "progressResult": { "type": "IMPORT", "id": 438, "started": "2023-06-15T12:13:03.952+00:00", "ended": "2023-06-15T12:13:05.124+00:00", "objectSchemaId": 266, "result": "OK", "status": "FINISHED", "infoMessage": "No data to import", "jobId": "241d9437-eb25-4008-a346-8daafbb91026", "importSourceId": "3e9f17a6-61db-4bdd-a313-6dd38c81bfca", "populatedObjectTypes": [ "Operating System", "Hard Drive" ], "onlyExecutedForObjectTypes": [], "objectTypeResultMap": { "2548": { "id": null, "objectTypeName": "Operating System", "objectTypeId": 2548, "objectsUpdated": 0, "objectsCreated": 2, "objectsIdentical": 0, "objectsMissingUpdated": 0, "objectsMissingDeleted": 0, "entriesInSource": 2, "duplicateEnries": 0, "emptyLabelEntries": 0, "emptyExternalIdEntries": 0, "objectsFilteredWithQlQuery": 0, "errorMessages": null, "readExternalDataTimeInMs": 220, "mapExternalDataTimeInMs": 0, "qlQueryFilteringTimeInMs": 0, "decidingActionsTimeInMs": 0, "writeInsightDataTimeInMs": 266, "postFunctionTimeInMs": 0, "executionTimeInMs": 486, "objectsWithUpdatedReferences": 0 }, "2547": { "id": null, "objectTypeName": "Hard Drive", "objectTypeId": 2547, "objectsUpdated": 0, "objectsCreated": 1, "objectsIdentical": 0, "objectsMissingUpdated": 0, "objectsMissingDeleted": 0, "entriesInSource": 1, "duplicateEnries": 0, "emptyLabelEntries": 0, "emptyExternalIdEntries": 0, "objectsFilteredWithQlQuery": 0, "errorMessages": null, "readExternalDataTimeInMs": 221, "mapExternalDataTimeInMs": 0, "qlQueryFilteringTimeInMs": 0, "decidingActionsTimeInMs": 0, "writeInsightDataTimeInMs": 154, "postFunctionTimeInMs": 0, "executionTimeInMs": 375, "objectsWithUpdatedReferences": 1 } }, "errorMessages": null, "totalNumberEntriesInImport": 3 } }
The progressResult field will only be present when status is DONE.
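A minimal polling sketch, assuming Python with the requests library, that waits until the execution reaches one of the terminal statuses listed below:

```python
import time
import requests

# Hypothetical values captured in earlier steps.
token = "EXAMPLE_TOKEN"
get_execution_status_url = "https://api.atlassian.com/jsm/assets/workspace/..."  # "getExecutionStatus" link

headers = {"Accept": "application/json", "Authorization": f"Bearer {token}"}

# Poll until the execution reaches a terminal status (DONE or CANCELLED).
execution = requests.get(get_execution_status_url, headers=headers).json()
while execution["status"] not in ("DONE", "CANCELLED"):
    time.sleep(30)  # back off between polls
    execution = requests.get(get_execution_status_url, headers=headers).json()

if execution["status"] == "DONE":
    result = execution["progressResult"]
    print(result["result"], result["totalNumberEntriesInImport"])
```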
Following are all the different status values:

- INGESTING (the import execution is currently accepting data chunks via the submitResults URL from 5b)
- PROCESSING (the import execution has been notified of the final data chunk submission and has started processing all data chunks to write to Assets)
- DONE (the import execution has completed)
- CANCELLED (the import execution has been cancelled)

External applications can cancel an import execution they have started as long as they haven't signalled data completion.
For this purpose, a DELETE should be performed against the cancel endpoint URL as obtained in step 4.
```
DELETE https://api.atlassian.com/...
Accept: application/json
```
This will return the import source to IDLE, so a new import can subsequently be started by repeating step 5b.
DELETE actions can include a custom error message to be displayed in the Import History dialog within Assets. The error messages should be formatted in JSON as follows:
1 2{ "errors": [ { "message": "Unable to complete import for <reasons>." }, { "message": "Unable to complete import for <other-reasons>." } ] }