Code insights is an API that accepts data for a commit and displays it to users viewing a pull request. Both a Java and a REST API are available, so it suits many different types of integrations: an in-product app triggered by a Java event, a microservice triggered by a webhook, or a script that runs as part of your CI system. This how-to explains the process for writing an integration that uses the Code Insights API to post a report and annotations to a pull request. If you are looking for existing integrations, there are a number of tools that already post code insights to Bitbucket Server; you can find them on the Atlassian Marketplace.
The code insights feature provides an API for integrations to annotate a pull request with data. The data is saved in Bitbucket Server and displayed in the form of a report and annotations in the code. A report is displayed on the overview tab of the pull request. It contains a title, a passed/failed state, a description, and up to 6 data fields that can be used to display information that isn't specific to a given line of code. Annotations are associated with a report; they cannot be posted on their own. Annotations are attached to a specific line and file in the diff. They contain a severity level and a message, as well as, optionally, a link to your integration where the user can find more information about why the annotation was created.
As with the build status API, code insights are only displayed if they are associated with the latest commit on a pull request's source branch. This means that code insights tools should listen to updates to the branch (either via the Java events if the integration is in the form of an in-product Java app, or via webhooks if the integration is part of an external system) and create a new report every time the branch is updated.
While reports from the latest commit are always displayed, only annotations that appear on lines in the pull request diff will be displayed. This means that an integration can run its tool over the entire codebase and create annotations for issues found in all files, but only those that are relevant to the changes made in the pull request will be visible to the user.
In order to post insights, an integration must first choose an 'insight key' that uniquely identifies the insight report and its associated annotations. Each insight key can only be associated with one report per commit. Because there is no registration step, the integration needs to namespace its insight key so as not to clash with other integrations. We recommend combining reverse DNS domain name notation with the slugified report title in order to ensure a unique insight key.
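The recommended naming scheme can be sketched as follows. This is a minimal example; the reverse-DNS prefix and report title are hypothetical, and the slugification rules are an assumption (any scheme that yields a stable, unique key will do):

```python
import re

def make_insight_key(reverse_dns: str, report_title: str) -> str:
    """Build a namespaced insight key from a reverse-DNS prefix and a
    slugified report title, per the naming scheme recommended above."""
    # Slugify: lowercase, collapse non-alphanumeric runs into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", report_title.lower()).strip("-")
    return f"{reverse_dns}.{slug}"

# e.g. for a hypothetical 'Example Lint' tool owned by example.com:
key = make_insight_key("com.example.lint", "Example Lint Report")
# → "com.example.lint.example-lint-report"
```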
There is no way to 'claim' an insight key. However, once a report has been created for a given insight key and commit, only the user who created it is able to modify, delete or add annotations to that report. Note that other users are still free to create reports with that insight key on other commits. For integrations using the REST API, it may be best to create a dedicated user for the integration and create insights as that user. For an in-product Java app, create a dedicated service user so as not to use up the seat of a licensed user.
In order to create a report or annotations, the authenticated user must have repository read permission. For security reasons, we recommend creating a personal token with read-level permissions.
Reports enable integrations to give a high-level overview of the results of the analysis and display data that is not specific to any given file. A report must be created before any annotations can be added, as annotations must be associated with an existing report.
A report contains:

- a title
- a passed/failed result
- a description
- up to 6 data fields for information that isn't specific to a given line of code
The data fields on a report are intended to enable reporters to display up to 6 pieces of information that aren't already captured by the other fields. They contain a title, a value, and optionally a type describing the shape of the value field. The value can be either a primitive value, such as a number or a string, or a complex value described by the provided type. For example, a duration type field will have a value in milliseconds but be displayed in a human-readable format, while a link type field will have a map containing the hyperlink and link text as its value and will be displayed as a clickable link on the report. For a full description of the supported types see the 'Data parameters' section of the REST documentation.
To create a report via REST, do a PUT to the report endpoint with the request body containing the required fields. This can also be done from a Java app by calling the corresponding method on the insight report service and passing it a request object.
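A minimal sketch of the REST call, using only the Python standard library. The endpoint path follows the Code Insights REST documentation, but the base URL, project/repo/commit identifiers, token and report fields here are all placeholders; verify the path and field names against the REST docs for your Bitbucket Server version:

```python
import json
import urllib.request

BASE = "https://bitbucket.example.com"  # placeholder instance URL
TOKEN = "my-token"                      # placeholder personal access token

def report_url(project: str, repo: str, commit: str, insight_key: str) -> str:
    # Endpoint shape as described in the Code Insights REST documentation.
    return (f"{BASE}/rest/insights/1.0/projects/{project}/repos/{repo}"
            f"/commits/{commit}/reports/{insight_key}")

def put_report(project, repo, commit, insight_key, report):
    """PUT the report body to the report endpoint (network call:
    requires a live Bitbucket Server instance)."""
    req = urllib.request.Request(
        report_url(project, repo, commit, insight_key),
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical report body; field names assumed from the REST docs.
report = {
    "title": "Example Lint Report",
    "result": "FAIL",  # passed/failed state
    "details": "4 issues found in this commit.",
}
```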
To update the report, simply perform the same action as creating it (make sure the insight key and commit are both the same as the existing report) and the existing report will be overwritten by the provided report. The update must be performed by the same user that created the report. Remember that only the reports associated with the latest commit of a pull request will be displayed, so be sure to update the report after each push.
To delete a report, do a DELETE to the report endpoint (for REST callers) or call the corresponding method on the insight report service (for Java callers). Deleting a report automatically deletes all associated annotations. Reports (and their associated annotations) are also automatically deleted after 60 days, and when repositories are deleted, so there is no need for reporters to do their own cleanup of the reports they create.
Annotations enable integrations to highlight specific lines to display data from the result of an analysis. It is assumed that reporters will do an analysis on the source branch of a pull request, and as such might find issues on lines and files that aren't changed by the pull request author. Because of this, only annotations that are on lines that have been changed in a pull request are displayed. Annotations can also be created on line 0, which will be displayed as a file-level annotation on any file that has been modified.
After the report is created, up to 1000 annotations can be added to the report. This number can be configured at the instance level via a configuration property. If the request would result in more than the maximum number of annotations being stored, the entire request is rejected and no new annotations are stored.
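The annotation shape and the cap check can be sketched like this. The field names (`path`, `line`, `message`, `severity`, `externalId`) follow my reading of the REST documentation and should be verified there; the 1000 limit is the default described above:

```python
MAX_ANNOTATIONS = 1000  # server default; configurable per instance

def build_annotation(path, line, message, severity="MEDIUM", external_id=None):
    """One annotation, attached to a line in a file; line 0 means a
    file-level annotation. Field names assumed from the REST docs."""
    annotation = {"path": path, "line": line,
                  "message": message, "severity": severity}
    if external_id is not None:
        # Needed later to update or delete this annotation individually.
        annotation["externalId"] = external_id
    return annotation

def annotations_payload(annotations):
    """Body for a bulk POST to the report's annotations endpoint."""
    if len(annotations) > MAX_ANNOTATIONS:
        # The server rejects the entire request if the cap would be
        # exceeded, so fail early rather than send a doomed request.
        raise ValueError(f"too many annotations: {len(annotations)}")
    return {"annotations": annotations}
```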
It is worth considering the behavior of the tool adding annotations to a report, as behavior such as re-running a build could result in duplicate annotations being created. Where this is a possibility, we recommend deleting all existing annotations in bulk before creating new ones.
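The recommended delete-then-recreate pattern can be sketched as below. The `http` client interface here is a hypothetical thin wrapper (any HTTP client works); the point is the order of operations:

```python
def refresh_annotations(http, annotations_url, new_annotations):
    """Avoid duplicates on re-runs: bulk-delete the report's existing
    annotations, then post the fresh set. `http` is any object with
    delete(url) and post(url, body) methods (hypothetical wrapper)."""
    http.delete(annotations_url)  # bulk delete, no request body
    http.post(annotations_url, {"annotations": new_annotations})

# A minimal fake client to demonstrate the order of operations:
class RecordingClient:
    def __init__(self):
        self.calls = []
    def delete(self, url):
        self.calls.append(("DELETE", url))
    def post(self, url, body):
        self.calls.append(("POST", url))

client = RecordingClient()
refresh_annotations(
    client,
    "https://bitbucket.example.com/rest/insights/annotations",  # placeholder
    [{"path": "src/app.py", "line": 3, "message": "issue", "severity": "LOW"}])
```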
Individual annotations can only be modified if an external ID was provided when the annotation was created. To update, do a PUT to the individual annotation endpoint (for REST callers) or call the corresponding method on the annotation service (for Java callers).
Annotations can be deleted in bulk by doing a DELETE to the annotations endpoint. Annotations can also be deleted in bulk by deleting the associated report. Individual annotations can only be deleted if an external ID was provided when the annotation was created. To delete, do a DELETE to the individual annotation endpoint (for REST callers) or call the corresponding method on the annotation service (for Java callers).
Here are some important things to remember when building an integration that uses code insights:
As with build status, only reports and annotations from the latest commit on the source branch of a pull request are displayed. This means that every time the pull request is updated, a new report needs to be created and sent to Bitbucket. This ensures that reports and annotations accurately reflect the current state of the branch and avoids showing stale or inaccurate data.
Annotations that are on lines that have not been changed as part of the pull request will not be displayed.
Annotations will not be shown on the side-by-side diff, commit-level diff or iterative review diff.
Files that are not human-readable (like binary files) will display all annotations associated with the file as a file-level annotation since changed lines are not able to be determined. Files that are displayed in a special format but are still human readable (e.g. SVG files are displayed as pictures) will only display annotations that are on changed lines, but since there are no lines to attach them to in the UI they will be displayed alongside file-level annotations.
Static analysis is done on the source branch, but the diff Bitbucket displays is actually different. When using git on the command line to compare branches, most often one will use the ‘common ancestor’ diff (also known as the ‘merge-base diff’) which shows the diff from the point where the source branch diverged from the target branch. However, Bitbucket Server shows the ‘effective diff’ (or ‘preview-merge diff’) which shows the changes as they would look after the pull request is merged. This means that the files being displayed in a pull request diff will contain changes from the target branch also (although only changes made on the source branch will be highlighted).
This enables reviewers to get a better picture of what the code will look like after the pull request is merged and how the changes interact with the changes on the target branch.
The consequence of this is that, for files that have been changed on the target branch, Bitbucket cannot accurately place annotations: the line that was annotated on the source commit may have moved to a different line in the effective diff being displayed, due to the changes made on the target branch. In this case we show the annotations in a separate dialog, not on the diff.
Posting a failed report will not stop a pull request from being merged. There are build status settings that can be set up for a repository or project that will block the merge, so it is recommended to also post a failed build status if you want a report to affect the mergeability of a pull request.
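Pairing the report with a build status can be sketched as follows. The build status endpoint shown in the comment and the `state`/`key`/`url` field names follow my understanding of the Bitbucket Server build status REST API; verify them against that API's documentation for your server version:

```python
def build_status_payload(passed: bool, key: str, build_url: str) -> dict:
    """Body for posting a build status alongside the insight report
    (e.g. POST /rest/build-status/1.0/commits/{commitId} on Bitbucket
    Server; endpoint and field names assumed from the build status docs).
    Posting FAILED lets merge-check settings block the merge."""
    return {
        "state": "SUCCESSFUL" if passed else "FAILED",
        "key": key,              # identifies the build/integration
        "url": build_url,        # where users can see the full results
    }

# Hypothetical usage: the lint run failed, so fail the build status too.
payload = build_status_payload(False, "com.example.lint",
                               "https://ci.example.com/build/42")
```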