This page contains announcements and updates for developers from various products, platforms, and programs across Atlassian. It includes filter controls to make it easier to only see updates relevant to you.
To ensure you don’t miss any updates, we also provide RSS feeds. These feeds will take on any filters you applied to the page, and are a standardized way of keeping up-to-date with Atlassian changes for developers. For example, in Slack with the RSS app installed, you can type /feed <FEED URL> in any channel, and RSS updates will appear in that channel as they are posted.
We are upgrading from jQuery 3 to 4 in Jira 12, Confluence 11, Bitbucket 11, Bamboo 13, and Crowd 8. jQuery migrate will also be removed. Much frontend code depends on jQuery and we expect this will require upgrade work in apps (e.g. P2 plugins) and custom integrations with a frontend.
We consider this a breaking change and thus do not plan to backport to existing LTS releases.
This is to continue to meet customers' demands for secure and compliant products. Because so much of our products and apps depend on jQuery, we must be proactive in this upgrade. In the past we received many requests to upgrade from jQuery 2 to 3, especially when vulnerabilities were found in jQuery.
As we learn more and refine developer tooling to assist with the work, we will update the developer documentation.
Many of the changes can be prepared for in a way that's backwards compatible with v3, so as many (compatible) changes as possible will also be backported to the LTS versions of Jira (11.3), Confluence (10.2), and Bamboo (12.1), and to the latest versions of Bitbucket 10 and Crowd 7. The intention is to make it easier to test your app without having to worry about unrelated breaking changes.
Similarly, AUI 10.1 adds support for jQuery 4.
We will continue to provide jQuery web-resources, and we ask developers to use them so that, if needed, we can roll out security patches as quickly as possible.
We are still early in upgrading the Data Center products themselves. Future EAP versions will come with jQuery 4, but this might not arrive in the first few versions.
We have not yet noticed any JS behaviour or signature changes that would affect apps. Please let us know if you spot something, and we will document it.
See the developer community announcement topic for more information and to leave feedback.
Forge Feature Flags is now available through the Early Access Program. To join the EAP, please complete this sign-up form.
This native feature flagging solution allows you to:
Test new features with select customers before a full rollout and collect early feedback
Quickly fix bugs for specific sites or customer groups
Gradually release features using percentage-based rollouts
Gather targeted feedback and iterate rapidly
Optimize app costs by controlling when features consume Forge resources
Forge Feature Flags includes a server-side SDK (@forge/feature-flags-node), allowing you to evaluate feature flags in applications based on the Forge Node runtime.
For implementation details and examples, refer to the documentation here.
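Percentage-based rollouts like those listed above are typically implemented by deterministically hashing a stable identifier (such as a site or account ID) into a bucket. The sketch below illustrates the general technique only; it is not the @forge/feature-flags-node implementation, whose internals are not described here, and the function names are hypothetical.

```typescript
import { createHash } from "crypto";

// Deterministically map a stable identifier to a bucket in [0, 100),
// so the same user always lands in the same bucket for a given flag.
function bucketFor(flagKey: string, stableId: string): number {
  const digest = createHash("sha256").update(`${flagKey}:${stableId}`).digest();
  // Use the first 4 bytes as an unsigned integer, then reduce modulo 100.
  return digest.readUInt32BE(0) % 100;
}

// A flag rolled out to `percentage` percent of users is "on" for a user
// when their bucket falls below the rollout threshold.
function isEnabled(flagKey: string, stableId: string, percentage: number): boolean {
  return bucketFor(flagKey, stableId) < percentage;
}
```

Because the bucket is derived from a hash rather than stored state, a user's flag assignment is stable across evaluations and gradually widening the percentage only ever turns the flag on for additional users, never off for existing ones.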
The Forge platform will be undergoing maintenance in FedRAMP production on Feb 14, 2026 between 6-7am UTC.
There may be a few minutes of downtime within this window. During this time, the following capabilities will be intermittently unavailable:
Creating, updating, or deleting apps
Deploying apps
Installing, uninstalling, or upgrading apps
App invocations will continue to work for existing users of the apps. However, new customers may be unable to use apps, as the consent process will also be impacted during this interval.
As part of the end of support for Connect apps, we will be deprecating the addon linkers APIs.
On May 7, 2026 we will be removing the following endpoints:
GET /2.0/addon/linkers
GET /2.0/addon/linkers/{linker_key}
GET /2.0/addon/linkers/{linker_key}/values
PUT /2.0/addon/linkers/{linker_key}/values
POST /2.0/addon/linkers/{linker_key}/values
DELETE /2.0/addon/linkers/{linker_key}/values
GET /2.0/addon/linkers/{linker_key}/values/{value_id}
DELETE /2.0/addon/linkers/{linker_key}/values/{value_id}
The KVS and Custom Entity Store now support a new SetOptions type for data write requests. This lets you:
Change the write conflict strategy
Add specific metadata to the response
Add an expiry or Time-to-live (TTL) to the stored data
You can use SetOptions for the following methods:
The following methods also support SetOptions, but only to set a TTL:
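To make the three capabilities above concrete, here is a hypothetical in-memory model of an options shape like SetOptions. The field names (conflictStrategy, ttlSeconds, includeMetadata) are illustrative assumptions, not the actual @forge/kvs type definitions; consult the KVS documentation for the real shape.

```typescript
type ConflictStrategy = "overwrite" | "failIfExists";

interface SetOptions {
  conflictStrategy?: ConflictStrategy; // how conflicting writes are resolved
  ttlSeconds?: number;                 // expire the entry after this many seconds
  includeMetadata?: boolean;           // return write metadata in the response
}

interface SetResult {
  ok: boolean;
  metadata?: { writtenAt: number; expiresAt?: number };
}

// Minimal in-memory store illustrating how the three options interact.
class MiniKvs {
  private store = new Map<string, { value: unknown; expiresAt?: number }>();

  set(key: string, value: unknown, opts: SetOptions = {}): SetResult {
    // Conflict strategy: refuse the write if the key already exists.
    if (opts.conflictStrategy === "failIfExists" && this.store.has(key)) {
      return { ok: false };
    }
    const writtenAt = Date.now();
    const expiresAt =
      opts.ttlSeconds !== undefined ? writtenAt + opts.ttlSeconds * 1000 : undefined;
    this.store.set(key, { value, expiresAt });
    return opts.includeMetadata
      ? { ok: true, metadata: { writtenAt, expiresAt } }
      : { ok: true };
  }

  get(key: string): unknown {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && entry.expiresAt <= Date.now()) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }
}
```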
We’re enforcing a set of OAuth and token-authentication changes for Bitbucket Cloud on May 4th, 2026. These updates improve security, align more closely with OAuth standards, and support long-term performance and scalability.
If your integration relies on any of the listed behaviours being deprecated, please update it before May 4th, 2026. After the cutoff, requests using deprecated authentication patterns will no longer be accepted.
Changes to the client credentials grant flow
Client credentials grants will no longer issue refresh tokens; existing refresh tokens from this flow will expire or no longer be returned
Client credentials access from OAuth consumers owned by personal workspaces will only have access to data residing in the owning workspace.
Client credentials grants will authenticate as an app_user.
Changes to refresh tokens grant flow
Consumers must support rotating refresh tokens; each use of a refresh token will generate a new refresh token.
Unused refresh tokens will expire after 3 months, requiring full 3LO re-authorization.
Other changes to OAuth 2.0
OAuth token response payloads will return "scope" instead of "scopes".
OAuth access tokens can no longer be provided via query parameters or POST body. They must be sent exclusively in the Authorization header as a Bearer token.
All token-based authenticated requests must be directed to https://api.bitbucket.org.
Currently, Bitbucket issues a refresh token with the client credentials grant flow. Per https://datatracker.ietf.org/doc/html/rfc6749#section-4.4.3, refresh tokens should not be included. Issuing refresh tokens introduces security risks: they can be compromised and misused, while client credentials are intended for non-interactive, server-to-server use without long-lived tokens. To strengthen security and align with best practices, we will stop issuing refresh tokens for the client credentials grant flow on May 4th, 2026. Any previously issued refresh tokens from this flow will expire, and refresh tokens will no longer be returned in the client credentials access token response.
Stop expecting or using refresh tokens. Instead, re-authenticate directly using the client credentials grant flow as needed; it's simpler, more secure, and aligns with the intended use case.
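Re-authenticating with the client credentials grant is a single form-encoded POST. The sketch below builds that request without sending it; the token endpoint URL and response handling should be verified against Bitbucket's current OAuth documentation before use.

```typescript
// Assumed token endpoint -- confirm against Bitbucket's OAuth 2.0 docs.
const TOKEN_URL = "https://bitbucket.org/site/oauth2/access_token";

interface TokenRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// Build the HTTP request for a client credentials grant. The consumer's
// credentials go in a Basic Authorization header; the grant type goes in
// the form-encoded body. No refresh token is involved at any point.
function buildClientCredentialsRequest(
  clientId: string,
  clientSecret: string,
): TokenRequest {
  const basic = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
  return {
    url: TOKEN_URL,
    method: "POST",
    headers: {
      Authorization: `Basic ${basic}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ grant_type: "client_credentials" }).toString(),
  };
}
```

Because the grant is cheap and non-interactive, simply requesting a fresh access token whenever the previous one expires replaces any need to persist refresh tokens.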
Currently, OAuth consumers owned by personal workspaces authenticate as the owning user when using the client_credentials grant, without permission restrictions or data filtering. This differs from team-based workspaces, where client_credentials access is already limited to data within the owning workspace. To improve security and ensure consistent behavior across workspace types, this will change.
Starting May 4th, 2026, client_credentials grants for OAuth consumers owned by personal workspaces will no longer authenticate as the owning user and will be restricted to accessing data only within the owning workspace.
Switch to an API token if you need to access data outside of the owning workspace.
Previously, OAuth consumers owned by team-based workspaces using the client_credentials grant authenticated as a team user, with access limited to the owning workspace. However, OAuth consumers owned by personal workspaces using client_credentials currently authenticate as the user whose personal workspace it is.
After the cutoff date, client_credentials grants will authenticate as a dedicated app_user. Each app_user is unique per OAuth consumer and has permission to interact with the workspace that owns the OAuth consumer.
Actions performed (via our API) using these access tokens will be attributed to the app_user as the author, with the OAuth consumer’s name.
None
When refreshing an access token, a new refresh token is issued (replacing the old one). This standard security feature limits the lifespan of any single refresh token, reducing the window for compromise if a token is exposed.
When going through the refresh_token grant flow, always store the newly generated refresh token for future use. Once used, a refresh token will expire after a short grace period.
Any attempt to make an access token via the refresh token grant flow using an expired refresh token will return a 400.
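The rotation handling described above can be sketched as follows. The exchange and storage functions are injected (hypothetical names, not a Bitbucket SDK), so the essential rule stays visible: persist the newly issued refresh token immediately after every refresh, before doing anything else with the access token.

```typescript
interface TokenResponse {
  access_token: string;
  refresh_token: string; // a new refresh token is issued on every use
}

// Refresh the access token and persist the rotated refresh token.
async function refreshAccessToken(
  currentRefreshToken: string,
  exchange: (refreshToken: string) => Promise<TokenResponse>,
  persistRefreshToken: (token: string) => Promise<void>,
): Promise<string> {
  const response = await exchange(currentRefreshToken);
  // Store the rotated token before using the access token, so a crash after
  // this point cannot leave us holding only the soon-to-expire old token.
  await persistRefreshToken(response.refresh_token);
  return response.access_token;
}
```

If persistence fails, it is safer to surface the error than to continue: proceeding with an unstored rotated token means the next refresh attempt will use the expired old token and receive a 400, forcing full 3LO re-authorization.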
Refresh tokens that are not used within three months will expire, requiring users to complete the full three-legged OAuth flow again. This enhances security by preventing indefinite token validity and helps avoid large-scale token migrations in our systems.
None. Any attempt to make an access token via the refresh token grant flow using an expired refresh token will return a 400.
Our current access token response body uses the key "scopes" for the list of granted permission scopes. The OAuth 2.0 specification (https://datatracker.ietf.org/doc/html/rfc6749#section-5.1) specifies that this key should be named "scope". This is a minor fix, but one that aligns us with the spec for compliance and improves compatibility with standard OAuth libraries.
The new key "scope" will start being returned alongside the existing "scopes" property from Feb 2, 2026, as new properties are classified as non-breaking changes. Then, starting May 4th, 2026, we will return only the new key "scope".
Update parsing logic, if applicable, to expect "scope" instead of "scopes".
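A parser that reads "scope" first and falls back to "scopes" works in every phase of the transition: before Feb 2, 2026 (only "scopes"), during the overlap (both keys), and after May 4th, 2026 (only "scope"). A minimal sketch:

```typescript
// Return the granted scope string from a token response, preferring the
// spec-compliant "scope" key and falling back to the legacy "scopes" key.
function grantedScope(tokenResponse: Record<string, unknown>): string | undefined {
  const value = tokenResponse["scope"] ?? tokenResponse["scopes"];
  return typeof value === "string" ? value : undefined;
}
```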
You can currently pass the access token via the access_token query parameter (in the URL) or as a POST form parameter named access_token. This is strongly discouraged in https://datatracker.ietf.org/doc/html/rfc6750#section-2.2 due to security vulnerabilities: URLs are frequently logged in server access logs, browser histories, and proxies, increasing the risk of token exposure and theft.
To reduce these risks and promote secure practices, we are deprecating this mechanism. OAuth access tokens must now be sent exclusively in the Authorization header as a bearer token, as per https://datatracker.ietf.org/doc/html/rfc6750#section-2.1.
Ensure that you're authenticating with Bitbucket's API using OAuth access tokens sent as a Bearer token in the Authorization header.
Currently, some token-based authentication (this includes API tokens, app passwords, repository/project/workspace access tokens, and OAuth 2.0) can work against multiple endpoints, such as http://bitbucket.org, api.bitbucket.org, and https://bitbucket.org/api.
To improve performance, scalability, and consistency, we'll require all such requests to be directed exclusively to https://api.bitbucket.org after the cutoff. This consolidation helps us optimize infrastructure and security controls.
Ensure that all requests are issued to the api.bitbucket.org sub-domain.
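Both header placement and host requirements can be satisfied by building every API request through one helper, so a token can never leak into a URL and no request can target a legacy host. A minimal sketch (the helper name is illustrative):

```typescript
// Build a Bitbucket API request description with the token only in the
// Authorization header and the host fixed to api.bitbucket.org.
function apiRequest(
  path: string,
  accessToken: string,
): { url: string; headers: Record<string, string> } {
  return {
    url: `https://api.bitbucket.org${path}`,
    headers: { Authorization: `Bearer ${accessToken}` },
  };
}
```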
You can now build Forge apps that connect to and extend multiple Atlassian apps with a single installation. At launch, you can build apps that connect to Jira, Jira Service Management, Confluence, and Compass. We are looking to add support for other apps and platform surfaces in the future.
This new architecture enables apps to access data from and extend the UI of multiple Atlassian apps, unlocking new use cases and simplifying app management for admins. These apps are available on the Atlassian Marketplace and support unified installation and management.
To learn more and start building, see:
App Installation
App Management
Multiple-app compatibility makes Forge apps a core part of how customers orchestrate work across Atlassian apps.
Partner benefits:
New revenue opportunities: Multiple-app compatibility becomes a new Marketplace value driver.
Broader reach with less overhead: Build and operate one Forge app that connects to multiple Atlassian apps at once.
Extend existing offerings across multiple apps and surfaces without building and maintaining separate apps.
Customer benefits:
Richer, more interconnected experiences: Apps can show up wherever work happens, instead of being tied to a single host Atlassian app.
Less fragmentation: One app can span Jira, Jira Service Management, Confluence, and Compass, reducing duplicate configuration and vendor sprawl.
Better governance: Admins can view and manage these apps centrally (from Connected apps in Atlassian Administration) with clearer install and management flows.
The following models have been removed and are no longer supported by Forge LLMs:
claude-3-5-haiku-20241022
claude-3-7-sonnet-20250219
claude-opus-4-20250514
To check which models are currently supported, use the list function in the @forge/llm SDK. This function lets you filter models by their status.
Forge LLMs remain in Early Access (EAP). Due to high demand, participation is limited. To request access, join the waitlist here.
Rollout: progressive rollout by tenant (in progress).
We've updated the behavior of the Delete work type scheme API. Previously, you could delete work type schemes even when they were associated with projects. This is no longer allowed.
What's changing
Work type schemes that are associated with one or more projects can no longer be deleted
The Delete work type scheme API will return a validation error if you attempt to delete a scheme that is associated with projects
What you need to do
If you need to delete a work type scheme that is currently associated with projects, you must first reassign all projects to a different scheme:
Use the Get work type scheme API with the projects expand and id query parameter to get a list of projects associated with the scheme you want to delete
Use the Assign work type scheme to a project API to reassign all associated projects to another work type scheme
Once all projects have been reassigned, you can delete the unused scheme using the Delete work type scheme API
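The three steps above can be orchestrated as below. The API client interface is injected and its method names are hypothetical; the actual endpoint paths and parameters should come from the Work type schemes REST documentation.

```typescript
// Hypothetical client interface -- map these to the real Jira REST endpoints.
interface WorkTypeSchemeClient {
  getProjectsForScheme(schemeId: string): Promise<string[]>;
  assignScheme(projectId: string, schemeId: string): Promise<void>;
  deleteScheme(schemeId: string): Promise<void>;
}

// Reassign every associated project to a replacement scheme, then delete the
// now-unused scheme. Deleting first would fail with a validation error.
async function deleteWorkTypeScheme(
  client: WorkTypeSchemeClient,
  schemeToDelete: string,
  replacementScheme: string,
): Promise<void> {
  const projects = await client.getProjectsForScheme(schemeToDelete);
  for (const projectId of projects) {
    await client.assignScheme(projectId, replacementScheme);
  }
  await client.deleteScheme(schemeToDelete);
}
```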
This change is not backward-compatible. Integrations that attempt to delete work type schemes associated with projects will now receive validation errors and must be updated to follow the migration steps above.
For more information, see:
Work type schemes in the Jira Platform REST API documentation
Community post: Project fields association improvements for additional context
Related: CHANGE-2527 (deprecation notice)
Support for Node.js 24 is now available in @forge/cli from version 12.14.0.
To upgrade, run: npm install -g @forge/cli@latest
See the Forge documentation for setup instructions.
To reflect that Forge SQL & KVS Migrations are now suitable for use in production, the features have moved from Early Access to Preview status. Learn more at https://developer.atlassian.com/platform/app-migration/forge-storage/data-planes/
We're introducing new Beta rate-limit headers on Jira and Confluence REST APIs for points-based quota limits. These headers follow a unified, structured model aligned with standards on rate-limiting headers. They are informational only; they do not trigger enforcement or throttling. They are additive, and existing X-RateLimit-* headers continue to be returned.
Beta-RateLimit-Policy – policy definition
A static header that describes the rate-limit policy applied to the request.
Example: Beta-RateLimit-Policy: "global-app-quota";q=65000;w=3600
Beta-RateLimit – per‑response usage
A dynamic response header that provides usage signals for applicable rate-limit policies.
Example: Beta-RateLimit: "global-app-quota";r=13000;t=600
When these two headers are returned without the Beta- prefix (RateLimit, RateLimit-Policy), points-based quota limits are actively enforced, and requests may be rate limited. For points-based quota enforcement, only RateLimit and RateLimit-Policy are used; the existing X-Beta-RateLimit-* and X-RateLimit-* headers will not be used. Standard HTTP headers such as Retry-After continue to apply where relevant.
For full details, including policy definitions and usage semantics, see the Jira rate limiting documentation at https://developer.atlassian.com/cloud/jira/platform/rate-limiting/ and the Confluence Cloud rate limiting documentation at https://developer.atlassian.com/cloud/confluence/rate-limiting/.
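The example header values above can be read with a small parser. This sketch follows the structure shown in the examples (a quoted policy name followed by semicolon-separated key=value parameters, where q = quota, w = window in seconds, r = remaining, t = time to reset in seconds); consult the rate limiting documentation for the authoritative grammar.

```typescript
interface RateLimitItem {
  policy: string;                  // quoted policy name, e.g. global-app-quota
  params: Record<string, number>;  // numeric parameters, e.g. q, w, r, t
}

// Parse a header value like: "global-app-quota";q=65000;w=3600
function parseRateLimitHeader(value: string): RateLimitItem {
  const [name, ...pairs] = value.split(";");
  const params: Record<string, number> = {};
  for (const pair of pairs) {
    const [k, v] = pair.split("=");
    params[k.trim()] = Number(v);
  }
  return { policy: name.trim().replace(/^"|"$/g, ""), params };
}
```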
We have increased the workflow history storage period from 28 days to 60 days.
For more information, see the API documentation for List history for workflow and Read specific workflow version.
Additional Notes:
Workflow data from before Oct 30, 2025 remains unavailable as it predates this feature.
We have identified and fixed an issue where the purchaseDetails.discounts array (for discount type EXPERT) was not populated for some negative transaction line items.
Negative transaction lines usually represent credits for unused paid time.
“Unused paid time” refers to the credit a customer receives when they have already paid for a period of service but stop using it before the end of that period. Common example:
The customer upgrades to a higher user tier part‑way through the term. In these cases, the system issues a negative line (refund/credit) to return the unused portion of the original charge.
Sample transaction, for a given upgrade from the 400 to the 500 user tier:
| Transaction | Line # | Sales Type | Description | List amount | EXPERT discount | Net amount |
|---|---|---|---|---|---|---|
| IN-test-10001 | 1 | Upgrade | Upgrade Example App 400 → 500 users | $500.00 | $100.00 | $400.00 |
| IN-test-10001 | 2 | Refund | Credit for unused paid time on previous 400 tier | -$400.00 | -$80.00 | -$320.00 |
Line 2 is a negative transaction line (a credit) for unused paid time on the old 400‑user tier.
Previously:
For refund lines for unused paid time associated with Solution Partner transactions, the discount amount was not populated in the EXPERT field in the transactions API.
As a result, partners could see a negative transaction amount (credit issued to the customer) but a zero or null expert discount amount on those same lines, making it difficult to reconcile discount treatment on credits.
This behavior has now been corrected. For all impacted transactions:
The EXPERT field is now correctly populated for unused paid time credit lines where an EXPERT discount was applied.
The EXPERT field will display the discount amount on the negative line (-$80.00 in the example above).
In total, this change updates ~17,000 transaction lines across Marketplace partners. Partners can check updated transactions using the last_updated field in the transactions API.
https://developer.atlassian.com/platform/marketplace/rest/v2/api-group-reporting/#api-vendors-vendorid-reporting-sales-transactions-get
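With the EXPERT field now populated, each line in the sample table above reconciles as net amount = list amount - EXPERT discount, for positive and negative (refund) lines alike. A minimal check:

```typescript
// For any transaction line, the net amount is the list amount minus the
// EXPERT discount; the sign of the line carries through both terms.
function netAmount(listAmount: number, expertDiscount: number): number {
  return listAmount - expertDiscount;
}
```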