This article provides details about rate limiting in Confluence to help you anticipate rate limiting and manage how your app responds.
Confluence limits the rate of REST API requests to ensure that services are reliable and responsive for customers.
These rate limits are implemented as a set of rules that consider the number of threads handling certain kinds of requests, the cost of the requests, and the resources required by the requests.
Each rule applies to a unique combination of resources such as nodes, tenants, database hosts, endpoints, or compute resources. Rules are evaluated in a sequence designed to maximize computational efficiency.
Each rule has a rate threshold and a request counter. Request processing above the threshold is blocked, including downstream processing. Cost calculations that involve other rules can also factor into rate limiting, as different requests can require different types and amounts of resources.
All requests migrating data from server to cloud must include the `Migration-App` header with the value `true`. These requests are subject to two additional migration-specific rate limits. For more information about these rate limits, see the App migration platform documentation.
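For illustration, a migration request might set the header as follows. This is a hypothetical sketch: the endpoint, authentication, and payload are placeholders, with only the `Migration-App` header taken from the requirement above.

```typescript
// Hypothetical server-to-cloud migration request. The only detail taken from
// this article is the required Migration-App header; the URL, auth mechanism,
// and body are placeholders.
const accessToken = process.env.ATLASSIAN_TOKEN ?? "";
const payload = { example: "migration payload" };

const response = await fetch("https://your-site.atlassian.net/wiki/rest/api/content", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${accessToken}`,
    "Content-Type": "application/json",
    "Migration-App": "true", // required on all server-to-cloud migration requests
  },
  body: JSON.stringify(payload),
});
```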
REST API rate limits factor in both quota-based and burst-based limits.
Quota and burst rate limiting is implemented as a set of rules that consider the app sending the request, the tenant receiving that request, the number of requests, the edition of the product being queried, and its number of users.
The rules are applied independently to burst (10 seconds) and quota (1 hour) periods to determine an appropriate maximum number of requests. Request processing above the threshold is blocked, including downstream processing.
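Because the exact thresholds depend on the app, tenant, edition, and user count, apps cannot hard-code the limits, but they can pace their own traffic. The following is a minimal client-side pacing sketch; the budget numbers are assumptions for illustration only:

```typescript
// Minimal client-side pacing sketch. The budget numbers are assumptions for
// illustration; actual limits vary by app, tenant, product edition, and users.
class RequestPacer {
  private timestamps: number[] = [];

  constructor(
    private burstLimit = 100,          // assumed requests per burst window
    private burstWindowMs = 10_000,    // burst window from this article (10 s)
    private quotaLimit = 1_000,        // assumed requests per quota window
    private quotaWindowMs = 3_600_000, // quota window from this article (1 h)
  ) {}

  // Resolves when sending one more request would stay inside both windows.
  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now();
      this.timestamps = this.timestamps.filter(t => now - t < this.quotaWindowMs);
      const inBurst = this.timestamps.filter(t => now - t < this.burstWindowMs).length;
      if (inBurst < this.burstLimit && this.timestamps.length < this.quotaLimit) {
        this.timestamps.push(now);
        return;
      }
      await new Promise(resolve => setTimeout(resolve, 250)); // wait and re-check
    }
  }
}
```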
We will begin enforcing rate limits for all free apps on or after August 18, 2025. However, please note that in circumstances where apps are highly impacting the stability of our platform, we reserve the right to enforce the limits at an earlier date.
Apps can detect rate limits by checking whether the HTTP response status code is `429`. Any REST API can return a rate limit response.

`429` responses may be accompanied by the `Retry-After` header, which indicates how many seconds the app must wait before reissuing the request. If you reissue the request before the retry period expires, the request will fail and return the same or a longer `Retry-After` period.
Some transient `5XX` errors are also accompanied by a `Retry-After` header. For example, a `503` response may be returned when a resource limit is reached. While these are not rate limit responses, they can be handled with similar logic, as outlined below.
Other response headers include:

- `X-RateLimit-Reset`: the timestamp (in ISO 8601 format) at which the rate limit will reset, calculated as the request timestamp plus the `Retry-After` period.
- `X-RateLimit-Limit`: the maximum number of requests that a user can make within a specific time window.
- `X-RateLimit-Remaining`: the number of requests remaining in the current rate limit window before the limit is reached.

You can retry a failed request if all of these conditions are met:
- The response indicates that a retry may succeed (for example, a `Retry-After` header or a `429` status).

Apps should treat `429` responses as a signal to alleviate pressure on an endpoint and retry the request only after a delay. The best practice is to double the delay after each successive `429` response from a given endpoint. Backoff delays only need to increase exponentially up to a maximum value, at which point retries can continue with that fixed maximum delay. You should also apply jitter to the delays to avoid the thundering herd problem.
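As a sketch of how these signals can be read in practice, the following TypeScript checks a response for the headers described above (the endpoint URL is a placeholder):

```typescript
// Sketch: inspect rate limit signals on a response. The URL is a placeholder.
const response = await fetch("https://your-site.atlassian.net/wiki/rest/api/space");

if (response.status === 429) {
  // Seconds to wait before reissuing the request, when provided.
  const retryAfterSeconds = Number(response.headers.get("Retry-After") ?? "0");
  // ISO 8601 timestamp at which the current window resets, when provided.
  const resetsAt = response.headers.get("X-RateLimit-Reset");
  console.warn(`Rate limited; retry after ${retryAfterSeconds}s (resets at ${resetsAt})`);
} else {
  // Track the remaining budget so the app can back off before hitting the limit.
  const limit = response.headers.get("X-RateLimit-Limit");
  const remaining = response.headers.get("X-RateLimit-Remaining");
  console.log(`Rate limit budget: ${remaining}/${limit} remaining`);
}
```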
The following articles provide useful insights and techniques related to retry and backoff processing:
There are several considerations that govern the rate limit response handling:
These parameters are influenced by higher-level considerations, such as:
The following pseudo code illustrates the recommended response processing logic:
```
// Defaults may vary based on the app use case and APIs being called.
let maxRetries = 4; // Should be 0 to disable (e.g. API is not idempotent)
let lastRetryDelayMillis = 5000;
let maxRetryDelayMillis = 30000;
let jitterMultiplierRange = [0.7, 1.3];
let retryCount = 0; // Reset for each new request

// Re-entrant logic to send a request and process the response...
let response = await fetch(...);
if (response is OK) {
  handleSuccess(...);
} else {
  let retryDelayMillis = -1;
  if (hasHeader('Retry-After')) {
    retryDelayMillis = 1000 * headerValue('Retry-After');
  } else if (statusCode == 429) {
    retryDelayMillis = min(2 * lastRetryDelayMillis, maxRetryDelayMillis);
  }
  if (retryDelayMillis > 0 && retryCount < maxRetries) {
    // Apply jitter by scaling the delay by a random multiplier.
    retryDelayMillis = retryDelayMillis * randomInRange(jitterMultiplierRange);
    lastRetryDelayMillis = retryDelayMillis;
    delay(retryDelayMillis);
    retryCount++;
    retryRequest(...);
  } else {
    handleFailure(...);
  }
}
```
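The pseudo code above is intentionally schematic. As a more concrete illustration, here is a self-contained TypeScript sketch of the same logic, assuming a Node 18+ or browser environment with a global `fetch`; the function name `fetchWithRetry` and its helpers are ours, not part of any Atlassian SDK:

```typescript
// A runnable sketch of the pseudo code above (Node 18+ / browser fetch).
// Constants mirror the defaults shown in the pseudo code.
const MAX_RETRIES = 4;               // 0 disables retries (e.g. non-idempotent APIs)
const INITIAL_RETRY_DELAY_MS = 5000;
const MAX_RETRY_DELAY_MS = 30000;
const JITTER_RANGE: [number, number] = [0.7, 1.3];

const randomInRange = ([lo, hi]: [number, number]) => lo + Math.random() * (hi - lo);
const delay = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

async function fetchWithRetry(url: string, init?: RequestInit): Promise<Response> {
  let lastRetryDelayMs = INITIAL_RETRY_DELAY_MS;
  for (let retryCount = 0; ; retryCount++) {
    const response = await fetch(url, init);
    if (response.ok) return response;

    // Prefer the server-provided Retry-After (seconds); otherwise, on a 429,
    // double the previous delay up to the configured maximum.
    const retryAfter = response.headers.get("Retry-After");
    let retryDelayMs = -1;
    if (retryAfter !== null) {
      retryDelayMs = 1000 * Number(retryAfter);
    } else if (response.status === 429) {
      retryDelayMs = Math.min(2 * lastRetryDelayMs, MAX_RETRY_DELAY_MS);
    }

    // Not retryable, or retry budget exhausted: hand the response back.
    if (retryDelayMs <= 0 || retryCount >= MAX_RETRIES) return response;

    retryDelayMs *= randomInRange(JITTER_RANGE); // jitter to avoid thundering herd
    lastRetryDelayMs = retryDelayMs;
    await delay(retryDelayMs);
  }
}
```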
Some apps may invoke the REST API concurrently via multiple threads and/or multiple execution nodes. When this is the case, developers may choose to share rate limit responses between threads and/or execution nodes so that API requests take into account rate limiting that may have occurred in other execution contexts. Distributing rate limit response data is non-trivial, so an alternate strategy is to back off more quickly and/or increase the maximum number of retries. This second strategy may result in poorer performance and may need tuning if the characteristics of the app change.
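Within a single process, sharing rate limit state can be as simple as a common "paused until" timestamp that every request path consults; extending this across execution nodes would require a shared store that all nodes can reach. A minimal single-process sketch, with all names ours:

```typescript
// Single-process sketch: all concurrent callers respect one shared pause.
// Extending this across nodes would require an external shared store.
let pausedUntil = 0; // epoch millis; 0 means no pause in effect

async function acquireSharedSlot(): Promise<void> {
  const waitMs = pausedUntil - Date.now();
  if (waitMs > 0) await new Promise(resolve => setTimeout(resolve, waitMs));
}

function recordRateLimit(retryAfterSeconds: number): void {
  // Keep the furthest-out pause any caller has observed.
  pausedUntil = Math.max(pausedUntil, Date.now() + retryAfterSeconds * 1000);
}

// Usage: before each request, await acquireSharedSlot(); on a 429 response,
// call recordRateLimit(Number(response.headers.get("Retry-After") ?? "5")).
```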
These are some strategies to spread out requests and thereby lower the peaks.
A high level of concurrency in apps may slow the performance of Confluence, causing a less responsive user experience. Significant levels of concurrency will also result in a greater chance of rate limiting.
If your app makes many similar requests in a short amount of time, coordination of backoff processing may be necessary.
Although multi-threaded apps may see greater throughput for a short period, you should not attempt to use concurrency to circumvent rate limiting.
When performing scheduled tasks, apply jitter to requests to avoid the thundering herd problem. For example, try to avoid performing tasks “on the hour.” This approach can be applied to many types of actions, such as sending daily email digests.
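As a sketch, a daily task can be given a stable random offset within an hour-long window so that tenants do not all fire at once; the helper below is illustrative, not part of any scheduling API:

```typescript
// Sketch: derive a stable, jittered start time for a daily task so that
// tenants do not all fire "on the hour". Names here are illustrative.
function jitteredDailyOffsetMs(tenantId: string, windowMs = 60 * 60 * 1000): number {
  // Hash the tenant id to a stable offset within the window.
  let hash = 0;
  for (const ch of tenantId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % windowMs;
}

// A digest scheduled for 06:00 actually runs at 06:00 plus up to an hour,
// at a consistent time per tenant, spreading load across the window.
const offsetMs = jitteredDailyOffsetMs("example-tenant-id");
```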
When you need to perform a large amount of ad-hoc processing, such as when migrating data, you should anticipate and account for rate limiting. For example, if the API calls are directed to a single tenant, it may be possible to schedule the activity at night or on a weekend to minimize customer impact while maximizing throughput.
There are several "bulk" operations that consolidate requests. For example, Get multiple users using ids.
Many operations also enable queries to be consolidated by specifying expand query parameters.
Confluence provides a range of context parameters that can help minimize the number of API requests necessary. Also, note that conditions can be sent as context parameters.
As rate limiting is based on concurrency and cost, minimizing the amount of data requested will yield benefits. An obvious strategy to start with is caching. You can also save resources by specifying which fields or properties to return for operations that support it, such as search. Similarly, only request the data you need when using expand query parameters. You can use pagination in your requests to limit the number of matches retrieved per call. Using webhooks to subscribe to data updates can also lower your data request volume, thereby lowering the risk of rate limiting.
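As an illustration, a paginated Confluence content query that requests only the data it needs might look like the following sketch; it assumes the classic `/wiki/rest/api/content` endpoint with `results`/`_links.next` pagination, basic auth, and a Node 18+ `fetch`:

```typescript
// Sketch: fetch pages in small batches, expanding only what is needed.
// The site URL and credentials are placeholders; the endpoint shape follows
// the classic Confluence Cloud REST API (results + _links.next pagination).
const base = "https://your-site.atlassian.net/wiki";
const credentials = Buffer.from("user@example.com:api-token").toString("base64");
let next: string | null = "/rest/api/content?type=page&limit=25&expand=version";

while (next) {
  const response = await fetch(base + next, {
    headers: { Authorization: `Basic ${credentials}` }, // assumed basic auth
  });
  if (!response.ok) break; // rate limit and error handling omitted for brevity

  const body: any = await response.json();
  for (const page of body.results) {
    console.log(page.id, page.title, page.version?.number);
  }
  next = body._links?.next ?? null; // relative URL of the next batch, if any
}
```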
Do not perform rate limit testing against Atlassian cloud tenants because this will place load on Atlassian servers and may impact customers.
The Acceptable Use Policy identifies your obligations to avoid overwhelming Atlassian infrastructure.
The following Jira issues capture known limitations and enhancements relating to rate limiting:
| Question | Answer |
|---|---|
| Can we expect that these rate limits will be adjusted over time by Atlassian? | Yes. In order to continuously ensure that services are reliable and responsive for our shared customers, we reserve the right to adjust these limits at any time and will keep you apprised of any changes. |
| Will my app be impacted by rate limits? | Hard enforcement for all free apps will not begin until August 18, 2025, at the earliest. However, in some circumstances where apps are highly impacting the stability of our platform, we reserve the right to enforce the limits at an earlier date. We will notify your listed account contact directly via email if impacted. Please monitor the response headers to see where you stand with regard to the limits. |
| Will this impact Jira apps as well? What about paid apps? | Yes, you can view the Jira API rate limit documentation here. Additionally, we are planning to bring clarity to rate limits across our platform infrastructure over the next year, including paid apps. |
| How can I tell if I'm getting close to the limits? | For now, please monitor the response headers described above (`X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset`) to see where you stand with regard to the limits. |
| Will this result in breaking changes? | No. These changes do not break the impacted APIs; instead, they reduce the request limit, which means the app may subsequently need to reduce crawl and refresh rates. Please ensure your app is able to handle the response headers as required. |
| What error message will customers receive? | Customers will see whichever error message you already have in place to handle API status code `429`, as per the API documentation. If you do not have this in place, we recommend the following message: |
| What can I do to reduce API calls? | Please review the API hygiene information above under Lowering the request cost. |
| Will customers be notified of this change? | Not at this time. While customers may be impacted, partners will have the capability to address customer inquiries more efficiently and directly on a case-by-case basis. |
| Who can I contact if I still have additional questions or need support? | For additional support, please submit a ticket. |