Last updated Jul 2, 2024

Forge Cache is now available as part of Forge Early Access Program (EAP). To start testing this feature, sign up using this form.

Forge Cache is an experimental feature offered to selected users for testing and feedback purposes. This feature is unsupported and subject to change without notice. Do not use Forge Cache in apps that handle sensitive information and customer data.

For more details, see Forge EAP, Preview, and GA.

Monitor cache metrics (EAP)

Cache metrics help you identify issues with, and optimize the performance of, the cache operations your app uses. This helps ensure your app delivers the expected results. When monitoring cache metrics, we recommend using filters to refine the results.

To view cache metrics:

  1. Access the developer console.
  2. Select the Forge app that you want to view metrics for.
  3. Select Metrics in the left menu.
  4. Select Cache in the left menu.

The image below shows cache metrics for all sites that your Forge app is currently installed on and that have had at least one invocation in the last 14 days. If there haven't been any invocations, or if the app isn't using any cache operations, the charts won't show any data.

[Image: Cache metrics screen in the developer console]

To view cache metrics on the developer console, make sure to redeploy your app with the latest version of Forge CLI by running forge deploy in your terminal.

Cache metrics

The following metrics are available for monitoring in the developer console:

Cache hit rate

Cache hit rate, or cache hit ratio, measures the cache's efficiency in temporarily storing commonly accessed data. It represents the percentage of data requests that the cache fulfills directly, eliminating the need to retrieve the data from the origin server. This metric applies to get, getAndSet, and delete operations only.

Cache hit

A cache hit occurs when a request for data is satisfied by the cache, rather than having to be retrieved from the origin server. This means that the data is already stored in the cache and can be quickly and efficiently served to the user.

Cache miss

A cache miss occurs when the cache fails to fulfill a data request, which then requires the data to be retrieved from the origin server. This can happen if data is not stored in the cache, or if the cache is full and data is evicted to make room for newer data. Cache misses can be slower and less efficient than cache hits, as they require data to be retrieved from the origin server.
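To make the hit/miss distinction concrete, here's a minimal cache-aside sketch. The cacheGet, cacheSet, and fetchFromOrigin helpers are hypothetical and used only for illustration; this is not the Forge Cache API.

```typescript
// Illustrative cache-aside flow: a hit serves data straight from the cache,
// a miss falls back to the origin server and then populates the cache.
// `cacheGet`, `cacheSet`, and `fetchFromOrigin` are hypothetical helpers.
async function getWithCache(
  key: string,
  cacheGet: (k: string) => Promise<string | undefined>,
  cacheSet: (k: string, v: string) => Promise<void>,
  fetchFromOrigin: (k: string) => Promise<string>
): Promise<{ value: string; hit: boolean }> {
  const cached = await cacheGet(key);
  if (cached !== undefined) {
    // Cache hit: the value is already in the cache.
    return { value: cached, hit: true };
  }
  // Cache miss: retrieve from the origin server and store it
  // so later requests for the same key can be hits.
  const fresh = await fetchFromOrigin(key);
  await cacheSet(key, fresh);
  return { value: fresh, hit: false };
}
```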

In general, a high cache hit rate is desirable because it demonstrates that the cache is effectively storing and serving frequently accessed data. A low cache hit rate may indicate that the cache is being used ineffectively or is too small to store all frequently accessed data.

There's no definitive answer to what makes a 'good' cache hit rate, as it depends on the cache's type and size, content popularity, and other factors. Generally, a cache hit rate of 80-95% is considered good, but this can vary depending on the situation.
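Conceptually, the hit rate is the share of requests served directly from the cache. The sketch below is a generic illustration of that calculation, not how the developer console derives the chart:

```typescript
// Hit rate = hits / (hits + misses), expressed as a percentage.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : (hits / total) * 100;
}

// Example: 900 hits and 100 misses give a 90% hit rate,
// which falls inside the commonly cited 80-95% range.
console.log(cacheHitRate(900, 100)); // 90
```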

Cache status codes

HTTP response status codes are indicators of whether or not a specific HTTP request has been successfully completed. When monitoring cache performance, you can scan the volume of the most frequent responses for each status code. The data resolution of each chart depends on the time interval you've selected.

You can see a summary of the following status codes in the developer console:

2xx - Success
  • Indicates client requests that are successfully received, understood, and processed by the server.
  • The chart shows the total volume of successful cache responses against the selected time interval.
4xx - Client errors
  • Indicates that there's an issue with the client's request, such as invalid credentials or an exceeded storage quota for the cache operation. These issues must be fixed on the client's side before retrying the request.
  • The chart shows a breakdown of the volume of the most frequent client error responses against the selected time interval.
5xx - Server errors
  • Indicates that the server is experiencing errors or is unable to fulfill a valid request. These issues must be fixed on the server's side before retrying the request (see the retry sketch below this list).
  • The chart shows a breakdown of the volume of the most frequent server error responses against the selected time interval.
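The status class usually determines whether a retry is worthwhile: 4xx errors need the request fixed first, while 5xx errors may succeed on a later attempt. The sketch below shows one way to triage errors by status class; reading a numeric statusCode off the error object is an assumption made for this illustration, not a documented property of Forge Cache errors.

```typescript
// Illustrative retry wrapper: retry only 5xx (server) errors and surface
// 4xx (client) errors immediately so the request can be corrected first.
// The numeric `statusCode` property is an assumption for this sketch.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      const status = (error as { statusCode?: number }).statusCode ?? 0;
      if (status >= 400 && status < 500) {
        // 4xx: retrying won't help until the request itself is fixed.
        throw error;
      }
      // 5xx (or unknown): wait briefly, then retry.
      await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
    }
  }
  throw lastError;
}
```

For example, wrapping a read as withRetry(() => cacheGet('some-key')) would retry transient server-side failures up to three times while surfacing client errors immediately.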

Cache response time

Cache response time is the total amount of time that it takes for a cache operation to receive a request, process the request, and send a response back to the client. Response time starts as soon as the client initiates the request and ends as soon as the client receives a response from the server.

Percentiles are often used when measuring cache response time because they give a more complete view of your cache performance data than a simple average.

When monitoring cache response time, you can see a summary of the following percentiles for the response times of all cache operations being performed by your Forge app:

P50 - Median
  • Indicates the response time value at or below which 50% of all cache response times fall.
  • This is the typical performance of your cache and is not skewed by extreme values.
P95 - 95th percentile
  • Indicates the response time value at or below which 95% of all cache response times fall.
  • If the P95 value is 170 ms, this means that the cache response times of 95% of the requests your app receives are less than or equal to 170 ms.
  • This helps you understand what the slowest 5% of users may be experiencing with their response times.
P99 - 99th percentile
  • Indicates the response time value at or below which 99% of all cache response times fall.
  • If the P99 value is 170 ms, this means that the cache response times of 99% of the requests your app receives are less than or equal to 170 ms.
  • This helps you understand what the slowest 1% of users may be experiencing with their response times.

You can also scan the latency of the 50th percentile, 95th percentile, and 99th percentile response times of cache operations in the response time chart. The data resolution of the chart depends on the time interval you've selected.
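For illustration, the sketch below shows how percentiles summarize a set of response-time samples using the nearest-rank method. This is a generic example, not how the developer console computes the metric.

```typescript
// Compute a percentile from response-time samples (in ms) using the
// nearest-rank method; monitoring systems may interpolate differently.
function percentile(valuesMs: number[], p: number): number {
  if (valuesMs.length === 0) {
    throw new Error('No samples');
  }
  const sorted = [...valuesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const samples = [12, 15, 18, 20, 22, 25, 40, 55, 90, 170];
console.log(percentile(samples, 50)); // P50 (median-ish value)
console.log(percentile(samples, 95)); // P95
console.log(percentile(samples, 99)); // P99
```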

Filters

Use these filters to refine your metrics:

  • Environment: Narrows down the metrics to a specific environment of your app.

  • Date: Narrows down the metrics based on your chosen time interval. Choose from a range of predefined values, such as the Last 24 hours, or choose a more specific time interval using the Custom option.

  • Sites: Narrows down the metrics based on the sites that your app is installed on, for example, <your-site>.atlassian.net. You can select multiple sites.

  • Cache operation: Narrows down the metrics based on the different cache operations used. All supported cache operations can be found here.

Your filter selections persist across different metrics. If you switch from one metric page to another, your chosen filters will remain active.

  • Metrics are only shown for sites with at least one invocation in the past 14 days.
  • All dates are in Coordinated Universal Time (UTC).
  • Each chart's data resolution depends on the time interval you've selected. For example, 'Last 24 hours' shows data at a 30-minute resolution, and 'Last hour' shows data at a 1-minute resolution.
  • Metrics may not always be accurate because undelivered metrics data isn’t back-filled and data sampling might be used for some metrics.

You can also bookmark the URL in your browser for quick access to metrics with specific filtering criteria. This is useful for repeated checks of the same metrics, saving the time and effort of reapplying your preferred filters.

You must use data in accordance with the privacy rights that you've obtained from your user. For more information, see the Atlassian Developer Terms and Forge Terms.
