Forge apps run in a multi-tenant environment where the same runtime process can serve multiple Atlassian customers (tenants). This means module-level variables and in-memory caches are shared across tenant invocations unless you explicitly scope data to a single invocation.
This guide explains the risk, shows unsafe and safe code patterns, and provides actionable checklists for auditing your app.
Storing tenant-specific data in module-level (global) variables is one of the most common causes of cross-tenant data leaks in Forge apps. Data written during one tenant's invocation may be visible to a subsequent invocation for a different tenant if it shares the same process.
The Forge runtime is built on AWS Lambda. Lambda optimizes performance by reusing warm execution environments (process containers) across multiple invocations. When a warm environment is reused:

- Module-level (top-of-file) code runs only once, when the environment is first initialized.
- Module-level variables retain whatever values previous invocations left in them.
- The next invocation to run in that environment may belong to a different tenant than the one that populated that state.
This is a standard characteristic of serverless runtimes, not a Forge-specific bug. However, it conflicts with the intuitive assumption that each function call starts with a clean slate.
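The effect of warm reuse can be demonstrated outside Forge with plain Node.js. The sketch below simulates two tenants' invocations sharing one process; the `handler` function and tenant IDs are illustrative stand-ins, not Forge APIs:

```javascript
// Module-level state: initialized once per process, not once per invocation
let invocationCount = 0;

// Simulated handler: in a warm environment, every call sees the
// invocationCount left behind by earlier calls
function handler(tenantId) {
  invocationCount += 1; // survives between calls in the same process
  return { tenantId, invocationCount };
}

// Two invocations from different "tenants" sharing one warm process:
console.log(handler('tenant-a')); // { tenantId: 'tenant-a', invocationCount: 1 }
console.log(handler('tenant-b')); // { tenantId: 'tenant-b', invocationCount: 2 }
```

The second call observes state written during the first, which is exactly the mechanism behind the unsafe patterns below.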
The shared responsibility model requires developers to keep tenant data isolated. Atlassian ensures that tenant A cannot call tenant B's app, but you are responsible for ensuring that in-memory state from one invocation does not leak into another.
The following patterns all share the same root cause: mutable state is declared at module scope and updated during a handler invocation. Because Forge reuses warm execution environments, that state survives into the next invocation, which may belong to a completely different tenant.
The following pattern is common in Node.js but is unsafe in Forge because the module-level `cache` object persists across invocations that may belong to different tenants.
```javascript
import api from '@forge/api';

// UNSAFE: module-level cache shared across all tenants
const cache = {};

export async function handler(context) {
  if (cache['currentUser']) {
    // This could return another tenant's user data!
    return cache['currentUser'];
  }
  const response = await api.asUser().requestJira('/rest/api/3/myself');
  const user = await response.json();
  cache['currentUser'] = user; // Leaks to the next invocation
  return user;
}
```
Why it's unsafe: if this warm process is reused for a different tenant's invocation, `cache['currentUser']` still holds the previous tenant's user object.
```javascript
// UNSAFE: shared object that accumulates data from multiple tenants
const issueCache = {};

export async function onIssueCreated({ payload, context }) {
  // Issue keys are NOT globally unique - the same key can exist
  // in different tenants' Jira instances
  issueCache[payload.issue.key] = payload.issue;
}
```
Each pattern below eliminates the risk in a different way: avoiding module-level state entirely, scoping it to a tenant identifier, or delegating persistence to Forge Storage, which is tenant-scoped by design.
The safest approach is to never store tenant data in module-level variables. Compute or fetch everything within the handler function itself.
```javascript
import api from '@forge/api';

// SAFE: all state is local to the function invocation
export async function handler(context) {
  // Fetch fresh data every invocation - no cross-tenant risk
  const response = await api.asUser().requestJira('/rest/api/3/myself');
  const user = await response.json();
  return user;
}
```
If you need in-process caching for performance, always key the cache by a tenant-specific identifier, such as `cloudId`. This ensures that a warm process reused for tenant B will only find tenant B's data.
```javascript
import api from '@forge/api';

// SAFE: cache partitioned by cloudId (tenant identifier)
const cache = {};

export async function handler(context) {
  const { cloudId } = context;
  if (cache[cloudId]?.currentUser) {
    return cache[cloudId].currentUser;
  }
  const response = await api.asUser().requestJira('/rest/api/3/myself');
  const user = await response.json();
  // Scope the cached value to this specific tenant
  if (!cache[cloudId]) {
    cache[cloudId] = {};
  }
  cache[cloudId].currentUser = user;
  return user;
}
```
Even with tenant-partitioned caches, cached data may become stale across invocations. Consider adding a time-to-live (TTL) check or using Forge Storage for durable, cross-invocation data that is automatically scoped per installation.
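One way to sketch that TTL check on top of a tenant-partitioned cache is shown below. The `CACHE_TTL_MS` constant and the `getCached`/`setCached` helpers are hypothetical illustrations, not Forge APIs:

```javascript
// Hypothetical TTL-aware, tenant-partitioned in-process cache
const CACHE_TTL_MS = 60 * 1000; // expire entries after 60 seconds
const cache = {};

function getCached(cloudId, key) {
  const entry = cache[cloudId]?.[key];
  if (!entry) return undefined;
  if (Date.now() - entry.storedAt > CACHE_TTL_MS) {
    delete cache[cloudId][key]; // evict stale data
    return undefined;
  }
  return entry.value;
}

function setCached(cloudId, key, value) {
  if (!cache[cloudId]) cache[cloudId] = {};
  // Record when the value was stored so reads can check freshness
  cache[cloudId][key] = { value, storedAt: Date.now() };
}
```

Every read goes through the tenant's `cloudId` partition first and the TTL check second, so a warm process can never serve another tenant's data, and within a tenant it never serves data older than the TTL.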
Forge's hosted storage capabilities ([Key-Value Store](/platform/forge/runtime-reference/storage-api-basic/), Custom Entity Store, and SQL) are automatically scoped per app installation, which makes them inherently tenant-safe. Prefer storage over in-memory caches when you need data to persist across invocations.
```javascript
import api from '@forge/api';
import { kvs } from '@forge/kvs';

// SAFE: Forge Storage is scoped per installation (per tenant)
export async function handler(context) {
  const cachedUser = await kvs.get('currentUser');
  if (cachedUser) {
    return cachedUser;
  }
  const response = await api.asUser().requestJira('/rest/api/3/myself');
  const user = await response.json();
  await kvs.set('currentUser', user);
  return user;
}
```
Module-level state that is read-only and not tenant-specific is safe.
```javascript
// SAFE: read-only configuration constants - not tenant data
const MAX_RETRIES = 3;
const API_VERSION = 'v3';

export async function handler(context) {
  // MAX_RETRIES and API_VERSION are the same for all tenants
  // and are never modified
}
```
Use this checklist to review your Forge app for data isolation risks:
- No mutable, tenant-specific data is stored in module-level variables.
- Any in-process cache is keyed by a tenant identifier such as `cloudId` or `installationId`.
- Data that must persist across invocations lives in Forge Storage, not in memory.

Forge does not currently provide built-in lint rules to detect unsafe global state. However, you can configure your own ESLint rules to flag patterns that are commonly associated with cross-tenant data leaks:

- `let` or `var` declarations at module scope.
- Writes to module-level `const` objects (e.g., `cache[key] = value`).
- References to `global` or `globalThis`.

Consider adding the `no-restricted-syntax` ESLint rule or a custom plugin to catch these patterns during development.
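As a starting point, a hypothetical `.eslintrc.js` fragment using the built-in `no-restricted-syntax` rule might look like this; the AST selectors and messages are illustrative and will likely need tuning for your codebase:

```javascript
// Hypothetical ESLint configuration fragment - not an official Forge ruleset
module.exports = {
  rules: {
    'no-restricted-syntax': [
      'error',
      {
        // Flag mutable `let` declarations at the top level of a module
        selector: "Program > VariableDeclaration[kind='let']",
        message:
          'Avoid mutable module-level state in Forge handlers; it persists across tenant invocations.',
      },
      {
        // Flag `var` declarations at the top level of a module
        selector: "Program > VariableDeclaration[kind='var']",
        message:
          'Avoid mutable module-level state in Forge handlers; it persists across tenant invocations.',
      },
    ],
  },
};
```

Selectors cannot easily catch every unsafe pattern (for example, mutating a module-level `const` object), so treat lint rules as a safety net alongside the manual checklist above, not a replacement for it.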