This guide will help you keep your Forge app costs low — or avoid charges altogether — by showing you practical patterns that reduce billable usage without sacrificing functionality.
Forge's consumption-based pricing includes generous free monthly allowances, and many apps will never exceed them. But as your app scales or handles more events, function invocations, storage operations, and logging can push usage past the free tier. The techniques in this guide help you stay within those allowances — or minimise overage when you do exceed them.
Forge uses a consumption-based pricing model with generous free monthly allowances per app, and charges apply only when you exceed those thresholds. Knowing which capabilities are billable helps you focus your optimisation efforts where they matter.
See Forge platform pricing for the authoritative and up-to-date allowances and prices. A summary is included below for reference:
| Capability | Unit | Free usage allowance (monthly) | Overage price per unit ($USD) |
|---|---|---|---|
| Forge Functions: Duration | $/GB-seconds | 100,000 GB-seconds | 0.000025 |
| Key-Value Store: Reads | $/GB | 0.1 GB | 0.055 |
| Key-Value Store: Writes | $/GB | 0.1 GB | 1.090 |
| Logs: Writes | $/GB | 1 GB | 1.005 |
| SQL: Compute duration | $/hr | 1 hr | 0.143 |
| SQL: Compute requests | $/1M-requests | 100,000 requests | 1.929 |
| SQL: Data stored | $/GB-hours | 730 GB-hours | 0.00076850 |
Capabilities not listed above — including UI modules, Jira expressions, and Forge Remote — are free and do not contribute to your bill.
console.log() contributes to log write volume. Avoid verbose logging in hot paths; use log levels and strip debug logs from production builds.

You can monitor your app's usage and forecast costs in the Developer Console, and use the Forge cost estimator to preview potential charges.
Forge functions are invoked every time your UI needs data or the user performs an action. Moving logic into the browser — where it runs freely — is one of the highest-impact ways to reduce FaaS invocations.
Every call to a Forge function — even a simple one — incurs a FaaS invocation cost. A significant opportunity for cost reduction lies in moving logic that doesn't require server-side execution into your frontend UI component instead.
Forge's UI Kit and Custom UI frontends run entirely in the browser and are not subject to function invocation costs or limits. This makes the frontend an ideal place for:
**requestJira() / requestConfluence()**

Both UI Kit and Custom UI apps can use requestJira() and requestConfluence() from the Forge bridge to call Atlassian REST APIs directly from the browser, without invoking a backend function at all. This eliminates the round-trip through a Forge function for read-heavy operations like fetching issue details or page content.
```javascript
// Both UI Kit and Custom UI — import from @forge/bridge
import { requestJira } from '@forge/bridge';

// ✅ Fetch issue data directly from the browser — no function invocation needed
async function loadIssue(issueKey) {
  const response = await requestJira(`/rest/api/3/issue/${issueKey}?fields=summary,status,assignee`);
  return response.json();
}
```
Note that API requests from the frontend are always authenticated as the context user. Secure operations, such as accessing Forge storage, can only be performed via a resolver.
When to keep logic in the backend: Use Forge functions for operations that require secure access to Atlassian APIs, app storage, or secrets. Don't expose sensitive data or business logic to the client unless the context user is authorised to see it.
A common anti-pattern is fetching data inside a component that re-renders frequently. Instead, fetch once and store the result:
```javascript
// ❌ Anti-pattern: resolver called on every render
function MyPanel() {
  const result = useAction(() => invoke('getConfig'));
  return <Text>{result.data?.value}</Text>;
}

// ✅ Better: fetch once, store in state
function MyPanel() {
  const [config, setConfig] = useState(null);
  useEffect(() => {
    invoke('getConfig').then(setConfig);
  }, []); // empty dependency array = runs once on mount
  return <Text>{config?.value}</Text>;
}
```
References: Forge UI Kit | Custom UI
**useProductContext() / view.getContext() to avoid resolver invocations**

A surprisingly common anti-pattern is invoking a Forge resolver to look up contextual metadata that is already available in the frontend context — for example, the current issue key, project key, or space key. Both UI Kit and Custom UI provide ways to access this context directly in the browser, with no function invocation required.
In both UI Kit and Custom UI you can use view.getContext() from @forge/bridge:
```javascript
import { view } from '@forge/bridge';

async function getIssueKey() {
  const context = await view.getContext();
  return context.extension.issue.key;
}
```
Alternatively in UI Kit, you can also use the useProductContext() hook from @forge/react:
```javascript
import { useProductContext } from '@forge/react';

function MyPanel() {
  const context = useProductContext();
  // context.extension.issue.key, context.extension.project.key, etc.
  // are all available here — no resolver call needed
  const issueKey = context?.extension?.issue?.key;
  return <Text>Issue: {issueKey}</Text>;
}
```
You can use the context values to decide whether to call a resolver (e.g. only fetch data if the issue is in a specific project), or to pass them as arguments to a resolver rather than having the resolver look them up itself.
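As a concrete sketch, a small frontend guard can skip the resolver call entirely when the context shows it is not needed (shouldFetch and the 'MYPROJ' key are illustrative, not a Forge API):

```javascript
// Sketch: use context values in the frontend to decide whether a resolver
// call is needed at all. shouldFetch and 'MYPROJ' are illustrative names.
function shouldFetch(context) {
  // Only call the backend for issues in the project this app cares about
  return context?.extension?.project?.key === 'MYPROJ';
}

// Usage (inside a UI Kit component or Custom UI frontend):
//   const context = await view.getContext();
//   if (shouldFetch(context)) {
//     // Pass the key as an argument so the resolver doesn't look it up again
//     const data = await invoke('getData', { issueKey: context.extension.issue.key });
//   }
```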
References: useProductContext hook | view.getContext() (Custom UI)
Scheduled and event-driven triggers are one of the most common sources of unnecessary invocations. The goal is to invoke your function only when there is real work to do.
Scheduled triggers are one of the most common sources of unnecessary compute consumption. Each execution of a scheduled trigger invokes a Forge function — and if the trigger runs frequently but rarely has meaningful work to do, the invocations are wasted.
Before setting a schedule, ask: how often does this job actually need to run?
In your manifest.yml, prefer longer intervals wherever possible:
```yaml
modules:
  scheduledTrigger:
    - key: my-scheduled-trigger
      function: my-function
      interval: day # Options: fiveMinutes, hour, day, week
```
If your scheduled trigger is polling for changes (e.g. checking whether issues have been updated), replace it with a product event trigger that fires only when the relevant event actually occurs. This eliminates unnecessary invocations entirely.
```yaml
modules:
  trigger:
    - key: issue-updated-trigger
      function: my-function
      events:
        - avi:jira:updated:issue # fires only when an issue is updated
```
When implementing map-reduce operations or integrating with external systems, a common pattern is to use a resolver to poll Forge storage until a certain value is updated. This can cause unnecessary compute usage and a lengthy wait for the user. A more efficient alternative is to use Forge Realtime to push events from your backend to your UI. Forge Realtime can also be used to send events between different instances of your UI (e.g. to keep multiple users in sync).
If you must use a scheduled trigger, add a lightweight guard at the start of your function to check whether there is actually work to do before proceeding with expensive operations:
```javascript
import { storage } from '@forge/api';

export async function run(event, context) {
  // Read a lightweight flag from storage before doing expensive work
  const lastProcessedAt = await storage.get('lastProcessedAt');
  const dataChangedAt = await getDataLastChangedTimestamp();

  if (dataChangedAt <= lastProcessedAt) {
    // Nothing has changed since last run — exit early
    return;
  }

  // Only now do the expensive processing
  await processData();
  await storage.set('lastProcessedAt', Date.now());
}
```
References: Scheduled Trigger | Product Events
When your app listens to product events (such as issue updated, page created, or sprint started), Forge invokes your function for every matching event across all users and content in the installed site. At scale, this can result in a large number of invocations — many of which your function will immediately discard because they don't meet your business criteria.
The key principle is: reject irrelevant events as early and cheaply as possible.
Where available, use manifest-level filtering to prevent Forge from invoking your function at all for events you don't care about. This is the most cost-effective option because the function is never invoked:
```yaml
modules:
  trigger:
    - key: issue-trigger
      function: handle-issue-event
      events:
        - avi:jira:updated:issue
      filter:
        # Only invoke the function for issues in a specific project
        expression: "event.issue.fields.project.key == 'MYPROJ'"
```
When manifest-level filtering isn't sufficient, add a fast check at the very top of your function handler before any expensive operations:
```javascript
export async function run(event, context) {
  const { issue } = event;

  // Exit early if this isn't an issue type we care about
  if (issue.fields.issuetype.name !== 'Bug') return;

  // Exit early if the priority isn't high enough
  if (!['Critical', 'Blocker'].includes(issue.fields.priority.name)) return;

  // Only now do expensive work
  await processHighPriorityBug(issue);
}
```
A common source of wasted invocations is an app that modifies a Jira entity (for example, updating an issue field) and then receives and processes the event that its own update generated. This creates unnecessary work and can even cause feedback loops.
Use the ignoreSelf filter property in your manifest to tell Forge to suppress events generated by your app's own actions:
```yaml
modules:
  trigger:
    - key: issue-updated-trigger
      function: handle-issue-update
      events:
        - avi:jira:updated:issue
      filter:
        ignoreSelf: true # Don't invoke this function for updates made by our own app
        expression: "event.issue.fields.project.key == 'MYPROJ'"
```
Jira only: ignoreSelf currently only works with Jira events. It is not yet supported for Confluence or other product events.
References: Product Events reference | Trigger manifest reference
If your app periodically polls an external service to check for updates — for example, calling a third-party API every few minutes to see if anything has changed — consider replacing this pattern with a Forge web trigger.
A web trigger gives your app a publicly accessible HTTPS URL. You can register this URL with the external service as a webhook, so it calls you only when something actually changes. This eliminates all the empty polling invocations. Note that web trigger invocations themselves incur no per-request or network charges, but the web trigger function's runtime is still billed as compute.
```yaml
modules:
  webtrigger:
    - key: my-inbound-webhook
      function: handle-webhook
functions:
  - key: handle-webhook
    handler: index.handleWebhook
```
```javascript
// Your web trigger handler receives the inbound HTTP request
export async function handleWebhook(request) {
  const payload = JSON.parse(request.body);

  // Process the event from the external service
  await processExternalEvent(payload);

  return { statusCode: 200, body: 'OK' };
}
```
References: Web Trigger | Web Trigger manifest reference
Forge provides built-in storage options that can help you avoid redundant API calls and unnecessary function work. Understanding what's free and what's billed helps you use storage effectively without introducing new costs.
The Forge Storage API provides a hosted key-value store available to all Forge apps. Note that KVS reads and writes are billable above the free tier (0.1 GB free per month for reads and writes respectively — see the pricing table above). That said, KVS storage does not incur function invocation costs, and storing data here can help you avoid redundant API calls to Atlassian products.
A typical use case for Forge Storage is caching the result of an expensive API call:
```javascript
import { storage } from '@forge/api';

// Cache an expensive API result for 1 hour
const CACHE_TTL_MS = 60 * 60 * 1000;

export async function getCachedData(key) {
  const cached = await storage.get(`cache:${key}`);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL_MS) {
    return cached.value; // serve from cache — no extra API call needed
  }

  const freshData = await fetchExpensiveData(key);
  await storage.set(`cache:${key}`, { value: freshData, timestamp: Date.now() });
  return freshData;
}
```
Jira and Confluence support entity properties (also called content properties in Confluence) — metadata storage attached directly to issues, projects, users, pages, and other entities. Entity properties are stored by the product itself, so they don't consume Forge Storage quota and are entirely free to use.
A typical use case for entity properties is storing app metadata, such as processing state, directly against the entity it describes:
```javascript
import { asApp, route } from '@forge/api';

// Store metadata against a Jira issue — uses product storage, not Forge storage
await asApp().requestJira(route`/rest/api/3/issue/${issueId}/properties/my-app-data`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ processedAt: Date.now(), status: 'done' }),
});
```
Visibility: Some entity property types are visible to other apps and end-users via the REST API, so they are not suitable for storing sensitive or confidential data. Use Forge Storage (with appropriate access controls) or Forge Remote with external storage for sensitive information.
Size limits: Entity properties have a maximum value size of 32 KB per property. For larger payloads, use Forge Storage or Forge Remote with external storage.
KVS reads and writes are billed by data volume once you exceed the free tier. Avoid patterns that perform dozens of individual storage reads/writes in a loop — batch your reads and writes where possible, and use the Custom Entities API for structured querying rather than iterating over many keys.
References: Storage API | Jira entity properties | Confluence content properties
Some data fetched via asApp() API calls is the same for all users and changes infrequently — for example, custom field IDs, project configurations, field schemes, or workflow statuses. Rather than making a live API call on every function invocation, cache this data in Forge Storage with a time-to-live (TTL) and serve subsequent invocations from the cache.
```javascript
import { storage } from '@forge/api';

const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour

async function getCustomFieldId(fieldName) {
  const cacheKey = `field-id:${fieldName}`;
  const cached = await storage.get(cacheKey);
  if (cached && Date.now() - cached.cachedAt < CACHE_TTL_MS) {
    return cached.fieldId; // serve from cache — no API call needed
  }

  // Cache miss: fetch from API and store
  const fields = await fetchAllFields();
  const field = fields.find(f => f.name === fieldName);
  await storage.set(cacheKey, { fieldId: field.id, cachedAt: Date.now() });
  return field.id;
}
```
This pattern is particularly effective for data that is read on every invocation but updated rarely — the cache hit rate will be very high, and you'll typically make one API call per hour instead of one per invocation.
Because KVS writes are ~20× more expensive than reads, prefer longer TTLs where the data allows. A 1-hour TTL means 1 write per hour; a 1-minute TTL means 60 writes per hour — at 20× the cost.
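To make the trade-off concrete, here is a back-of-the-envelope comparison, assuming one cache refresh (one KVS write) per TTL expiry over a 30-day month:

```javascript
// Writes per month for a cache refreshed once per TTL window (30-day month)
const writesPerMonth = (ttlMinutes) => (30 * 24 * 60) / ttlMinutes;

const hourlyTtl = writesPerMonth(60); // 720 writes per month
const minuteTtl = writesPerMonth(1);  // 43,200 writes per month (60x more)
```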
References: Storage API
**storage.query() instead of iterating over keys**

When working with the Forge Storage Custom Entities API, avoid patterns that fetch all stored items into memory and then filter them in your function code. Instead, use storage.query() to push filtering, sorting, and pagination down to the storage layer — this reduces both the volume of data read (lowering KVS read costs) and the amount of work your function has to do.
```javascript
import { storage, startsWith } from '@forge/api';

// ❌ Anti-pattern: fetch all items and filter in code
const allItems = await storage.query().getMany();
const activeItems = allItems.results.filter(item => item.value.status === 'active');

// ✅ Better: use index-based querying to fetch only what you need
// (requires defining an index on the 'status' field in your entity schema)
const activeItems = await storage.query()
  .index('by-status')
  .where('status', startsWith('active'))
  .limit(50)
  .getMany();
```
Even without custom indexes, always pass a .limit() to bound the maximum number of items returned per call, and use cursor-based pagination rather than loading all pages at once.
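One way to structure this is a small helper that walks cursors one bounded page at a time. Here fetchPage is a placeholder you would implement with storage.query(); the { results, nextCursor } shape and the maxPages safety bound are assumptions of this sketch:

```javascript
// Walk a cursor-paginated source one bounded page at a time.
// fetchPage(cursor) is assumed to return { results, nextCursor }.
async function fetchAllPages(fetchPage, maxPages = 10) {
  const all = [];
  let cursor;
  for (let page = 0; page < maxPages; page++) {
    const { results, nextCursor } = await fetchPage(cursor);
    all.push(...results);
    if (!nextCursor) break; // no more pages
    cursor = nextCursor;
  }
  return all;
}
```

With Forge storage, fetchPage might wrap a call such as `storage.query().limit(50).cursor(cursor).getMany()` (a sketch; check the Custom Entities query reference for the exact builder methods).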
References: Custom Entities API
Every API call within a Forge function consumes time and contributes to function duration costs. Fetching only what you need, using bulk endpoints, and pre-filtering at the source are the most effective ways to reduce API-related costs.
One of the most impactful performance and cost optimisations is replacing multiple individual API calls with a single bulk API call. Each API call within a Forge function consumes time (increasing function duration) and contributes to rate limit consumption. Reducing the number of API calls directly reduces the cost and risk of your function.
A classic anti-pattern is the "N+1 problem": fetching a list of items and then making a separate API call for each item to retrieve its details:
```javascript
// ❌ Anti-pattern: N+1 API calls
const issues = await searchIssues('project = MYPROJ');
for (const issue of issues) {
  // This makes one API call per issue — very expensive!
  const details = await getIssue(issue.id);
  await process(details);
}
```
Many Atlassian APIs offer bulk endpoints that can fetch multiple resources in a single request. Use these wherever they exist. For example, when searching for issues, the Jira Search API returns full issue objects with all the fields you request — no need for follow-up calls:
```javascript
// ✅ Better: fetch all details in one or a few bulk requests
const issues = await searchIssues('project = MYPROJ', {
  // Request all the fields you need upfront
  fields: ['summary', 'status', 'assignee', 'priority', 'description'],
});

// No additional per-issue calls needed — all fields already loaded
for (const issue of issues) {
  await process(issue);
}
```
For search-style APIs that support field selection, always specify exactly the fields your function needs. This reduces response payload size, improving speed and reducing memory usage:
```javascript
// ❌ Fetches every field — wastes bandwidth and memory
const response = await asApp().requestJira(route`/rest/api/3/search?jql=project=MYPROJ`);

// ✅ Request only the fields you'll actually use
const response = await asApp().requestJira(
  route`/rest/api/3/search?jql=project=MYPROJ&fields=summary,status,assignee`
);
```
| Use case | Individual approach (avoid) | Bulk approach (prefer) |
|---|---|---|
| Fetch multiple Jira issues | GET /rest/api/3/issue/{issueId} × N | POST /rest/api/3/issue/bulkfetch |
| Update multiple issue fields | PUT /rest/api/3/issue/{issueId} × N | POST /rest/api/3/bulk/issues/fields/edit |
| Fetch user details | GET /rest/api/3/user?accountId=X × N | GET /rest/api/3/user/bulk |
| Search issues with full details | Search then fetch each issue | POST /rest/api/3/search with fields parameter |
| Fetch multiple Confluence pages | GET /wiki/api/v2/pages/{id} × N | GET /wiki/api/v2/pages?id=X&id=Y&... |
When you genuinely need to make multiple independent API calls, use Promise.all to parallelise them rather than awaiting them sequentially. This reduces total function duration without increasing the number of invocations:
Rate limits: Parallelisation is more efficient up to a point, but sending too many concurrent requests can trigger rate limit errors (HTTP 429). Avoid parallelising large numbers of requests at once — send at most 5–10 concurrent requests and batch the rest. See the Jira rate limiting documentation for details.
```javascript
// ❌ Sequential: duration = sum of all call durations
const issueData = await getIssue(issueId);
const userData = await getUser(userId);
const projectData = await getProject(projectId);

// ✅ Parallel: duration ≈ slowest single call
const [issueData, userData, projectData] = await Promise.all([
  getIssue(issueId),
  getUser(userId),
  getProject(projectId),
]);
```
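A small helper, illustrative rather than part of @forge/api, can cap concurrency by running requests in fixed-size batches:

```javascript
// Run an async worker over items, with at most batchSize requests in flight.
// Each batch runs in parallel; batches themselves run sequentially, which
// keeps concurrency bounded to respect rate limits.
async function inBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

// e.g. fetch many issues with at most 5 concurrent requests:
//   const details = await inBatches(issueIds, 5, (id) => getIssue(id));
```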
References: Jira Search API | Jira Bulk Issue Fetch | Confluence Pages API
Bulk operations — such as processing all issues in a project, all pages in a space, or all users in an organisation — can be extremely expensive in terms of function duration and API calls. Without good filtering, a single invocation can iterate over thousands of items unnecessarily, rapidly consuming compute quota.
The key principle is: fetch only the items you actually need to process, not all items and then filter in code. Spreading work across more invocations through batching may make each individual invocation cheaper, but the total compute consumed is the same — the real saving comes from doing less work in the first place.
If your function iterates over a set of items, tests a condition on each, and then only acts on items that meet the condition, consider whether you can express that condition as a JQL query or API filter — so that only the relevant items are returned in the first place:
```javascript
// ❌ Anti-pattern: fetch all issues, filter in code
const allIssues = await searchIssues('project = MYPROJ');
for (const issue of allIssues) {
  if (issue.fields.status.name === 'Done' && issue.fields.assignee === null) {
    await processIssue(issue); // only a small subset actually gets processed
  }
}

// ✅ Better: express the condition in JQL so only matching issues are returned
const targetIssues = await searchIssues(
  'project = MYPROJ AND status = Done AND assignee is EMPTY'
);
for (const issue of targetIssues) {
  await processIssue(issue); // every issue here needs processing
}
```
For scheduled jobs that process "all items that need attention", use a timestamp filter to restrict the query to items that have changed since the last run:
```javascript
import { storage } from '@forge/api';

export async function run(event, context) {
  const lastRunAt = (await storage.get('lastRunAt')) || '2000-01-01';

  // Only fetch issues updated since the last run
  const issues = await searchIssues(
    `project = MYPROJ AND updated >= "${lastRunAt}" ORDER BY updated ASC`
  );
  for (const issue of issues) {
    await processIssue(issue);
  }

  // Stores the date only; issues updated later the same day are re-checked
  // on the next run rather than missed
  await storage.set('lastRunAt', new Date().toISOString().split('T')[0]);
}
```
References: Jira Issue Search API | Async Events API | Forge platform limits
**maxResults on paginated API calls**

Atlassian REST APIs that return paginated results will use a default page size when maxResults is not specified — often 50 or 100 items. If your function only needs the first few results, fetching the full default page wastes bandwidth, increases response payload size, and extends function duration (all of which increase compute cost).
Always pass an explicit maxResults limit matched to your actual need:
```javascript
// ❌ Fetches up to 50 issues by default — wasteful if you only need 5
const response = await asApp().requestJira(
  route`/rest/api/3/search?jql=project=MYPROJ ORDER BY created DESC`
);

// ✅ Fetch only as many items as you actually need
const response = await asApp().requestJira(
  route`/rest/api/3/search?jql=project=MYPROJ ORDER BY created DESC&maxResults=5`
);
```
Conversely, when you do need all items, use the maximum allowed page size (typically 100 for Jira search) to minimise the number of paginated requests required — fewer requests means shorter function duration.
```javascript
// ✅ Use maximum page size when fetching all results to minimise round trips
const response = await asApp().requestJira(
  route`/rest/api/3/search?jql=project=MYPROJ&maxResults=100`
);
```
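When every page is genuinely needed, the paging loop can be factored out as a sketch. Here searchPage stands in for a single call to the search endpoint, and the { issues, total } response shape is an assumption mirroring the Jira search API:

```javascript
// Fetch every result at the maximum page size to minimise round trips.
// searchPage(startAt, maxResults) is assumed to return { issues, total }.
async function fetchAllIssues(searchPage, pageSize = 100) {
  const all = [];
  let startAt = 0;
  let total = Infinity; // learned from the first response
  while (startAt < total) {
    const page = await searchPage(startAt, pageSize);
    all.push(...page.issues);
    total = page.total;
    startAt += pageSize;
  }
  return all;
}
```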
References: Jira Search API | Confluence REST API v2
Once you've reduced the number of invocations, the next lever is reducing the cost of each individual invocation — by tuning memory, reducing logging overhead, and offloading the heaviest work off-platform.
Forge function compute costs are billed in GB-seconds: the amount of memory allocated to your function multiplied by how long it runs. This means memory is a direct cost lever — allocating more memory than your function needs wastes money, but allocating too little can slow execution (ironically increasing duration and therefore cost).
The default memory allocation for Forge functions is 512 MB. You can adjust this per function in your manifest.yml using the memoryMiB property:
```yaml
functions:
  - key: my-lightweight-function
    handler: index.run
    memoryMiB: 128 # Reduce for simple, low-memory functions
  - key: my-data-processing-function
    handler: processor.run
    memoryMiB: 1024 # Increase if the function processes large payloads
```
The cost formula is: GB-seconds = (memoryMiB ÷ 1024) × duration in seconds. A function using 512 MB for 1 second costs the same as a function using 256 MB for 2 seconds — so reducing duration is just as valuable as reducing memory.
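As a worked example of this formula:

```javascript
// GB-seconds = (memoryMiB / 1024) × duration in seconds
const gbSeconds = (memoryMiB, durationSeconds) => (memoryMiB / 1024) * durationSeconds;

const perInvocation = gbSeconds(512, 1);        // 0.5 GB-seconds
const sameCost = gbSeconds(256, 2);             // also 0.5 GB-seconds
const freeInvocations = 100000 / perInvocation; // 200,000 such invocations fit in the free tier
```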
References: Function manifest reference | Forge platform pricing
Every call to console.log() contributes to billable log write volume. In functions that are invoked frequently — such as product event triggers or resolvers in widely-used apps — even routine debug logging can accumulate significant log volume.
Gate debug-level logging behind an environment variable or a Forge Storage flag, so it's only active when you explicitly need it:
```javascript
const DEBUG = process.env.DEBUG_LOGGING === 'true';

export async function run(event, context) {
  if (DEBUG) console.log('Event received:', JSON.stringify(event));

  // ... function logic ...

  if (DEBUG) console.log('Processing complete');
}
```
In production, restrict logging to errors and meaningful state changes. Avoid logging the full payload of every event or API response — these can be very large and are the primary driver of log volume:
```javascript
// ❌ Logs the full event payload on every invocation — expensive at scale
console.log('Processing event:', JSON.stringify(event));

// ✅ Log only what you need for debugging production issues
console.log(`Processing issue ${event.issue?.key} (type: ${event.issue?.fields?.issuetype?.name})`);
```
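If you want finer control than a single DEBUG flag, a minimal leveled logger is one option. The LOG_LEVEL variable and the level names here are illustrative, not a Forge API:

```javascript
// Minimal leveled logger: emit a line only if its level passes the threshold.
// LOG_LEVEL is an illustrative environment variable; default threshold is 'warn'.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const threshold = LEVELS[process.env.LOG_LEVEL] ?? LEVELS.warn;

function log(level, ...args) {
  const enabled = LEVELS[level] <= threshold;
  if (enabled) console.log(`[${level}]`, ...args);
  return enabled; // whether the line was actually written
}
```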
References: Forge platform pricing | Logging guidelines
Forge Remote allows your Forge app to delegate processing to an external backend service that you operate — such as a service running on AWS, GCP, Azure, or any HTTPS-accessible endpoint. This fundamentally changes the cost model: instead of consuming Forge FaaS invocations for compute-intensive work, that work is handled by your own infrastructure.
With Forge Remote, your manifest.yml declares a remote backend endpoint. Forge routes calls from your resolver to that endpoint, passing the Forge-managed auth token so your backend can verify the request:
```yaml
remotes:
  - key: my-remote-backend
    baseUrl: https://my-backend.example.com

modules:
  jira:issuePanel:
    - key: my-panel
      resource: main
      resolver:
        endpoint: my-remote-backend # routed to the remote backend, not a Forge function
```
Your remote backend receives the invocation and handles it — your Forge function is not executed at all for these calls, saving FaaS invocations.
Forge Remote also enables you to use your own databases, blob stores, or caches as a storage backend.
Think carefully before adopting Forge Remote. Using a remote backend means you become wholly responsible for operating, securing, scaling, and maintaining that infrastructure — including patching, uptime, and incident response. It also makes your app ineligible for "Runs on Atlassian" status, which may be important if you plan to distribute it on the Atlassian Marketplace.
For most teams building in-house tools or Marketplace apps, keeping compute and storage workloads on-platform is by far the simpler option. Forge Remote is best suited to cases where you have an existing backend you need to integrate with, or requirements that genuinely cannot be met by the on-platform capabilities.
References: Forge Remote
We hope this guide has provided some useful tips for optimising your Forge app costs, and potentially also improving performance. But please take these tips with a grain of salt. As Donald Knuth said in 1974:
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimisation is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
This sage advice also applies to Forge apps.
Forge has a generous free tier. The vast majority of private apps developed by customers for use in a single site — even written naively without special regard for cost — will never exceed it. Some of the practices above are almost universally beneficial: filtering events, using bulk APIs, and writing effective queries will typically simplify app logic while also making it more efficient. Other patterns such as introducing caches, stripping log statements, and offloading compute and storage to Forge Remote introduce real complexity and should only be adopted when profiling shows a clear benefit.
Before making any changes, we recommend profiling your app and making careful observations of your logs and usage metrics to determine which areas would benefit from optimisation, and whether the complexity trade-off is worthwhile.
Happy forging!