Forge LLMs is available through Forge's Early Access Program (EAP). EAP grants selected users early testing access for feedback; APIs and features in EAP are experimental, unsupported, subject to change without notice, and not recommended for production — sign up here to participate.
For more details, see Forge EAP, Preview, and GA.
Forge LLMs lets your Forge app call Atlassian‑hosted large language models (LLMs) to add secure AI features without leaving the Atlassian platform. Apps using this API are badged as Runs on Atlassian, indicating they leverage Atlassian’s security, compliance, and scalability. The API provides optimized, governed access to supported models so you can focus on creating innovative AI experiences while Atlassian handles model integration and infrastructure.
See the LLM module reference for details on the llm module for your manifest.yml.
Important:
- The llm module is required to enable Forge LLMs. When you add the llm module to an app's manifest.yml, it triggers a major version upgrade and requires administrators of existing installations to review and approve the update.
- The app retains its Runs on Atlassian eligibility after the module is added.
- During the EAP, you are blocked from deploying your app to the production environment and cannot list the app publicly on Marketplace.
The @forge/llm SDK gives you a lightweight, purpose-built client for invoking Atlassian-hosted LLMs directly from Forge runtime functions. Use chat() for structured multi-turn exchanges, provide tool definitions so the model can call typed functions, and inspect the returned usage data to guide adaptive behaviour.
For runnable examples (tool wiring, retries, error handling), see the Forge LLMs tutorials and example apps section above.
```javascript
import { chat } from '@forge/llm';

try {
  const response = await chat({
    model: 'claude-3-7-sonnet-20250219',
    messages: [
      { role: 'user', content: 'Write a short poem about Forge LLMs.' },
    ],
  });
  console.log('#### LLM response:', JSON.stringify(response));
} catch (err) {
  console.error('#### LLM request failed:', { error: err.context?.responseText });
  throw err;
}
```
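Transient failures (such as rate limiting) can often be retried with backoff. Below is a minimal sketch of a retry wrapper around chat(); the attempt count, delay values, and the assumption that failed requests throw are illustrative choices, not part of the SDK:

```javascript
import { chat } from '@forge/llm';

// Hypothetical helper: retries a chat request with exponential backoff.
// maxAttempts and baseDelayMs are illustrative defaults, not SDK settings.
async function chatWithRetry(prompt, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await chat(prompt);
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      // Wait 500ms, 1000ms, 2000ms, ... before retrying.
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1))
      );
    }
  }
}
```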
The SDK requires the llm module to be defined in your manifest.yml. If the SDK is used without declaring this module, linting will fail with an error like:
```
Error: LLM package is used but 'llm' module is not defined in the manifest
```
The Forge CLI linter can automatically fix your manifest (for example, by running forge lint --fix). Example of the corrected manifest.yml:
```yaml
modules:
  llm:
    - key: llm-app
      model:
        - claude
```
Please refer to the LLM module reference for details on how to define the module.
Please consult the @forge/llm package for the most up-to-date request schema definitions.
```typescript
interface Prompt {
  model: string;
  messages: {
    role: "system" | "user" | "assistant" | "tool";
    content: string | ContentPart[];
  }[];
  max_completion_tokens?: number;
  temperature?: number;
  top_p?: number;
  tools?: {
    type: "function";
    function: {
      name: string;
      description: string;
      parameters: object;
    };
  }[];
  tool_choice?:
    | "auto"
    | "none"
    | "required"
    | { type: "function"; function: { name: string } };
}
```
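As a sketch of how the tools and tool_choice fields fit together, the request below defines a single function tool. The getWeather function, its parameters, and the response field read at the end (usage) are assumptions for illustration; consult the @forge/llm package for the actual response shape:

```javascript
import { chat } from '@forge/llm';

// Sketch only: getWeather is a hypothetical tool, and the usage field read
// below is assumed; verify both against the @forge/llm package docs.
const response = await chat({
  model: 'claude-sonnet-4-5-20250929',
  messages: [{ role: 'user', content: 'What is the weather in Sydney?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'getWeather',
        description: 'Look up the current weather for a city.',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
  tool_choice: 'auto', // let the model decide whether to call the tool
});

// Token usage can guide adaptive behaviour (field name assumed).
console.log('usage:', JSON.stringify(response?.usage));
```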
The following request validation rules apply to specific models:
| Rule | Models |
|---|---|
| When adjusting sampling parameters, modify either temperature or top_p. Do not modify both at the same time. | claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929 |
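For example, a request to one of the listed models would set only one sampling parameter (a sketch; the prompt content and temperature value are illustrative):

```javascript
import { chat } from '@forge/llm';

// Valid: only temperature is adjusted for this model.
const response = await chat({
  model: 'claude-sonnet-4-5-20250929',
  messages: [
    { role: 'user', content: 'Summarise Forge LLMs in one sentence.' },
  ],
  temperature: 0.2, // set temperature OR top_p for this model, never both
});
```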
We plan to launch with support for three Claude variants: Sonnet, Opus, and Haiku. You choose the model per request, allowing you to balance latency, capability, and cost for each use case.
| Model ID | Variants | Family |
|---|---|---|
| claude-3-5-haiku-20241022 | Haiku | Claude |
| claude-3-7-sonnet-20250219 | Sonnet | Claude |
| claude-opus-4-20250514 | Opus | Claude |
| claude-sonnet-4-20250514 | Sonnet | Claude |
| claude-sonnet-4-5-20250929 | Sonnet | Claude |
| claude-haiku-4-5-20251001 | Haiku | Claude |
| claude-opus-4-1-20250805 | Opus | Claude |
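Because the model is chosen per request, an app can route work by need. The sketch below pairs a fast Haiku model with a more capable Opus model using IDs from the table above; the routing rule itself is illustrative:

```javascript
import { chat } from '@forge/llm';

// Illustrative routing: a fast, lower-cost model for simple tasks and a
// more capable model for complex ones. Model IDs are from the table above.
async function summarise(text, complex = false) {
  return chat({
    model: complex ? 'claude-opus-4-1-20250805' : 'claude-haiku-4-5-20251001',
    messages: [{ role: 'user', content: `Summarise: ${text}` }],
  });
}
```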
AI models evolve quickly, so specific versions may change before launch. Initially only text input/output is supported; multimodal support may be considered later.
Administrators will be informed, both on the Marketplace listing and during installation, when an app uses Forge LLMs. Adding Forge LLMs, or a new model family, to an existing app triggers a major version upgrade that requires admin approval.
LLMs will become a paid Forge feature soon. Usage (token input/output volume) will appear in the developer console under usage and costs. Specific pricing will be published before preview.
Requests to Forge LLMs undergo the same moderation checks as Atlassian first‑party AI and Rovo features. High‑risk messages (per the Acceptable Use Policy) are blocked.