Last updated Nov 21, 2025

Forge LLMs API

Forge LLMs lets your Forge app call Atlassian‑hosted large language models (LLMs) to add secure AI features without leaving the Atlassian platform. Apps using this API are badged as Runs on Atlassian, indicating they leverage Atlassian’s security, compliance, and scalability. The API provides optimized, governed access to supported models so you can focus on creating innovative AI experiences while Atlassian handles model integration and infrastructure.

Manifest reference for LLM module

See the LLM module reference for details on declaring the llm module in your manifest.yml.

Important:

The app retains its Runs on Atlassian eligibility after the module is added.

Versioning

The llm module is required to enable Forge LLMs. When you add the llm module to an app's manifest.yml, it triggers a major version upgrade and requires administrators of existing installations to review and approve the update.

EAP limitations

During the EAP, you cannot deploy your app to the production environment or list it publicly on the Atlassian Marketplace.

Tutorials and example apps

Node.js SDK

The @forge/llm SDK gives you a lightweight, purpose-built client for invoking Atlassian-hosted LLMs directly from Forge runtime functions. Use chat() for structured multi-turn exchanges, provide tool definitions so the model can call typed functions, and inspect the returned usage data to guide adaptive behaviour.

For runnable examples (tool wiring, retries, error handling), see the Forge LLMs tutorials and example apps section above.

Usage

import { chat } from '@forge/llm';
try {
  const response = await chat({
    model: 'claude-3-7-sonnet-20250219',
    messages: [
      {
        role: 'user', content: 'Write a short poem about Forge LLMs.'
      }
    ],
  });

  console.log("#### LLM response:", JSON.stringify(response));
} catch (err) {
  console.error('#### LLM request failed:', { error: err.context?.responseText });
  throw err;
}
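
chat() takes the whole conversation in messages, so multi-turn exchanges are expressed by replaying earlier turns on each call. A minimal sketch, assuming the conversation content, system prompt, and model choice are purely illustrative:

import { chat } from '@forge/llm';

// Multi-turn sketch: earlier assistant turns are included in `messages`
// so the model sees the conversation history. Content is illustrative.
const response = await chat({
  model: 'claude-3-5-haiku-20241022',
  messages: [
    { role: 'system', content: 'You are a concise release-notes assistant.' },
    { role: 'user', content: 'Summarise this change: added bulk export of issues.' },
    { role: 'assistant', content: 'Added bulk export for issues.' },
    { role: 'user', content: 'Now rewrite that summary for end users.' },
  ],
});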

Module validation

The SDK requires the llm module to be defined in your manifest.yml. If the SDK is used without declaring this module, linting will fail with an error like:

Error: LLM package is used but 'llm' module is not defined in the manifest

The SDK can fix your manifest automatically. After linting, the manifest will include an llm module entry, as in this corrected manifest.yml:

modules:
  llm:
    - key: llm-app
      model:
        - claude

Please refer to the LLM module reference for details on how to define the module.

Request schemas

interface Prompt {
  model: string;
  messages: {
    role: "system" | "user" | "assistant" | "tool";
    content: string | ContentPart[];
  }[];
  max_completion_tokens?: number;
  temperature?: number;
  top_p?: number;
  tools?: {
    type: "function";
    function: {
      name: string;
      description: string;
      parameters: object;
    };
  }[];
  tool_choice?: "auto" | "none" | "required" | { type: "function"; function: { name: string } };
}
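
As an example, here is a sketch of a request that exposes one typed function to the model, following the Prompt shape above. The function name getIssueCount and its JSON Schema parameters are hypothetical, not part of the API:

import { chat } from '@forge/llm';

// Tool-calling sketch: the request shape follows the Prompt interface
// above; `getIssueCount` and its parameters are hypothetical.
const response = await chat({
  model: 'claude-sonnet-4-5-20250929',
  messages: [
    { role: 'user', content: 'How many open issues are in project FORGE?' },
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'getIssueCount',
        description: 'Count the issues in a project that match a status',
        parameters: {
          type: 'object',
          properties: {
            projectKey: { type: 'string' },
            status: { type: 'string' },
          },
          required: ['projectKey'],
        },
      },
    },
  ],
  tool_choice: 'auto', // let the model decide whether to call the tool
});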

Important validation rules

The following request validation rules apply to specific models:

Rule | Models
When adjusting sampling parameters, set either temperature or top_p, not both in the same request. | claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929
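
For example, a request to one of these models should set only one of the two sampling parameters; the value here is arbitrary:

import { chat } from '@forge/llm';

// Valid for the models above: only `temperature` is set. Adding `top_p`
// to the same request would violate the rule. The value is arbitrary.
const response = await chat({
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Suggest a title for this page.' }],
  temperature: 0.3,
});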

Model selection

We plan to launch with support for three Claude variants: Sonnet, Opus, and Haiku. You choose the model per request, allowing you to balance latency, capability, and cost for each use case.

Supported models

Model ID | Variant | Family
claude-3-5-haiku-20241022 | Haiku | Claude
claude-3-7-sonnet-20250219 | Sonnet | Claude
claude-opus-4-20250514 | Opus | Claude
claude-sonnet-4-20250514 | Sonnet | Claude
claude-sonnet-4-5-20250929 | Sonnet | Claude
claude-haiku-4-5-20251001 | Haiku | Claude
claude-opus-4-1-20250805 | Opus | Claude

Claude - Opus

  • Most capable (best for complex, deep reasoning tasks)
  • Slowest (higher latency due to depth)
  • Highest cost

Claude - Sonnet

  • Balanced capability
  • Moderate speed
  • Moderate cost

Claude - Haiku

  • Fast and efficient (best for lightweight or high‑volume tasks)
  • Lowest cost
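
Because the model is chosen per request, an app can route each task to the variant that fits it. A minimal sketch, assuming illustrative task categories (the routing rule is not prescribed by the API):

import { chat } from '@forge/llm';

// Hypothetical routing: map a task weight to a supported model ID.
type TaskWeight = 'light' | 'balanced' | 'complex';

const MODEL_BY_WEIGHT: Record<TaskWeight, string> = {
  light: 'claude-haiku-4-5-20251001',     // fast, lowest cost
  balanced: 'claude-sonnet-4-5-20250929', // moderate speed and cost
  complex: 'claude-opus-4-1-20250805',    // deepest reasoning, highest cost
};

async function runTask(weight: TaskWeight, prompt: string) {
  return chat({
    model: MODEL_BY_WEIGHT[weight],
    messages: [{ role: 'user', content: prompt }],
  });
}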

AI models evolve quickly, so specific versions may change before launch. Initially only text input/output is supported; multimodal support may be considered later.

Admin experience

Administrators will be informed (via the Marketplace listing and during installation) when an app uses Forge LLMs. Adding Forge LLMs, or a new model family, to an existing app triggers a major version upgrade that requires admin approval.

Pricing

LLMs will become a paid Forge feature soon. Usage (token input/output volume) will appear in the developer console under usage and costs. Specific pricing will be published before preview.

Responsible AI

Requests to Forge LLMs undergo the same moderation checks as Atlassian first‑party AI and Rovo features. High‑risk messages (per the Acceptable Use Policy) are blocked.
