<aside> ⚠️
Switching AI language models (LLMs) is done on a per-project basis and applies to all agents within that project. (This function is available to team owners and administrators only.)
</aside>
<aside> ⚠️
This page describes the AI language models (LLMs) available for use with Jitera SaaS.
If you have contracted a self-hosted environment through a partner, the AI language models (LLMs) available for switching may differ. Please contact your company’s CS representative for details.
</aside>
| AI Language Model (LLM) | Cloud Platform | Host Region | Model Overview |
|---|---|---|---|
| GPT-4.1 | Azure | Japan | OpenAI’s latest flagship model (successor to GPT-4o). Supports an ultra-long context of up to 1M tokens, with improved coding and instruction-following performance; up to 40% faster and 80% lower cost for equivalent tasks. |
| OpenAI o3 | Azure | Japan | OpenAI’s cutting-edge “reasoning” model. Can analyze complex inputs including images, and supports tools such as web search, Python execution, and image generation. |
| GPT-4o | Azure | Japan | OpenAI’s multimodal “Omni” model. Fast and capable of processing both text and images. |
| OpenAI o1 | Azure | Sweden | OpenAI’s first “reasoning” model. Thinks through problems step by step before responding, excelling at complex reasoning tasks. |
| OpenAI o3-mini | Azure | Sweden | Compact version of o3. High speed and reduced resource consumption while maintaining accuracy. |
| Claude-3.5-sonnet | AWS Bedrock | Japan | Anthropic Claude v3.5 “Sonnet” version. Lightweight, excellent in safety and conversation quality. |
| Claude-3.7-sonnet | AWS Bedrock | Japan | Claude v3.7 “Sonnet” version. Achieves creative expression and natural dialogue with high precision. |
| Claude-4.0-sonnet | AWS Bedrock | Japan | Balanced model of Claude v4. Combines high performance and efficiency, suitable for a wide range of tasks from daily business to agent coding. |
| Gemini-2.0-flash | | Global | Google Gemini v2.0 “Flash” version. A lightweight model focused on ultra-fast response. |