---
title: Claude Code
---

Claude Code is Anthropic's agentic coding tool that can read, modify, and execute code in your working directory.

Open models can be used with Claude Code through Ollama's Anthropic-compatible API, letting you run models such as `qwen3-coder` and `gpt-oss:20b`.
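
Because the API is Anthropic-compatible, you can also call it directly. A minimal sketch, assuming a default local install and that Ollama mirrors Anthropic's `/v1/messages` route on port 11434:

```shell
# The key value is arbitrary; a local Ollama does not validate it.
curl http://localhost:11434/v1/messages \
  -H "x-api-key: ollama" \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-oss:20b",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```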

## Install

Install Claude Code:

```powershell
irm https://claude.ai/install.ps1 | iex
```

## Usage with Ollama

Configure Claude Code to use Ollama:

```shell
ollama config claude
```

This will prompt you to select a model and automatically configure Claude Code to use Ollama.

<Accordion title="Manual Configuration">

Claude Code connects to Ollama using the Anthropic-compatible API.

1. Set the environment variables:
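
For example, using the same values as the inline command shown below:

```shell
export ANTHROPIC_AUTH_TOKEN=ollama
export ANTHROPIC_BASE_URL=http://localhost:11434

claude --model gpt-oss:20b
```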

Or run with environment variables inline:

```shell
ANTHROPIC_AUTH_TOKEN=ollama ANTHROPIC_BASE_URL=http://localhost:11434 claude --model gpt-oss:20b
```

</Accordion>

<Note>Claude Code requires a large context window. We recommend at least 32K tokens. See the [context length documentation](/context-length) for how to adjust context length in Ollama.</Note>

## Connecting to ollama.com

```shell
claude --model glm-4.7:cloud
```

### Local models

- `qwen3-coder` - Excellent for coding tasks
- `gpt-oss:20b` - Strong general-purpose model
- `gpt-oss:120b` - Larger general-purpose model for more complex tasks

---
title: Codex
---

Codex is OpenAI's agentic coding tool for the command line.

## Install

Install the [Codex CLI](https://developers.openai.com/codex/cli/):

```shell
npm install -g @openai/codex
```

## Usage with Ollama

Configure Codex to use Ollama:

```shell
ollama config codex
```

This will prompt you to select a model and automatically configure Codex to use Ollama.

<Accordion title="Manual Configuration">

To use `codex` with Ollama, use the `--oss` flag:

```shell
codex --oss
```

By default, Codex uses the local `gpt-oss:20b` model. You can specify a different model with the `-m` flag:

```shell
codex --oss -m gpt-oss:120b
```

### Cloud Models

```shell
codex --oss -m gpt-oss:120b-cloud
```

</Accordion>

<Note>Codex requires a large context window. We recommend at least 32K tokens. See [Context length](/context-length) for more information.</Note>

## Connecting to ollama.com

Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
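
For example, in the shell where you run Codex (the key value is a placeholder):

```shell
export OLLAMA_API_KEY=your-api-key
```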

To use ollama.com directly, edit your `~/.codex/config.toml` file to point to ollama.com.

---
title: Droid
---

Droid is Factory's agentic coding tool for the command line.

## Install

Install the [Droid CLI](https://factory.ai/):

```shell
curl -fsSL https://app.factory.ai/cli | sh
```

## Usage with Ollama

Configure Droid to use Ollama:

```shell
ollama config droid
```

This will prompt you to select models and automatically configure Droid to use Ollama.

<Accordion title="Manual Configuration">

Add a local configuration block to `~/.factory/settings.json`:

```json
{
  "customModels": [
    {
      "model": "qwen3-coder",
      "displayName": "qwen3-coder [Ollama]",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 32000
    }
  ]
}
```

Adjust `maxOutputTokens` based on your model's context length (the automated setup detects this for you).
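
To check a model's context length before picking a value, you can inspect it with `ollama show`:

```shell
# Prints model details, including context length
ollama show qwen3-coder
```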

### Cloud Models

`qwen3-coder:480b-cloud` is the recommended model for use with Droid.

Add the cloud configuration block to `~/.factory/settings.json`:

```json
{
  "customModels": [
    {
      "model": "qwen3-coder:480b-cloud",
      "displayName": "qwen3-coder:480b-cloud [Ollama]",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 128000
    }
  ]
}
```

</Accordion>

<Note>Droid requires a large context window. We recommend at least 32K tokens. See [Context length](/context-length) for more information.</Note>

## Connecting to ollama.com

1. Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
2. Add the cloud configuration block to `~/.factory/settings.json`:

```json
{
  "customModels": [
    {
      "model": "qwen3-coder:480b",
      "displayName": "qwen3-coder:480b [Ollama Cloud]",
      "baseUrl": "https://ollama.com/v1",
      "apiKey": "OLLAMA_API_KEY",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 128000
    }
  ]
}
```

Run `droid` in a new terminal to load the new settings.

---
title: OpenCode
---

OpenCode is an agentic coding tool for the terminal.

## Install

Install [OpenCode](https://opencode.ai):

```shell
curl -fsSL https://opencode.ai/install | bash
```

## Usage with Ollama

Configure OpenCode to use Ollama:

```shell
ollama config opencode
```

This will prompt you to select models and automatically configure OpenCode to use Ollama.

<Accordion title="Manual Configuration">

Add the Ollama provider to `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3-coder": {
          "name": "qwen3-coder [Ollama]"
        }
      }
    }
  }
}
```
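
To confirm the `baseURL` is reachable and see which model IDs Ollama exposes, you can query the OpenAI-compatible endpoint (assumes a default local install):

```shell
# Lists available models via Ollama's OpenAI-compatible API
curl http://localhost:11434/v1/models
```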

</Accordion>

<Note>OpenCode requires a large context window. We recommend at least 32K tokens. See [Context length](/context-length) for more information.</Note>

## Recommended Models

### Cloud models

- `qwen3-coder:480b` - Large coding model
- `glm-4.7:cloud` - High-performance cloud model
- `minimax-m2.1:cloud` - Fast cloud model

### Local models

- `qwen3-coder` - Excellent for coding tasks
- `gpt-oss:20b` - Strong general-purpose model
- `gpt-oss:120b` - Larger general-purpose model for more complex tasks