Compare commits

...

1 Commits

Author SHA1 Message Date
Bruce MacDonald
cc3ac5fee3 docs: update instructions for ollama config command
These tools can be automatically configured using the new ollama config command
2026-01-21 17:03:41 -08:00
4 changed files with 132 additions and 31 deletions

View File

@@ -26,6 +26,16 @@ irm https://claude.ai/install.ps1 | iex
## Usage with Ollama
Configure Claude Code to use Ollama:
```shell
ollama config claude
```
This will prompt you to select a model and automatically configure Claude Code to use Ollama.
<Accordion title="Manual Configuration">
Claude Code connects to Ollama using the Anthropic-compatible API.
1. Set the environment variables:
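A minimal sketch of step 1, using the same values as the inline example below (the placeholder token `ollama` and the default local server address):

```shell
# Point Claude Code's Anthropic-compatible client at the local Ollama server
export ANTHROPIC_AUTH_TOKEN=ollama
export ANTHROPIC_BASE_URL=http://localhost:11434
```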
@@ -47,7 +57,9 @@ Or run with environment variables inline:
ANTHROPIC_AUTH_TOKEN=ollama ANTHROPIC_BASE_URL=http://localhost:11434 claude --model gpt-oss:20b
```
</Accordion>
<Note>Claude Code requires a large context window. We recommend at least 32K tokens. See the [context length documentation](/context-length) for how to adjust context length in Ollama.</Note>
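One way to raise the context length, sketched here on the assumption that the server honors the `OLLAMA_CONTEXT_LENGTH` environment variable as described in the context length docs linked above:

```shell
# Raise the server's context window to 32K tokens (assumes Ollama reads
# OLLAMA_CONTEXT_LENGTH at startup; see the context length documentation)
export OLLAMA_CONTEXT_LENGTH=32768
# then restart the server so the setting takes effect:
#   ollama serve
```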
## Connecting to ollama.com

View File

@@ -2,22 +2,31 @@
title: Codex
---
Codex is OpenAI's agentic coding tool for the command line.
## Install
Install the [Codex CLI](https://developers.openai.com/codex/cli/):
```shell
npm install -g @openai/codex
```
## Usage with Ollama
Configure Codex to use Ollama:
```shell
ollama config codex
```
This will prompt you to select a model and automatically configure Codex to use Ollama.
<Accordion title="Manual Configuration">
To use `codex` with Ollama, use the `--oss` flag:
```shell
codex --oss
```
@@ -25,20 +34,22 @@ codex --oss
By default, codex will use the local `gpt-oss:20b` model. However, you can specify a different model with the `-m` flag:
```shell
codex --oss -m gpt-oss:120b
```
### Cloud Models
```shell
codex --oss -m gpt-oss:120b-cloud
```
</Accordion>
<Note>Codex requires a larger context window. It is recommended to use a context window of at least 32K tokens.</Note>
## Connecting to ollama.com
Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
To use ollama.com directly, edit your `~/.codex/config.toml` file to point to ollama.com.
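A sketch of what that `config.toml` could look like, assuming the Codex CLI's `model_providers` table; the key names and model choice here are illustrative, so check the Codex configuration reference before relying on them:

```toml
# Illustrative ~/.codex/config.toml pointing Codex at ollama.com
model = "qwen3-coder:480b"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "https://ollama.com/v1"
env_key = "OLLAMA_API_KEY"
```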

View File

@@ -2,6 +2,7 @@
title: Droid
---
Droid is Factory's agentic coding tool for the command line.
## Install
@@ -11,63 +12,77 @@ Install the [Droid CLI](https://factory.ai/):
curl -fsSL https://app.factory.ai/cli | sh
```
## Usage with Ollama
Configure Droid to use Ollama:
```shell
ollama config droid
```
This will prompt you to select models and automatically configure Droid to use Ollama.
<Accordion title="Manual Configuration">
Add a local configuration block to `~/.factory/settings.json`:
```json
{
  "customModels": [
    {
      "model": "qwen3-coder",
      "displayName": "qwen3-coder [Ollama]",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 32000
    }
  ]
}
```
Adjust `maxOutputTokens` based on your model's context length (the automated setup detects this automatically).
### Cloud Models
`qwen3-coder:480b-cloud` is the recommended model for use with Droid.
Add the cloud configuration block to `~/.factory/settings.json`:
```json
{
  "customModels": [
    {
      "model": "qwen3-coder:480b-cloud",
      "displayName": "qwen3-coder:480b-cloud [Ollama]",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 128000
    }
  ]
}
```
</Accordion>
<Note>Droid requires a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.</Note>
## Connecting to ollama.com
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
2. Add the cloud configuration block to `~/.factory/settings.json`:
```json
{
  "customModels": [
    {
      "model": "qwen3-coder:480b",
      "displayName": "qwen3-coder:480b [Ollama Cloud]",
      "baseUrl": "https://ollama.com/v1",
      "apiKey": "OLLAMA_API_KEY",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 128000
    }
  ]
}
```

View File

@@ -0,0 +1,63 @@
---
title: OpenCode
---
OpenCode is an agentic coding tool for the terminal.
## Install
Install [OpenCode](https://opencode.ai):
```shell
curl -fsSL https://opencode.ai/install | bash
```
## Usage with Ollama
Configure OpenCode to use Ollama:
```shell
ollama config opencode
```
This will prompt you to select models and automatically configure OpenCode to use Ollama.
<Accordion title="Manual Configuration">
Add the Ollama provider to `~/.config/opencode/opencode.json`:
```json
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"ollama": {
"npm": "@ai-sdk/openai-compatible",
"name": "Ollama (local)",
"options": {
"baseURL": "http://localhost:11434/v1"
},
"models": {
"qwen3-coder": {
"name": "qwen3-coder [Ollama]"
}
}
}
}
}
```
</Accordion>
<Note>OpenCode requires a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.</Note>
## Recommended Models
### Cloud models
- `qwen3-coder:480b` - Large coding model
- `glm-4.7:cloud` - High-performance cloud model
- `minimax-m2.1:cloud` - Fast cloud model
### Local models
- `qwen3-coder` - Excellent for coding tasks
- `gpt-oss:20b` - Strong general-purpose model
- `gpt-oss:120b` - Larger general-purpose model for more complex tasks