Compare commits

...

13 Commits

Author SHA1 Message Date
Parth Sareen
598b74d42c cmd/config: add minimax-m2.5 (#14223) 2026-02-12 14:29:50 -08:00
Jeffrey Morgan
935a48ed1a scripts: skip macOS symlink creation if already correct (#14142) 2026-02-12 12:44:42 -08:00
Daniel Hiltgen
de39e24bf7 win: progress reporting on install download (#14219)
* win: progress reporting on install download

Downloading Ollama...
  [####################################    ] 91%  1106.6 / 1204.2 MB

* review comments
2026-02-12 12:06:56 -08:00
Eva H
519b11eba1 site: update readme (#14217) 2026-02-12 12:14:13 -05:00
Eva H
379fd64fa8 Revert "update README (#14213)" (#14215) 2026-02-12 12:06:00 -05:00
frob
59c019a6fb x: configurable model load timeout (#14204)
Co-authored-by: rick <rick@frob.com.au>
2026-02-12 09:05:42 -08:00
Eva H
fad3bcccb2 update README (#14213) 2026-02-12 11:59:42 -05:00
Bruce MacDonald
bd6697ad95 docs: update quickstart for tui (#14208) 2026-02-12 08:44:33 -08:00
SamareshSingh
f8dc7c9f54 docs: fix openapi schema for /api/ps and /api/tags endpoints (#14210) 2026-02-11 17:37:40 -08:00
Patrick Devine
4a3741129d bug: fix loading non-mlx models when ollama is built with mlx support (#14211)
This change fixes an issue where GGML-based models (for either the Ollama runner or
the legacy llama.cpp runner) would try to load the MLX library, causing a panic so
the model failed to start.
2026-02-11 14:48:33 -08:00
Parth Sareen
77ba9404ac cmd/tui: improve model picker UX (#14209) 2026-02-11 14:36:54 -08:00
Patrick Devine
0aaf6119ec feature: add ctrl-g to allow users to use an editor to edit their prompt (#14197) 2026-02-11 13:04:41 -08:00
Parth Sareen
f08427c138 cmd: TUI UX improvements (#14198) 2026-02-11 10:18:41 -08:00
29 changed files with 3235 additions and 2787 deletions

README.md

@@ -1,20 +1,30 @@
<p align="center">
<a href="https://ollama.com">
<img src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7" alt="ollama" width="200"/>
</a>
</p>
# Ollama
Start building with open models.
## Download
### macOS
```shell
curl -fsSL https://ollama.com/install.sh | sh
```
or [download manually](https://ollama.com/download/Ollama.dmg)
### Windows
```shell
irm https://ollama.com/install.ps1 | iex
```
or [download manually](https://ollama.com/download/OllamaSetup.exe)
### Linux
@@ -36,649 +46,311 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
### Community
- [Discord](https://discord.gg/ollama)
- [𝕏 (Twitter)](https://x.com/ollama)
- [Reddit](https://reddit.com/r/ollama)
## Get started
```shell
ollama
```
You'll be prompted to run a model or connect Ollama to your existing agents or applications such as `claude`, `codex`, `openclaw`, and more.
### Coding
To launch a specific integration:
```
ollama launch claude
```
Supported integrations include [Claude Code](https://docs.ollama.com/integrations/claude-code), [Codex](https://docs.ollama.com/integrations/codex), [Droid](https://docs.ollama.com/integrations/droid), and [OpenCode](https://docs.ollama.com/integrations/opencode).
### AI assistant
Use [OpenClaw](https://docs.ollama.com/integrations/openclaw) to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:
```
ollama launch openclaw
```
### Chat with a model
Run and chat with [Gemma 3](https://ollama.com/library/gemma3):
```
ollama run gemma3
```
## Model library
See [ollama.com/library](https://ollama.com/library) for the full list.
Here are some example models that can be downloaded:
| Model | Parameters | Size | Download |
| ------------------ | ---------- | ----- | -------------------------------- |
| Gemma 3 | 1B | 815MB | `ollama run gemma3:1b` |
| Gemma 3 | 4B | 3.3GB | `ollama run gemma3` |
| Gemma 3 | 12B | 8.1GB | `ollama run gemma3:12b` |
| Gemma 3 | 27B | 17GB | `ollama run gemma3:27b` |
| QwQ | 32B | 20GB | `ollama run qwq` |
| DeepSeek-R1 | 7B | 4.7GB | `ollama run deepseek-r1` |
| DeepSeek-R1 | 671B | 404GB | `ollama run deepseek-r1:671b` |
| Llama 4 | 109B | 67GB | `ollama run llama4:scout` |
| Llama 4 | 400B | 245GB | `ollama run llama4:maverick` |
| Llama 3.3 | 70B | 43GB | `ollama run llama3.3` |
| Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` |
| Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2:1b` |
| Llama 3.2 Vision | 11B | 7.9GB | `ollama run llama3.2-vision` |
| Llama 3.2 Vision | 90B | 55GB | `ollama run llama3.2-vision:90b` |
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
| Phi 4 | 14B | 9.1GB | `ollama run phi4` |
| Phi 4 Mini | 3.8B | 2.5GB | `ollama run phi4-mini` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA | 7B | 4.5GB | `ollama run llava` |
| Granite-3.3 | 8B | 4.9GB | `ollama run granite3.3` |
> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
## Customize a model
### Import from GGUF
Ollama supports importing GGUF models in the Modelfile:
1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.
```
FROM ./vicuna-33b.Q4_0.gguf
```
2. Create the model in Ollama
```shell
ollama create example -f Modelfile
```
3. Run the model
```shell
ollama run example
```
### Import from Safetensors
See the [guide](https://docs.ollama.com/import) on importing models for more information.
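As a minimal sketch, a Safetensors import follows the same Modelfile pattern as GGUF, with `FROM` pointing at a directory of Safetensors weights (the path below is hypothetical):
```shell
# hypothetical local directory containing Safetensors weights
echo "FROM /path/to/safetensors-model" > Modelfile
ollama create my-model -f Modelfile
ollama run my-model
```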
### Customize a prompt
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:
```shell
ollama pull llama3.2
```
Create a `Modelfile`:
```
FROM llama3.2
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
For more information on working with a Modelfile, see the [Modelfile](https://docs.ollama.com/modelfile) documentation.
## CLI Reference
### Create a model
`ollama create` is used to create a model from a Modelfile.
```shell
ollama create mymodel -f ./Modelfile
```
### Pull a model
```shell
ollama pull llama3.2
```
> This command can also be used to update a local model. Only the diff will be pulled.
### Remove a model
```shell
ollama rm llama3.2
```
### Copy a model
```shell
ollama cp llama3.2 my-model
```
### Multiline input
For multiline input, you can wrap text with `"""`:
```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```
### Multimodal models
```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
```
> **Output**: The image features a yellow smiley face, which is likely the central focus of the picture.
### Pass the prompt as an argument
```shell
ollama run llama3.2 "Summarize this file: $(cat README.md)"
```
> **Output**: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
### Show model information
```shell
ollama show llama3.2
```
### List models on your computer
```shell
ollama list
```
### List which models are currently loaded
```shell
ollama ps
```
### Stop a model which is currently running
```shell
ollama stop llama3.2
```
### Generate embeddings from the CLI
```shell
ollama run embeddinggemma "Your text to embed"
```
You can also pipe text for scripted workflows:
```shell
echo "Your text to embed" | ollama run embeddinggemma
```
### Start Ollama
`ollama serve` is used when you want to start Ollama without running the desktop application.
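For example, to bind the server to a non-default address or port, set the standard `OLLAMA_HOST` environment variable (the port below is illustrative):
```shell
# serve on a custom port instead of the default 127.0.0.1:11434
OLLAMA_HOST=127.0.0.1:11435 ollama serve
# point the CLI at that server from another shell
OLLAMA_HOST=127.0.0.1:11435 ollama list
```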
## Building
See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/development.md).
### Running local builds
After building, start the server:
```shell
./ollama serve
```
Finally, in a separate shell, run a model:
```shell
./ollama run llama3.2
```
## Building with MLX (experimental)
First build the MLX libraries:
```shell
cmake --preset MLX
cmake --build --preset MLX --parallel
cmake --install build --component MLX
```
When building with the `-tags mlx` flag, the main `ollama` binary includes MLX support for experimental features like image generation:
```shell
go build -tags mlx .
```
Finally, start the server:
```shell
./ollama serve
```
### Building MLX with CUDA
When building with CUDA, use the preset "MLX CUDA 13" or "MLX CUDA 12" to enable CUDA with default architectures:
```shell
cmake --preset 'MLX CUDA 13'
cmake --build --preset 'MLX CUDA 13' --parallel
cmake --install build --component MLX
```
See the [quickstart guide](https://docs.ollama.com/quickstart) for more details.
## REST API
Ollama has a REST API for running and managing models.
### Generate a response
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt":"Why is the sky blue?"
}'
```
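By default, `/api/generate` streams the response as a series of JSON objects. Setting `"stream": false` returns a single response object instead; a sketch of the same request without streaming:
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```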
### Chat with a model
```shell
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Why is the sky blue?"
  }],
  "stream": false
}'
```
See the [API documentation](https://docs.ollama.com/api) for all endpoints.
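Embeddings are available over the same API. A sketch using the `/api/embed` endpoint with an embedding model (model choice is illustrative):
```shell
curl http://localhost:11434/api/embed -d '{
  "model": "embeddinggemma",
  "input": "Your text to embed"
}'
```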
### Python
```shell
pip install ollama
```
```python
from ollama import chat

response = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response.message.content)
```
### JavaScript
```shell
npm i ollama
```
```javascript
import ollama from "ollama";
const response = await ollama.chat({
model: "gemma3",
messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);
```
## Supported backends
- [llama.cpp](https://github.com/ggml-org/llama.cpp) project founded by Georgi Gerganov.
## Documentation
- [CLI reference](https://docs.ollama.com/cli)
- [REST API reference](https://docs.ollama.com/api)
- [Importing models](https://docs.ollama.com/import)
- [Modelfile reference](https://docs.ollama.com/modelfile)
- [Building from source](https://github.com/ollama/ollama/blob/main/docs/development.md)
## Community Integrations
### Web & Desktop
> Want to add your project? Open a pull request.
- [Onyx](https://github.com/onyx-dot-app/onyx)
- [Open WebUI](https://github.com/open-webui/open-webui)
- [SwiftChat (macOS with ReactNative)](https://github.com/aws-samples/swift-chat)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [Hollama](https://github.com/fmaclen/hollama)
- [Lollms WebUI (Single user)](https://github.com/ParisNeo/lollms-webui)
- [Lollms (Multi users)](https://github.com/ParisNeo/lollms)
- [LibreChat](https://github.com/danny-avila/LibreChat)
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [AI-UI](https://github.com/bajahaw/ai-ui)
- [Saddle](https://github.com/jikkuatwork/saddle)
- [TagSpaces](https://www.tagspaces.org) (A platform for file-based apps, [utilizing Ollama](https://docs.tagspaces.org/ai/) for the generation of tags and descriptions)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-AGI)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [Dify.AI](https://github.com/langgenius/dify)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)
- [Msty](https://msty.app)
- [Chatbox](https://github.com/Bin-Huang/Chatbox)
- [WinForm Ollama Copilot](https://github.com/tgraupmann/WinForm_Ollama_Copilot)
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) with [Get Started Doc](https://docs.nextchat.dev/models/ollama)
- [Alpaca WebUI](https://github.com/mmo80/alpaca-webui)
- [OllamaGUI](https://github.com/enoch1118/ollamaGUI)
- [OpenAOE](https://github.com/InternLM/OpenAOE)
- [Odin Runes](https://github.com/leonid20000/OdinRunes)
- [LLM-X](https://github.com/mrdjohnson/llm-x) (Progressive Web App)
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
- [Screenpipe](https://github.com/mediar-ai/screenpipe) (24/7 screen & mic recording with AI-powered search, uses Ollama for local LLM features)
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
- [IntelliBar](https://intellibar.app/) (AI-powered assistant for macOS)
- [Jirapt](https://github.com/AliAhmedNada/jirapt) (Jira Integration to generate issues, tasks, epics)
- [ojira](https://github.com/AliAhmedNada/ojira) (Jira chrome plugin to easily generate descriptions for tasks)
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Interactive chat tool that can leverage Ollama models for rapid understanding and navigation of GitHub code repositories)
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- [StreamDeploy](https://github.com/StreamDeploy-DevRel/streamdeploy-llm-app-scaffold) (LLM Application Scaffold)
- [chat](https://github.com/swuecho/chat) (chat web app for teams)
- [Lobe Chat](https://github.com/lobehub/lobe-chat) with [Integrating Doc](https://lobehub.com/docs/self-hosting/examples/ollama)
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) (RWKV offline LLM deployment tool, also usable as a client for ChatGPT and Ollama)
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) (app to evaluate and compare models)
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
- [Casibase](https://casibase.org) (An open source AI knowledge base and dialogue system combining the latest RAG, SSO, ollama support, and multiple large language models.)
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
- [Shinkai Desktop](https://github.com/dcSpark/shinkai-apps) (Two click install Local AI using Ollama + Files + RAG)
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in Discord)
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
- [R2R](https://github.com/SciPhi-AI/R2R) (Open-source RAG engine)
- [Ollama-Kis](https://github.com/elearningshow/ollama-kis) (A simple easy-to-use GUI with sample custom LLM for Drivers Education)
- [OpenGPA](https://opengpa.org) (Open-source offline-first Enterprise Agentic Application)
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
- [Sidellama](https://github.com/gyopak/sidellama) (browser-based LLM client)
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
- [PyGPT](https://github.com/szczyglis-dev/py-gpt) (AI desktop assistant for Linux, Windows, and Mac)
- [Alpaca](https://github.com/Jeffser/Alpaca) (An Ollama client application for Linux and macOS made with GTK4 and Adwaita)
- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT/blob/master/docs/content/platform/ollama.md) (AutoGPT Ollama integration)
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot, and Ollama4j
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
- [Cline](https://github.com/cline/cline) - Formerly known as Claude Dev is a VS Code extension for multi-file/whole-repo coding
- [Void](https://github.com/voideditor/void) (Open source AI code editor and Cursor alternative)
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
- [Tkinter-based client](https://github.com/chyok/ollama-gui) (Python tkinter-based Client for Ollama)
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
- [Local Multimodal AI Chat](https://github.com/Leon-Sander/Local-Multimodal-AI-Chat) (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG and deep research on Mac/Windows/Linux)
- [OrionChat](https://github.com/EliasPereirah/OrionChat) - OrionChat is a web interface for chatting with different AI providers
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
- [Web management](https://github.com/lemonit-eric-mao/ollama-web-management) (Web management page)
- [Promptery](https://github.com/promptery/promptery) (Desktop client for Ollama)
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
- [chat-ollama](https://github.com/annilq/chat-ollama) (a React Native client for Ollama)
- [SpaceLlama](https://github.com/tcsenpai/spacellama) (Firefox and Chrome extension to quickly summarize web pages with ollama in a sidebar)
- [YouLama](https://github.com/tcsenpai/youlama) (Webapp to quickly summarize any YouTube video, supporting Invidious as well)
- [DualMind](https://github.com/tcsenpai/dualmind) (Experimental app allowing two models to talk to each other in the terminal or in a web interface)
- [ollamarama-matrix](https://github.com/h1ddenpr0cess20/ollamarama-matrix) (Ollama chatbot for the Matrix chat protocol)
- [ollama-chat-app](https://github.com/anan1213095357/ollama-chat-app) (Flutter-based chat app)
- [Perfect Memory AI](https://www.perfectmemory.ai/) (Productivity AI assists personalized by what you have seen on your screen, heard, and said in the meetings)
- [Hexabot](https://github.com/hexastack/hexabot) (A conversational AI builder)
- [Reddit Rate](https://github.com/rapidarchitect/reddit_analyzer) (Search and Rate Reddit topics with a weighted summation)
- [OpenTalkGpt](https://github.com/adarshM84/OpenTalkGpt) (Chrome Extension to manage open-source models supported by Ollama, create custom models, and chat with models from a user-friendly UI)
- [VT](https://github.com/vinhnx/vt.ai) (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
- [Nosia](https://github.com/nosia-ai/nosia) (Easy to install and use RAG platform based on Ollama)
- [Witsy](https://github.com/nbonamy/witsy) (An AI Desktop application available for Mac/Windows/Linux)
- [Abbey](https://github.com/US-Artificial-Intelligence/abbey) (A configurable AI interface server with notebooks, document storage, and YouTube support)
- [Minima](https://github.com/dmayboroda/minima) (RAG with on-premises or fully local workflow)
- [aidful-ollama-model-delete](https://github.com/AidfulAI/aidful-ollama-model-delete) (User interface for simplified model cleanup)
- [Perplexica](https://github.com/ItzCrazyKns/Perplexica) (An AI-powered search engine & an open-source alternative to Perplexity AI)
- [Ollama Chat WebUI for Docker](https://github.com/oslook/ollama-webui) (Supports local Docker deployment; lightweight Ollama WebUI)
- [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VS Code extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
- [MinimalNextOllamaChat](https://github.com/anilkay/MinimalNextOllamaChat) (Minimal Web UI for Chat and Model Control)
- [Chipper](https://github.com/TilmanGriesel/chipper) AI interface for tinkerers (Ollama, Haystack RAG, Python)
- [ChibiChat](https://github.com/CosmicEventHorizon/ChibiChat) (Kotlin-based Android app to chat with Ollama and Koboldcpp API endpoints)
- [LocalLLM](https://github.com/qusaismael/localllm) (Minimal Web-App to run ollama models on it with a GUI)
- [Ollamazing](https://github.com/buiducnhat/ollamazing) (Web extension to run Ollama models)
- [OpenDeepResearcher-via-searxng](https://github.com/benhaotang/OpenDeepResearcher-via-searxng) (A Deep Research equivalent endpoint with Ollama support for running locally)
- [AntSK](https://github.com/AIDotNet/AntSK) (Out-of-the-box & Adaptable RAG Chatbot)
- [MaxKB](https://github.com/1Panel-dev/MaxKB/) (Ready-to-use & flexible RAG Chatbot)
- [yla](https://github.com/danielekp/yla) (Web interface to freely interact with your customized models)
- [LangBot](https://github.com/RockChinQ/LangBot) (LLM-based instant messaging bots platform, with Agents, RAG features, supports multiple platforms)
- [1Panel](https://github.com/1Panel-dev/1Panel/) (Web-based Linux Server Management Tool)
- [AstrBot](https://github.com/Soulter/AstrBot/) (User-friendly LLM-based multi-platform chatbot with a WebUI, supporting RAG, LLM agents, and plugins integration)
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
- [Flufy](https://github.com/Aharon-Bensadoun/Flufy) (A beautiful chat interface for interacting with Ollama's API. Built with React, TypeScript, and Material-UI.)
- [Ellama](https://github.com/zeozeozeo/ellama) (Friendly native app to chat with an Ollama instance)
- [Ollamb](https://github.com/hengkysteen/ollamb) (Simple yet rich in features, cross-platform built with Flutter and designed for Ollama. Try the [web demo](https://hengkysteen.github.io/demo/ollamb/).)
- [Writeopia](https://github.com/Writeopia/Writeopia) (Text editor with integration with Ollama)
- [AppFlowy](https://github.com/AppFlowy-IO/AppFlowy) (AI collaborative workspace with Ollama, cross-platform and self-hostable)
- [Lumina](https://github.com/cushydigit/lumina.git) (A lightweight, minimal React.js frontend for interacting with Ollama servers)
- [Tiny Notepad](https://pypi.org/project/tiny-notepad) (A lightweight, notepad-like interface to chat with ollama available on PyPI)
- [macLlama (macOS native)](https://github.com/hellotunamayo/macLlama) (A native macOS GUI application for interacting with Ollama models, featuring a chat interface.)
- [GPTranslate](https://github.com/philberndt/GPTranslate) (A fast and lightweight, AI powered desktop translation application written with Rust and Tauri. Features real-time translation with OpenAI/Azure/Ollama.)
- [ollama launcher](https://github.com/NGC13009/ollama-launcher) (A launcher for Ollama, aiming to provide users with convenient functions such as ollama server launching, management, or configuration.)
- [ai-hub](https://github.com/Aj-Seven/ai-hub) (AI Hub supports multiple models via API keys and Chat support via Ollama API.)
- [Mayan EDMS](https://gitlab.com/mayan-edms/mayan-edms) (Open source document management system to organize, tag, search, and automate your files with powerful Ollama driven workflows.)
- [Serene Pub](https://github.com/doolijb/serene-pub) (Beginner friendly, open source AI Roleplaying App for Windows, Mac OS and Linux. Search, download and use models with Ollama all inside the app.)
- [Andes](https://github.com/aqerd/andes) (A Visual Studio Code extension that provides a local UI interface for Ollama models)
- [KDeps](https://github.com/kdeps/kdeps) (Kdeps is an offline-first AI framework for building Dockerized full-stack AI applications declaratively using Apple PKL and integrates APIs with Ollama on the backend.)
- [Clueless](https://github.com/KashyapTan/clueless) (Open source, local Cluely alternative: a desktop LLM assistant for talking to anything on your screen using locally served Ollama models; also undetectable during screen sharing)
- [ollama-co2](https://github.com/carbonatedWaterOrg/ollama-co2) (FastAPI web interface for monitoring and managing local and remote Ollama servers with real-time model monitoring and concurrent downloads)
- [Hillnote](https://hillnote.com) (A Markdown-first workspace designed to supercharge your AI workflow. Create documents ready to integrate with Claude, ChatGPT, Gemini, Cursor, and more - all while keeping your work on your device.)
- [Stakpak](https://github.com/stakpak/agent) (An open source, vendor neutral DevOps agent that works with any model, and any stack, for teams who just want to ship)
### Chat Interfaces
#### Web
- [Open WebUI](https://github.com/open-webui/open-webui) - Extensible, self-hosted AI interface
- [Onyx](https://github.com/onyx-dot-app/onyx) - Connected AI workspace
- [LibreChat](https://github.com/danny-avila/LibreChat) - Enhanced ChatGPT clone with multi-provider support
- [Lobe Chat](https://github.com/lobehub/lobe-chat) - Modern chat framework with plugin ecosystem ([docs](https://lobehub.com/docs/self-hosting/examples/ollama))
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) - Cross-platform ChatGPT UI ([docs](https://docs.nextchat.dev/models/ollama))
- [Perplexica](https://github.com/ItzCrazyKns/Perplexica) - AI-powered search engine, open-source Perplexity alternative
- [big-AGI](https://github.com/enricoros/big-AGI) - AI suite for professionals
- [Lollms WebUI](https://github.com/ParisNeo/lollms-webui) - Multi-model web interface
- [ChatOllama](https://github.com/sugarforever/chat-ollama) - Chatbot with knowledge bases
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt) - On-premise AI platform
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama) - ChatGPT-style web interface
- [Hollama](https://github.com/fmaclen/hollama) - Minimal web interface
- [Chatbox](https://github.com/Bin-Huang/Chatbox) - Desktop and web AI client
- [chat](https://github.com/swuecho/chat) - Chat web app for teams
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) - Chat with multiple PDFs using RAG
- [Tkinter-based client](https://github.com/chyok/ollama-gui) - Python desktop client
#### Desktop
- [Dify.AI](https://github.com/langgenius/dify) - LLM app development platform
- [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm) - All-in-one AI app for Mac, Windows, and Linux
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid) - Cross-platform mobile and desktop client
- [Witsy](https://github.com/nbonamy/witsy) - AI desktop app for Mac, Windows, and Linux
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) - Multi-provider desktop client
- [Ollama App](https://github.com/JHubi1/ollama-app) - Multi-platform client for desktop and mobile
- [PyGPT](https://github.com/szczyglis-dev/py-gpt) - AI desktop assistant for Linux, Windows, and Mac
- [Alpaca](https://github.com/Jeffser/Alpaca) - GTK4 client for Linux and macOS
- [SwiftChat](https://github.com/aws-samples/swift-chat) - Cross-platform including iOS, Android, and Apple Vision Pro
- [Enchanted](https://github.com/AugustDev/enchanted) - Native macOS and iOS client
- [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) - Multi-model desktop runner
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) - Evaluate and compare models
- [macai](https://github.com/Renset/macai) - macOS client for Ollama and ChatGPT
- [AI Studio](https://github.com/MindWorkAI/AI-Studio) - Multi-provider desktop IDE
- [Reins](https://github.com/ibrahimcetin/reins) - Parameter tuning and reasoning model support
- [ConfiChat](https://github.com/1runeberg/confichat) - Privacy-focused with optional encryption
- [LLocal.in](https://github.com/kartikm7/llocal) - Electron desktop client
- [MindMac](https://mindmac.app) - AI chat client for Mac
- [Msty](https://msty.app) - Multi-model desktop client
- [BoltAI for Mac](https://boltai.com) - AI chat client for Mac
- [IntelliBar](https://intellibar.app/) - AI-powered assistant for macOS
- [Kerlig AI](https://www.kerlig.com/) - AI writing assistant for macOS
- [Hillnote](https://hillnote.com) - Markdown-first AI workspace
- [Perfect Memory AI](https://www.perfectmemory.ai/) - Productivity AI personalized by screen and meeting history
#### Mobile
- [Ollama Android Chat](https://github.com/sunshine0523/OllamaServer) - One-click Ollama on Android
> SwiftChat, Enchanted, Maid, Ollama App, Reins, and ConfiChat listed above also support mobile platforms.
### Code Editors & Development
- [Cline](https://github.com/cline/cline) - VS Code extension for multi-file/whole-repo coding
- [Continue](https://github.com/continuedev/continue) - Open-source AI code assistant for any IDE
- [Void](https://github.com/voideditor/void) - Open source AI code editor, Cursor alternative
- [Copilot for Obsidian](https://github.com/logancyang/obsidian-copilot) - AI assistant for Obsidian
- [twinny](https://github.com/rjmacarthy/twinny) - Copilot and Copilot chat alternative
- [gptel Emacs client](https://github.com/karthink/gptel) - LLM client for Emacs
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) - Use Ollama as GitHub Copilot
- [Obsidian Local GPT](https://github.com/pfrankov/obsidian-local-gpt) - Local AI for Obsidian
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama) - LLM tool for Emacs
- [orbiton](https://github.com/xyproto/orbiton) - Config-free text editor with Ollama tab completion
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) - Sublime Text 4 AI assistant
- [VT Code](https://github.com/vinhnx/vtcode) - Rust-based terminal coding agent with Tree-sitter
- [QodeAssist](https://github.com/Palm1r/QodeAssist) - AI coding assistant for Qt Creator
- [AI Toolkit for VS Code](https://aka.ms/ai-tooklit/ollama-docs) - Microsoft-official VS Code extension
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama) - Natural language interface for computers
### Libraries & SDKs
- [LiteLLM](https://github.com/BerriAI/litellm) - Unified API for 100+ LLM providers
- [Semantic Kernel](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama) - Microsoft AI orchestration SDK
- [LangChain4j](https://github.com/langchain4j/langchain4j) - Java LangChain ([example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java))
- [LangChainGo](https://github.com/tmc/langchaingo/) - Go LangChain ([example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example))
- [Spring AI](https://github.com/spring-projects/spring-ai) - Spring framework AI support ([docs](https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html))
- [LangChain](https://python.langchain.com/docs/integrations/chat/ollama/) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
- [Ollama for Ruby](https://github.com/crmne/ruby_llm) - Ruby LLM library
- [any-llm](https://github.com/mozilla-ai/any-llm) - Unified LLM interface by Mozilla
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp) - .NET SDK
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) - Rust LangChain ([example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs))
- [Agents-Flex for Java](https://github.com/agents-flex/agents-flex) - Java agent framework ([example](https://github.com/agents-flex/agents-flex/tree/main/agents-flex-llm/agents-flex-llm-ollama/src/test/java/com/agentsflex/llm/ollama))
- [Elixir LangChain](https://github.com/brainlid/langchain) - Elixir LangChain
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs) - Rust SDK
- [LangChain for .NET](https://github.com/tryAGI/LangChain) - .NET LangChain ([example](https://github.com/tryAGI/LangChain/blob/main/examples/LangChain.Samples.OpenAI/Program.cs))
- [chromem-go](https://github.com/philippgille/chromem-go) - Go vector database with Ollama embeddings ([example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama))
- [LangChainDart](https://github.com/davidmigloz/langchain_dart) - Dart LangChain
- [LlmTornado](https://github.com/lofcz/llmtornado) - Unified C# interface for multiple inference APIs
- [Ollama4j for Java](https://github.com/ollama4j/ollama4j) - Java SDK
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel) - Laravel integration
- [Ollama for Swift](https://github.com/mattt/ollama-swift) - Swift SDK
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/ollama/) and [LlamaIndexTS](https://ts.llamaindex.ai/modules/llms/available_llms/ollama) - Data framework for LLM apps
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md) - AI pipeline framework
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama) - Google AI framework
- [Ollama-hpp for C++](https://github.com/jmont-dev/ollama-hpp) - C++ SDK
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) - Julia LLM toolkit ([example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama))
- [Ollama for R - rollama](https://github.com/JBGruber/rollama) - R SDK
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama) - AI gateway
- [Testcontainers](https://testcontainers.com/modules/ollama/) - Container-based testing
- [LLPhant](https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama) - PHP AI framework
### Frameworks & Agents
- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT/blob/master/docs/content/platform/ollama.md) - Autonomous AI agent platform
- [crewAI](https://github.com/crewAIInc/crewAI) - Multi-agent orchestration framework
- [Strands Agents](https://github.com/strands-agents/sdk-python) - Model-driven agent building by AWS
- [Cheshire Cat](https://github.com/cheshire-cat-ai/core) - AI assistant framework
- [any-agent](https://github.com/mozilla-ai/any-agent) - Unified agent framework interface by Mozilla
- [Stakpak](https://github.com/stakpak/agent) - Open source DevOps agent
- [Hexabot](https://github.com/hexastack/hexabot) - Conversational AI builder
- [Neuro SAN](https://github.com/cognizant-ai-lab/neuro-san-studio) - Multi-agent orchestration ([docs](https://github.com/cognizant-ai-lab/neuro-san-studio/blob/main/docs/user_guide.md#ollama))
### RAG & Knowledge Bases
- [RAGFlow](https://github.com/infiniflow/ragflow) - RAG engine based on deep document understanding
- [R2R](https://github.com/SciPhi-AI/R2R) - Open-source RAG engine
- [MaxKB](https://github.com/1Panel-dev/MaxKB/) - Ready-to-use RAG chatbot
- [Minima](https://github.com/dmayboroda/minima) - On-premises or fully local RAG
- [Chipper](https://github.com/TilmanGriesel/chipper) - AI interface with Haystack RAG
- [ARGO](https://github.com/xark-argo/argo) - RAG and deep research on Mac/Windows/Linux
- [Archyve](https://github.com/nickthecook/archyve) - RAG-enabling document library
- [Casibase](https://casibase.org) - AI knowledge base with RAG and SSO
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) - Native client with RAG and multi-agent automation
### Bots & Messaging
- [LangBot](https://github.com/RockChinQ/LangBot) - Multi-platform messaging bots with agents and RAG
- [AstrBot](https://github.com/Soulter/AstrBot/) - Multi-platform chatbot with RAG and plugins
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) - TypeScript Discord bot
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram) - Telegram bot
- [LLM Telegram Bot](https://github.com/innightwolfsleep/llm_telegram_bot) - Telegram bot for roleplay
### Terminal & CLI
- [aichat](https://github.com/sigoden/aichat) - All-in-one LLM CLI with Shell Assistant, RAG, and AI tools
- [oterm](https://github.com/ggozad/oterm) - Terminal client for Ollama
- [gollama](https://github.com/sammcj/gollama) - Go-based model manager for Ollama
- [tlm](https://github.com/yusufcanb/tlm) - Local shell copilot
- [tenere](https://github.com/pythops/tenere) - TUI for LLMs
- [ParLlama](https://github.com/paulrobello/parllama) - TUI for Ollama
- [llm-ollama](https://github.com/taketwo/llm-ollama) - Plugin for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/)
- [ShellOracle](https://github.com/djcopley/ShellOracle) - Shell command suggestions
- [LLM-X](https://github.com/mrdjohnson/llm-x) - Progressive web app for LLMs
- [cmdh](https://github.com/pgibler/cmdh) - Natural language to shell commands
- [VT](https://github.com/vinhnx/vt.ai) - Minimal multimodal AI chat app
### Productivity & Apps
- [AppFlowy](https://github.com/AppFlowy-IO/AppFlowy) - AI collaborative workspace, self-hostable Notion alternative
- [Screenpipe](https://github.com/mediar-ai/screenpipe) - 24/7 screen and mic recording with AI-powered search
- [Vibe](https://github.com/thewh1teagle/vibe) - Transcribe and analyze meetings
- [Page Assist](https://github.com/n4ze3m/page-assist) - Chrome extension for AI-powered browsing
- [NativeMind](https://github.com/NativeMindBrowser/NativeMindExtension) - Private, on-device browser AI assistant
- [Ollama Fortress](https://github.com/ParisNeo/ollama_proxy_server) - Security proxy for Ollama
- [1Panel](https://github.com/1Panel-dev/1Panel/) - Web-based Linux server management
- [Writeopia](https://github.com/Writeopia/Writeopia) - Text editor with Ollama integration
- [QA-Pilot](https://github.com/reid41/QA-Pilot) - GitHub code repository understanding
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama) - Ollama in Raycast
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) - Painting app with AI integrations
- [Serene Pub](https://github.com/doolijb/serene-pub) - AI roleplaying app
- [Mayan EDMS](https://gitlab.com/mayan-edms/mayan-edms) - Document management with Ollama workflows
- [TagSpaces](https://www.tagspaces.org) - File management with [AI tagging](https://docs.tagspaces.org/ai/)
### Observability & Monitoring
- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) - Debug, evaluate, and monitor LLM applications
- [OpenLIT](https://github.com/openlit/openlit) - OpenTelemetry-native monitoring for Ollama and GPUs
- [Lunary](https://lunary.ai/docs/integrations/ollama) - LLM observability with analytics and PII masking
- [Langfuse](https://langfuse.com/docs/integrations/ollama) - Open source LLM observability
- [HoneyHive](https://docs.honeyhive.ai/integrations/ollama) - AI observability and evaluation for agents
- [MLflow Tracing](https://mlflow.org/docs/latest/llms/tracing/index.html#automatic-tracing) - Open source LLM observability
### Database & Embeddings
- [pgai](https://github.com/timescale/pgai) - PostgreSQL as a vector database ([guide](https://github.com/timescale/pgai/blob/main/docs/vectorizer-quick-start.md))
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) - Connect Ollama with 200+ data platforms
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) - Embeddable vector database for Go ([example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama))
- [Kangaroo](https://github.com/dbkangaroo/kangaroo) - AI-powered SQL client
### Infrastructure & Deployment
#### Cloud
- [Google Cloud](https://cloud.google.com/run/docs/tutorials/gpu-gemma2-with-ollama)
- [Fly.io](https://fly.io/docs/python/do-more/add-ollama/)
- [Koyeb](https://www.koyeb.com/deploy/ollama)
- [Harbor](https://github.com/av/harbor) - Containerized LLM toolkit with Ollama as default backend
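The official Docker image mentioned above can also be used for self-hosted deployment; a minimal CPU-only sketch (volume and container names are illustrative):
```shell
# run the official image, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# run a model inside the container
docker exec -it ollama ollama run gemma3
```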
### Tutorial
- [handy-ollama](https://github.com/datawhalechina/handy-ollama) (Chinese tutorial for Ollama by [Datawhale](https://github.com/datawhalechina), China's largest open source AI learning community)
### Terminal
- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [neollama](https://github.com/paradoxical-dev/neollama) UI client for interacting with models from within Neovim
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [ooo](https://github.com/npahlfer/ooo)
- [shell-pilot](https://github.com/reid41/shell-pilot) (Interact with models via pure shell scripts on Linux or macOS)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
- [ShellOracle](https://github.com/djcopley/ShellOracle)
- [tlm](https://github.com/yusufcanb/tlm)
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
- [gollama](https://github.com/sammcj/gollama)
- [ParLlama](https://github.com/paulrobello/parllama)
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
- [Ollama Mixture of Experts (MOE) in 50 lines of code](https://github.com/rapidarchitect/ollama_moe)
- [vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor
- [x-cmd ollama](https://x-cmd.com/mod/ollama)
- [bb7](https://github.com/drunkwcodes/bb7)
- [SwollamaCLI](https://github.com/marcusziade/Swollama) bundled with the Swollama Swift package. [Demo](https://github.com/marcusziade/Swollama?tab=readme-ov-file#cli-usage)
- [aichat](https://github.com/sigoden/aichat) All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
- [PowershAI](https://github.com/rrg92/powershai) PowerShell module that brings AI to terminal on Windows, including support for Ollama
- [DeepShell](https://github.com/Abyss-c0re/deepshell) Your self-hosted AI assistant. Interactive Shell, Files and Folders analysis.
- [orbiton](https://github.com/xyproto/orbiton) Configuration-free text editor and IDE with support for tab completion with Ollama.
- [orca-cli](https://github.com/molbal/orca-cli) Ollama Registry CLI Application - Browse, pull, and download models from Ollama Registry in your terminal.
- [GGUF-to-Ollama](https://github.com/jonathanhecl/gguf-to-ollama) - Importing GGUF to Ollama made easy (multiplatform)
- [AWS-Strands-With-Ollama](https://github.com/rapidarchitect/ollama_strands) - AWS Strands Agents with Ollama Examples
- [ollama-multirun](https://github.com/attogram/ollama-multirun) - A bash shell script to run a single prompt against any or all of your locally installed ollama models, saving the output and performance statistics as easily navigable web pages. ([Demo](https://attogram.github.io/ai_test_zone/))
- [ollama-bash-toolshed](https://github.com/attogram/ollama-bash-toolshed) - Bash scripts to chat with tool-using models. Add new tools to your shed with ease. Runs on Ollama.
- [hle-eval-ollama](https://github.com/mags0ft/hle-eval-ollama) - Runs benchmarks like "Humanity's Last Exam" (HLE) on your favorite local Ollama models and evaluates the quality of their responses
- [VT Code](https://github.com/vinhnx/vtcode) - VT Code is a Rust-based terminal coding agent with semantic code intelligence via Tree-sitter. Ollama integration for running local/cloud models with configurable endpoints.
### Apple Vision Pro
- [SwiftChat](https://github.com/aws-samples/swift-chat) (Cross-platform AI chat app supporting Apple Vision Pro via "Designed for iPad")
- [Enchanted](https://github.com/AugustDev/enchanted)
### Database
- [pgai](https://github.com/timescale/pgai) - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
- [Get started guide](https://github.com/timescale/pgai/blob/main/docs/vectorizer-quick-start.md)
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)
- [Kangaroo](https://github.com/dbkangaroo/kangaroo) (AI-powered SQL client and admin tool for popular databases)
### Package managers
- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
- [Homebrew](https://formulae.brew.sh/formula/ollama)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
- [Nix package](https://search.nixos.org/packages?show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
- [Flox](https://flox.dev/blog/ollama-part-one)
### Libraries
- [LangChain](https://python.langchain.com/docs/integrations/chat/ollama/) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
- [crewAI](https://github.com/crewAIInc/crewAI)
- [Yacana](https://remembersoftwares.github.io/yacana/) (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
- [Strands Agents](https://github.com/strands-agents/sdk-python) (A model-driven approach to building AI agents in just a few lines of code)
- [Spring AI](https://github.com/spring-projects/spring-ai) with [reference](https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html) and [example](https://github.com/tzolov/ollama-tools)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
- [LangChain for .NET](https://github.com/tryAGI/LangChain) with [example](https://github.com/tryAGI/LangChain/blob/main/examples/LangChain.Samples.OpenAI/Program.cs)
- [LLPhant](https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama)
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/ollama/) and [LlamaIndexTS](https://ts.llamaindex.ai/modules/llms/available_llms/ollama)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/crmne/ruby_llm)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama-hpp for C++](https://github.com/jmont-dev/ollama-hpp)
- [Ollama4j for Java](https://github.com/ollama4j/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama for R - ollama-r](https://github.com/hauselin/ollama-r)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)
- [Ollama Connector for SAP ABAP](https://github.com/b-tocs/abap_btocs_ollama)
- [Testcontainers](https://testcontainers.com/modules/ollama/)
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
- [LlamaScript](https://github.com/Project-Llama/llamascript)
- [llm-axe](https://github.com/emirsahin1/llm-axe) (Python Toolkit for Building LLM Powered Apps)
- [Gollm](https://docs.gollm.co/examples/ollama-example)
- [Gollama for Golang](https://github.com/jonathanhecl/gollama)
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)
- [Agents-Flex for Java](https://github.com/agents-flex/agents-flex) with [example](https://github.com/agents-flex/agents-flex/tree/main/agents-flex-llm/agents-flex-llm-ollama/src/test/java/com/agentsflex/llm/ollama)
- [Parakeet](https://github.com/parakeet-nest/parakeet) is a Go library that simplifies the development of small generative AI applications with Ollama.
- [Haverscript](https://github.com/andygill/haverscript) with [examples](https://github.com/andygill/haverscript/tree/main/examples)
- [Ollama for Swift](https://github.com/mattt/ollama-swift)
- [Swollama for Swift](https://github.com/guitaripod/Swollama) with [DocC](https://guitaripod.github.io/Swollama/documentation/swollama)
- [GoLamify](https://github.com/prasad89/golamify)
- [Ollama for Haskell](https://github.com/tusharad/ollama-haskell)
- [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A TypeScript/JavaScript library providing access to different LLMs through a unified API)
- [LlmTornado](https://github.com/lofcz/llmtornado) (C# library providing a unified interface for major FOSS & Commercial inference APIs)
- [Ollama for Zig](https://github.com/dravenk/ollama-zig)
- [Abso](https://github.com/lunary-ai/abso) (OpenAI-compatible TypeScript SDK for any LLM provider)
- [Nichey](https://github.com/goodreasonai/nichey) is a Python package for generating custom wikis for your research topic
- [Ollama for D](https://github.com/kassane/ollama-d)
- [OllamaPlusPlus](https://github.com/HardCodeDev777/OllamaPlusPlus) (Very simple C++ library for Ollama)
- [any-llm](https://github.com/mozilla-ai/any-llm) (A single interface to use different LLM providers by [mozilla.ai](https://www.mozilla.ai/))
- [any-agent](https://github.com/mozilla-ai/any-agent) (A single interface to use and evaluate different agent frameworks by [mozilla.ai](https://www.mozilla.ai/))
- [Neuro SAN](https://github.com/cognizant-ai-lab/neuro-san-studio) (Data-driven multi-agent orchestration framework) with [example](https://github.com/cognizant-ai-lab/neuro-san-studio/blob/main/docs/user_guide.md#ollama)
- [achatbot-go](https://github.com/ai-bot-pro/achatbot-go) (A multimodal (text/audio/image) chatbot)
- [Ollama Bash Lib](https://github.com/attogram/ollama-bash-lib) - A Bash Library for Ollama. Run LLM prompts straight from your shell, and more
### Mobile
- [SwiftChat](https://github.com/aws-samples/swift-chat) (Lightning-fast Cross-platform AI chat app with native UI for Android, iOS, and iPad)
- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- [Ollama Android Chat](https://github.com/sunshine0523/OllamaServer) (No need for Termux, start the Ollama service with one click on an Android device)
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
### Extensions & Plugins
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Vibe](https://github.com/thewh1teagle/vibe) (Transcribe and analyze meetings with Ollama)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Cliobot](https://github.com/herval/cliobot) (Telegram bot with Ollama support)
- [Copilot for Obsidian plugin](https://github.com/logancyang/obsidian-copilot)
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
- [Plasmoid Ollama Control](https://github.com/imoize/plasmoid-ollamacontrol) (KDE Plasma extension that allows you to quickly manage/control Ollama models)
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in backend)
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- [ChatGPTBox: All-in-one browser extension](https://github.com/josStorer/chatGPTBox) with [Integrating Tutorial](https://github.com/josStorer/chatGPTBox/issues/616#issuecomment-1975186467)
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) Chat/moderation bot written in Python. Uses Ollama to create personalities.
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install the Ollama client & models on any OS for apps that depend on the Ollama server)
- [Terraform AWS Ollama & Open WebUI](https://github.com/xuyangbocn/terraform-aws-self-host-llm) (A Terraform module to deploy on AWS a ready-to-use Ollama service, together with its front-end Open WebUI service.)
- [node-red-contrib-ollama](https://github.com/jakubburkiewicz/node-red-contrib-ollama)
- [Local AI Helper](https://github.com/ivostoykov/localAI) (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints. Includes secure storage for user prompts.)
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
- [QodeAssist](https://github.com/Palm1r/QodeAssist) (AI-powered coding assistant plugin for Qt Creator)
- [Obsidian Quiz Generator plugin](https://github.com/ECuiDev/obsidian-quiz-generator)
- [AI Summary Helper plugin](https://github.com/philffm/ai-summary-helper)
- [TextCraft](https://github.com/suncloudsmoon/TextCraft) (Copilot in Word alternative using Ollama)
- [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) (Alfred Workflow)
- [TextLLaMA](https://github.com/adarshM84/TextLLaMA) A Chrome Extension that helps you write emails, correct grammar, and translate into any language
- [Simple-Discord-AI](https://github.com/zyphixor/simple-discord-ai)
- [LLM Telegram Bot](https://github.com/innightwolfsleep/llm_telegram_bot) (Telegram bot, primarily for RP. Oobabooga-like buttons, [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) API integration, etc.)
- [mcp-llm](https://github.com/sammcj/mcp-llm) (MCP Server to allow LLMs to call other LLMs)
- [SimpleOllamaUnity](https://github.com/HardCodeDev777/SimpleOllamaUnity) (Unity Engine extension for communicating with Ollama in a few lines of code. Also works at runtime)
- [UnityCodeLama](https://github.com/HardCodeDev777/UnityCodeLama) (Unity Editor tool to analyze scripts via Ollama)
- [NativeMind](https://github.com/NativeMindBrowser/NativeMindExtension) (Private, on-device AI Assistant, no cloud dependencies)
- [GMAI - Gradle Managed AI](https://gmai.premex.se/) (Gradle plugin for automated Ollama lifecycle management during build phases)
- [NOMYO Router](https://github.com/nomyo-ai/nomyo-router) (A transparent Ollama proxy with model-deployment-aware routing that auto-manages multiple Ollama instances in a given network)

### Supported backends

- [llama.cpp](https://github.com/ggml-org/llama.cpp) project founded by Georgi Gerganov.

### Observability

- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native integration with Ollama.
- [Lunary](https://lunary.ai/docs/integrations/ollama) is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt templates management, PII masking, and comprehensive agent tracing.
- [OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
- [HoneyHive](https://docs.honeyhive.ai/integrations/ollama) is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
- [Langfuse](https://langfuse.com/docs/integrations/ollama) is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.
- [MLflow Tracing](https://mlflow.org/docs/latest/llms/tracing/index.html#automatic-tracing) is an open source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.

### Security

- [Ollama Fortress](https://github.com/ParisNeo/ollama_proxy_server)
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)

View File

@@ -55,6 +55,43 @@ import (
"github.com/ollama/ollama/x/imagegen"
)
func init() {
// Override default selectors to use Bubbletea TUI instead of raw terminal I/O.
config.DefaultSingleSelector = func(title string, items []config.ModelItem) (string, error) {
tuiItems := tui.ReorderItems(tui.ConvertItems(items))
result, err := tui.SelectSingle(title, tuiItems)
if errors.Is(err, tui.ErrCancelled) {
return "", config.ErrCancelled
}
return result, err
}
config.DefaultMultiSelector = func(title string, items []config.ModelItem, preChecked []string) ([]string, error) {
tuiItems := tui.ReorderItems(tui.ConvertItems(items))
result, err := tui.SelectMultiple(title, tuiItems, preChecked)
if errors.Is(err, tui.ErrCancelled) {
return nil, config.ErrCancelled
}
return result, err
}
config.DefaultSignIn = func(modelName, signInURL string) (string, error) {
userName, err := tui.RunSignIn(modelName, signInURL)
if errors.Is(err, tui.ErrCancelled) {
return "", config.ErrCancelled
}
return userName, err
}
config.DefaultConfirmPrompt = func(prompt string) (bool, error) {
ok, err := tui.RunConfirm(prompt)
if errors.Is(err, tui.ErrCancelled) {
return false, config.ErrCancelled
}
return ok, err
}
}
const ConnectInstructions = "If your browser did not open, navigate to:\n %s\n\n"
// ensureThinkingSupport emits a warning if the model does not advertise thinking support
@@ -1848,18 +1885,15 @@ func runInteractiveTUI(cmd *cobra.Command) {
return
}
// errSelectionCancelled is returned when user cancels model selection
errSelectionCancelled := errors.New("cancelled")
// Selector adapters for tui
singleSelector := func(title string, items []config.ModelItem) (string, error) {
tuiItems := make([]tui.SelectItem, len(items))
for i, item := range items {
tuiItems[i] = tui.SelectItem{Name: item.Name, Description: item.Description}
tuiItems[i] = tui.SelectItem{Name: item.Name, Description: item.Description, Recommended: item.Recommended}
}
result, err := tui.SelectSingle(title, tuiItems)
if errors.Is(err, tui.ErrCancelled) {
return "", errSelectionCancelled
return "", config.ErrCancelled
}
return result, err
}
@@ -1867,11 +1901,11 @@ func runInteractiveTUI(cmd *cobra.Command) {
multiSelector := func(title string, items []config.ModelItem, preChecked []string) ([]string, error) {
tuiItems := make([]tui.SelectItem, len(items))
for i, item := range items {
tuiItems[i] = tui.SelectItem{Name: item.Name, Description: item.Description}
tuiItems[i] = tui.SelectItem{Name: item.Name, Description: item.Description, Recommended: item.Recommended}
}
result, err := tui.SelectMultiple(title, tuiItems, preChecked)
if errors.Is(err, tui.ErrCancelled) {
return nil, errSelectionCancelled
return nil, config.ErrCancelled
}
return result, err
}
@@ -1884,6 +1918,18 @@ func runInteractiveTUI(cmd *cobra.Command) {
}
runModel := func(modelName string) {
client, err := api.ClientFromEnvironment()
if err != nil {
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
return
}
if err := config.ShowOrPull(cmd.Context(), client, modelName); err != nil {
if errors.Is(err, config.ErrCancelled) {
return
}
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
return
}
_ = config.SetLastModel(modelName)
opts := runOptions{
Model: modelName,
@@ -1905,7 +1951,7 @@ func runInteractiveTUI(cmd *cobra.Command) {
configuredModel := config.IntegrationModel(name)
if configuredModel == "" || !config.ModelExists(cmd.Context(), configuredModel) {
err := config.ConfigureIntegrationWithSelectors(cmd.Context(), name, singleSelector, multiSelector)
if errors.Is(err, errSelectionCancelled) {
if errors.Is(err, config.ErrCancelled) {
return false // Return to main menu
}
if err != nil {
@@ -1925,13 +1971,11 @@ func runInteractiveTUI(cmd *cobra.Command) {
return
case tui.SelectionRunModel:
_ = config.SetLastSelection("run")
// Run last model directly if configured and still exists
if modelName := config.LastModel(); modelName != "" && config.ModelExists(cmd.Context(), modelName) {
if modelName := config.LastModel(); modelName != "" {
runModel(modelName)
} else {
// No last model or model no longer exists, show picker
modelName, err := config.SelectModelWithSelector(cmd.Context(), singleSelector)
if errors.Is(err, errSelectionCancelled) {
if errors.Is(err, config.ErrCancelled) {
continue // Return to main menu
}
if err != nil {
@@ -1947,7 +1991,7 @@ func runInteractiveTUI(cmd *cobra.Command) {
if modelName == "" {
var err error
modelName, err = config.SelectModelWithSelector(cmd.Context(), singleSelector)
if errors.Is(err, errSelectionCancelled) {
if errors.Is(err, config.ErrCancelled) {
continue // Return to main menu
}
if err != nil {
@@ -1963,9 +2007,17 @@ func runInteractiveTUI(cmd *cobra.Command) {
}
case tui.SelectionChangeIntegration:
_ = config.SetLastSelection(result.Integration)
// Use model from modal if selected, otherwise show picker
if result.Model != "" {
// Model already selected from modal - save and launch
if len(result.Models) > 0 {
// Multi-select from modal (Editor integrations)
if err := config.SaveAndEditIntegration(result.Integration, result.Models); err != nil {
fmt.Fprintf(os.Stderr, "Error configuring %s: %v\n", result.Integration, err)
continue
}
if err := config.LaunchIntegrationWithModel(result.Integration, result.Models[0]); err != nil {
fmt.Fprintf(os.Stderr, "Error launching %s: %v\n", result.Integration, err)
}
} else if result.Model != "" {
// Single-select from modal - save and launch
if err := config.SaveIntegrationModel(result.Integration, result.Model); err != nil {
fmt.Fprintf(os.Stderr, "Error saving config: %v\n", err)
continue
@@ -1975,7 +2027,7 @@ func runInteractiveTUI(cmd *cobra.Command) {
}
} else {
err := config.ConfigureIntegrationWithSelectors(cmd.Context(), result.Integration, singleSelector, multiSelector)
if errors.Is(err, errSelectionCancelled) {
if errors.Is(err, config.ErrCancelled) {
continue // Return to main menu
}
if err != nil {
@@ -2210,7 +2262,7 @@ func NewCLI() *cobra.Command {
switch cmd {
case runCmd:
imagegen.AppendFlagsDocs(cmd)
appendEnvDocs(cmd, []envconfig.EnvVar{envVars["OLLAMA_HOST"], envVars["OLLAMA_NOHISTORY"]})
appendEnvDocs(cmd, []envconfig.EnvVar{envVars["OLLAMA_EDITOR"], envVars["OLLAMA_HOST"], envVars["OLLAMA_NOHISTORY"]})
case serveCmd:
appendEnvDocs(cmd, []envconfig.EnvVar{
envVars["OLLAMA_DEBUG"],

View File

@@ -126,8 +126,7 @@ func (c *Claude) ConfigureAliases(ctx context.Context, model string, existingAli
fmt.Fprintf(os.Stderr, "\n%sModel Configuration%s\n\n", ansiBold, ansiReset)
if aliases["primary"] == "" || force {
primary, err := selectPrompt("Select model:", items)
fmt.Fprintf(os.Stderr, "\033[3A\033[J")
primary, err := DefaultSingleSelector("Select model:", items)
if err != nil {
return nil, false, err
}

View File

@@ -160,6 +160,15 @@ func IntegrationModel(appName string) string {
return ic.Models[0]
}
// IntegrationModels returns all configured models for an integration, or nil.
func IntegrationModels(appName string) []string {
ic, err := loadIntegration(appName)
if err != nil || len(ic.Models) == 0 {
return nil
}
return ic.Models
}
// LastModel returns the last model that was run, or empty string if none.
func LastModel() string {
cfg, err := load()

View File

@@ -4,12 +4,9 @@ import (
"context"
"errors"
"fmt"
"io"
"maps"
"net/http"
"os"
"os/exec"
"path/filepath"
"runtime"
"slices"
"strings"
@@ -66,30 +63,104 @@ var integrations = map[string]Runner{
// recommendedModels are shown when the user has no models or as suggestions.
// Order matters: cloud models first, then local models.
var recommendedModels = []ModelItem{
{Name: "glm-4.7-flash", Description: "Recommended (requires ~25GB VRAM)"},
{Name: "qwen3:8b", Description: "Recommended (requires ~11GB VRAM)"},
{Name: "glm-4.7:cloud", Description: "Recommended"},
{Name: "kimi-k2.5:cloud", Description: "Recommended"},
{Name: "minimax-m2.5:cloud", Description: "Fast, efficient coding and real-world productivity", Recommended: true},
{Name: "glm-5:cloud", Description: "Reasoning and code generation", Recommended: true},
{Name: "kimi-k2.5:cloud", Description: "Multimodal reasoning with subagents", Recommended: true},
{Name: "glm-4.7-flash", Description: "Reasoning and code generation locally", Recommended: true},
{Name: "qwen3:8b", Description: "Efficient all-purpose assistant", Recommended: true},
}
// cloudModelLimits maps cloud model base names to their token limits.
// TODO(parthsareen): grab context/output limits from model info instead of hardcoding
var cloudModelLimits = map[string]cloudModelLimit{
"minimax-m2.5": {Context: 204_800, Output: 128_000},
"cogito-2.1:671b": {Context: 163_840, Output: 65_536},
"deepseek-v3.1:671b": {Context: 163_840, Output: 163_840},
"deepseek-v3.2": {Context: 163_840, Output: 65_536},
"glm-4.6": {Context: 202_752, Output: 131_072},
"glm-4.7": {Context: 202_752, Output: 131_072},
"gpt-oss:120b": {Context: 131_072, Output: 131_072},
"gpt-oss:20b": {Context: 131_072, Output: 131_072},
"kimi-k2:1t": {Context: 262_144, Output: 262_144},
"kimi-k2.5": {Context: 262_144, Output: 262_144},
"kimi-k2-thinking": {Context: 262_144, Output: 262_144},
"nemotron-3-nano:30b": {Context: 1_048_576, Output: 131_072},
"qwen3-coder:480b": {Context: 262_144, Output: 65_536},
"qwen3-coder-next": {Context: 262_144, Output: 32_768},
"qwen3-next:80b": {Context: 262_144, Output: 32_768},
}
// recommendedVRAM maps local recommended models to their approximate VRAM requirement.
var recommendedVRAM = map[string]string{
"glm-4.7-flash": "~25GB",
"qwen3:8b": "~11GB",
}
// integrationAliases are hidden from the interactive selector but work as CLI arguments.
var integrationAliases = map[string]bool{
"clawdbot": true,
"moltbot": true,
"pi": true,
}
// integrationInstallURLs maps integration names to their install script URLs.
var integrationInstallURLs = map[string]string{
"claude": "https://claude.ai/install.sh",
"openclaw": "https://openclaw.ai/install.sh",
"droid": "https://app.factory.ai/cli",
"opencode": "https://opencode.ai/install",
// integrationInstallHints maps integration names to install URLs.
var integrationInstallHints = map[string]string{
"claude": "https://code.claude.com/docs/en/quickstart",
"openclaw": "https://docs.openclaw.ai",
"codex": "https://developers.openai.com/codex/cli/",
"droid": "https://docs.factory.ai/cli/getting-started/quickstart",
"opencode": "https://opencode.ai",
}
// CanInstallIntegration returns true if we have an install script for this integration.
func CanInstallIntegration(name string) bool {
_, ok := integrationInstallURLs[name]
return ok
// hyperlink wraps text in an OSC 8 terminal hyperlink so it is cmd+clickable.
func hyperlink(url, text string) string {
return fmt.Sprintf("\033]8;;%s\033\\%s\033]8;;\033\\", url, text)
}
// IntegrationInfo contains display information about a registered integration.
type IntegrationInfo struct {
Name string // registry key, e.g. "claude"
DisplayName string // human-readable, e.g. "Claude Code"
Description string // short description, e.g. "Anthropic's agentic coding tool"
}
// integrationDescriptions maps integration names to short descriptions.
var integrationDescriptions = map[string]string{
"claude": "Anthropic's coding tool with subagents",
"codex": "OpenAI's open-source coding agent",
"openclaw": "Personal AI with 100+ skills",
"droid": "Factory's coding agent across terminal and IDEs",
"opencode": "Anomaly's open-source coding agent",
}
// ListIntegrationInfos returns all non-alias registered integrations, sorted by name.
func ListIntegrationInfos() []IntegrationInfo {
var result []IntegrationInfo
for name, r := range integrations {
if integrationAliases[name] {
continue
}
result = append(result, IntegrationInfo{
Name: name,
DisplayName: r.String(),
Description: integrationDescriptions[name],
})
}
slices.SortFunc(result, func(a, b IntegrationInfo) int {
return strings.Compare(a.Name, b.Name)
})
return result
}
// IntegrationInstallHint returns a user-friendly install hint for the given integration,
// or an empty string if none is available. The URL is wrapped in an OSC 8 hyperlink
// so it is cmd+clickable in supported terminals.
func IntegrationInstallHint(name string) string {
url := integrationInstallHints[name]
if url == "" {
return ""
}
return "Install from " + hyperlink(url, url)
}
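For context on the OSC 8 convention used by `hyperlink` above: `ESC ] 8 ; ; URL ESC \` opens a link region, and the same sequence with an empty URL closes it. A minimal standalone sketch of the emitted bytes (the demo `main` and the sample URL are illustrative, not part of this change):

```go
package main

import "fmt"

// hyperlink mirrors the helper above: OSC 8 open with the URL, the visible
// text, then an OSC 8 close with an empty URL.
func hyperlink(url, text string) string {
	return fmt.Sprintf("\033]8;;%s\033\\%s\033]8;;\033\\", url, text)
}

func main() {
	// Terminals with OSC 8 support render this as clickable text;
	// others fall back to printing the plain text.
	fmt.Println("Install from " + hyperlink("https://opencode.ai", "https://opencode.ai"))
}
```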
// IsIntegrationInstalled checks if an integration binary is installed.
@@ -121,48 +192,15 @@ func IsIntegrationInstalled(name string) bool {
}
}
// InstallIntegration downloads and runs the install script for an integration.
func InstallIntegration(name string) error {
url, ok := integrationInstallURLs[name]
// IsEditorIntegration returns true if the named integration uses multi-model
// selection (implements the Editor interface).
func IsEditorIntegration(name string) bool {
r, ok := integrations[strings.ToLower(name)]
if !ok {
return fmt.Errorf("no install script available for %s", name)
return false
}
// Download the install script
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("failed to download install script: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("failed to download install script: HTTP %d", resp.StatusCode)
}
script, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read install script: %w", err)
}
// Create a temporary file for the script
tmpDir := os.TempDir()
scriptPath := filepath.Join(tmpDir, fmt.Sprintf("install-%s.sh", name))
if err := os.WriteFile(scriptPath, script, 0o700); err != nil {
return fmt.Errorf("failed to write install script: %w", err)
}
defer os.Remove(scriptPath)
// Execute the script with bash
cmd := exec.Command("bash", scriptPath)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return fmt.Errorf("install script failed: %w", err)
}
return nil
_, isEditor := r.(Editor)
return isEditor
}
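The `Editor` interface itself is not shown in this diff; based on the type assertion here and the `editor.Edit(models)` call in `SaveAndEditIntegration` below, a plausible minimal shape would be:

```go
// Sketch only: inferred from usage in this diff, not the actual definition.
type Editor interface {
	// Edit writes the integration's config files for the selected models.
	Edit(models []string) error
}
```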
// SelectModel lets the user select a model to run.
@@ -170,6 +208,7 @@ func InstallIntegration(name string) error {
type ModelItem struct {
Name string
Description string
Recommended bool
}
// SingleSelector is a function type for single item selection.
@@ -207,27 +246,6 @@ func SelectModelWithSelector(ctx context.Context, selector SingleSelector) (stri
return "", fmt.Errorf("no models available, run 'ollama pull <model>' first")
}
// Sort with last model first, then existing models, then recommendations
slices.SortStableFunc(items, func(a, b ModelItem) int {
aIsLast := a.Name == lastModel
bIsLast := b.Name == lastModel
if aIsLast != bIsLast {
if aIsLast {
return -1
}
return 1
}
aExists := existingModels[a.Name]
bExists := existingModels[b.Name]
if aExists != bExists {
if aExists {
return -1
}
return 1
}
return strings.Compare(strings.ToLower(a.Name), strings.ToLower(b.Name))
})
selected, err := selector("Select model to run:", items)
if err != nil {
return "", err
@@ -235,15 +253,22 @@ func SelectModelWithSelector(ctx context.Context, selector SingleSelector) (stri
// If the selected model isn't installed, pull it first
if !existingModels[selected] {
msg := fmt.Sprintf("Download %s?", selected)
if ok, err := confirmPrompt(msg); err != nil {
return "", err
} else if !ok {
return "", errCancelled
}
fmt.Fprintf(os.Stderr, "\n")
if err := pullModel(ctx, client, selected); err != nil {
return "", fmt.Errorf("failed to pull %s: %w", selected, err)
if cloudModels[selected] {
// Cloud models only pull a small manifest; no confirmation needed
if err := pullModel(ctx, client, selected); err != nil {
return "", fmt.Errorf("failed to pull %s: %w", selected, err)
}
} else {
msg := fmt.Sprintf("Download %s?", selected)
if ok, err := confirmPrompt(msg); err != nil {
return "", err
} else if !ok {
return "", errCancelled
}
fmt.Fprintf(os.Stderr, "\n")
if err := pullModel(ctx, client, selected); err != nil {
return "", fmt.Errorf("failed to pull %s: %w", selected, err)
}
}
}
@@ -309,32 +334,30 @@ func SelectModelWithSelector(ctx context.Context, selector SingleSelector) (stri
}
func SelectModel(ctx context.Context) (string, error) {
return SelectModelWithSelector(ctx, defaultSingleSelector)
return SelectModelWithSelector(ctx, DefaultSingleSelector)
}
func defaultSingleSelector(title string, items []ModelItem) (string, error) {
selectItems := make([]selectItem, len(items))
for i, item := range items {
selectItems[i] = selectItem(item)
}
return selectPrompt(title, selectItems)
}
// DefaultSingleSelector is the default single-select implementation.
var DefaultSingleSelector SingleSelector
func defaultMultiSelector(title string, items []ModelItem, preChecked []string) ([]string, error) {
selectItems := make([]selectItem, len(items))
for i, item := range items {
selectItems[i] = selectItem(item)
}
return multiSelectPrompt(title, selectItems, preChecked)
}
// DefaultMultiSelector is the default multi-select implementation.
var DefaultMultiSelector MultiSelector
// DefaultSignIn provides a TUI-based sign-in flow.
// When set, ensureAuth uses it instead of plain text prompts.
// Returns the signed-in username or an error.
var DefaultSignIn func(modelName, signInURL string) (string, error)
func selectIntegration() (string, error) {
if DefaultSingleSelector == nil {
return "", fmt.Errorf("no selector configured")
}
if len(integrations) == 0 {
return "", fmt.Errorf("no integrations available")
}
names := slices.Sorted(maps.Keys(integrations))
var items []selectItem
var items []ModelItem
for _, name := range names {
if integrationAliases[name] {
continue
@@ -344,10 +367,10 @@ func selectIntegration() (string, error) {
if conn, err := loadIntegration(name); err == nil && len(conn.Models) > 0 {
description = fmt.Sprintf("%s (%s)", r.String(), conn.Models[0])
}
items = append(items, selectItem{Name: name, Description: description})
items = append(items, ModelItem{Name: name, Description: description})
}
return selectPrompt("Select integration:", items)
return DefaultSingleSelector("Select integration:", items)
}
// selectModelsWithSelectors lets the user select models for an integration using provided selectors.
@@ -448,11 +471,17 @@ func pullIfNeeded(ctx context.Context, client *api.Client, existingModels map[st
return nil
}
// showOrPull checks if a model exists via client.Show and offers to pull it if not found.
func showOrPull(ctx context.Context, client *api.Client, model string) error {
// TODO(parthsareen): pull this out to tui package
// ShowOrPull checks if a model exists via client.Show and offers to pull it if not found.
func ShowOrPull(ctx context.Context, client *api.Client, model string) error {
if _, err := client.Show(ctx, &api.ShowRequest{Model: model}); err == nil {
return nil
}
// Cloud models only pull a small manifest; skip the download confirmation
// TODO(parthsareen): consolidate with cloud config changes
if strings.HasSuffix(model, "cloud") {
return pullModel(ctx, client, model)
}
if ok, err := confirmPrompt(fmt.Sprintf("Download %s?", model)); err != nil {
return err
} else if !ok {
@@ -462,7 +491,7 @@ func showOrPull(ctx context.Context, client *api.Client, model string) error {
return pullModel(ctx, client, model)
}
func listModels(ctx context.Context) ([]selectItem, map[string]bool, map[string]bool, *api.Client, error) {
func listModels(ctx context.Context) ([]ModelItem, map[string]bool, map[string]bool, *api.Client, error) {
client, err := api.ClientFromEnvironment()
if err != nil {
return nil, nil, nil, nil, err
@@ -481,20 +510,26 @@ func listModels(ctx context.Context) ([]selectItem, map[string]bool, map[string]
})
}
modelItems, _, existingModels, cloudModels := buildModelList(existing, nil, "")
items, _, existingModels, cloudModels := buildModelList(existing, nil, "")
if len(modelItems) == 0 {
if len(items) == 0 {
return nil, nil, nil, nil, fmt.Errorf("no models available, run 'ollama pull <model>' first")
}
items := make([]selectItem, len(modelItems))
for i, mi := range modelItems {
items[i] = selectItem(mi)
}
return items, existingModels, cloudModels, client, nil
}
func OpenBrowser(url string) {
switch runtime.GOOS {
case "darwin":
_ = exec.Command("open", url).Start()
case "linux":
_ = exec.Command("xdg-open", url).Start()
case "windows":
_ = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
}
}
func ensureAuth(ctx context.Context, client *api.Client, cloudModels map[string]bool, selected []string) error {
var selectedCloudModels []string
for _, m := range selected {
@@ -517,6 +552,16 @@ func ensureAuth(ctx context.Context, client *api.Client, cloudModels map[string]
}
modelList := strings.Join(selectedCloudModels, ", ")
if DefaultSignIn != nil {
_, err := DefaultSignIn(modelList, aErr.SigninURL)
if err != nil {
return fmt.Errorf("%s requires sign in", modelList)
}
return nil
}
// Fallback: plain text sign-in flow
yes, err := confirmPrompt(fmt.Sprintf("sign in to use %s?", modelList))
if err != nil || !yes {
return fmt.Errorf("%s requires sign in", modelList)
@@ -524,14 +569,7 @@ func ensureAuth(ctx context.Context, client *api.Client, cloudModels map[string]
fmt.Fprintf(os.Stderr, "\nTo sign in, navigate to:\n %s\n\n", aErr.SigninURL)
switch runtime.GOOS {
case "darwin":
_ = exec.Command("open", aErr.SigninURL).Start()
case "linux":
_ = exec.Command("xdg-open", aErr.SigninURL).Start()
case "windows":
_ = exec.Command("rundll32", "url.dll,FileProtocolHandler", aErr.SigninURL).Start()
}
OpenBrowser(aErr.SigninURL)
spinnerFrames := []string{"|", "/", "-", "\\"}
frame := 0
@@ -564,7 +602,7 @@ func ensureAuth(ctx context.Context, client *api.Client, cloudModels map[string]
// selectModels lets the user select models for an integration using default selectors.
func selectModels(ctx context.Context, name, current string) ([]string, error) {
return selectModelsWithSelectors(ctx, name, current, defaultSingleSelector, defaultMultiSelector)
return selectModelsWithSelectors(ctx, name, current, DefaultSingleSelector, DefaultMultiSelector)
}
func runIntegration(name, modelName string, args []string) error {
@@ -607,8 +645,15 @@ func LaunchIntegration(name string) error {
}
// Try to use saved config
if config, err := loadIntegration(name); err == nil && len(config.Models) > 0 {
return runIntegration(name, config.Models[0], nil)
if ic, err := loadIntegration(name); err == nil && len(ic.Models) > 0 {
client, err := api.ClientFromEnvironment()
if err != nil {
return err
}
if err := ShowOrPull(context.Background(), client, ic.Models[0]); err != nil {
return err
}
return runIntegration(name, ic.Models[0], nil)
}
// No saved config - prompt user to run setup
@@ -617,6 +662,13 @@ func LaunchIntegration(name string) error {
// LaunchIntegrationWithModel launches the named integration with the specified model.
func LaunchIntegrationWithModel(name, modelName string) error {
client, err := api.ClientFromEnvironment()
if err != nil {
return err
}
if err := ShowOrPull(context.Background(), client, modelName); err != nil {
return err
}
return runIntegration(name, modelName, nil)
}
@@ -639,6 +691,24 @@ func SaveIntegrationModel(name, modelName string) error {
return saveIntegration(name, models)
}
// SaveAndEditIntegration saves the models for an Editor integration and runs its Edit method
// to write the integration's config files.
func SaveAndEditIntegration(name string, models []string) error {
r, ok := integrations[strings.ToLower(name)]
if !ok {
return fmt.Errorf("unknown integration: %s", name)
}
if err := saveIntegration(name, models); err != nil {
return fmt.Errorf("failed to save: %w", err)
}
if editor, isEditor := r.(Editor); isEditor {
if err := editor.Edit(models); err != nil {
return fmt.Errorf("setup failed: %w", err)
}
}
return nil
}
// ConfigureIntegrationWithSelectors allows the user to select/change the model for an integration using custom selectors.
func ConfigureIntegrationWithSelectors(ctx context.Context, name string, single SingleSelector, multi MultiSelector) error {
r, ok := integrations[name]
@@ -648,7 +718,7 @@ func ConfigureIntegrationWithSelectors(ctx context.Context, name string, single
models, err := selectModelsWithSelectors(ctx, name, "", single, multi)
if errors.Is(err, errCancelled) {
return nil
return errCancelled
}
if err != nil {
return err
@@ -688,7 +758,7 @@ func ConfigureIntegrationWithSelectors(ctx context.Context, name string, single
// ConfigureIntegration allows the user to select/change the model for an integration.
func ConfigureIntegration(ctx context.Context, name string) error {
return ConfigureIntegrationWithSelectors(ctx, name, defaultSingleSelector, defaultMultiSelector)
return ConfigureIntegrationWithSelectors(ctx, name, DefaultSingleSelector, DefaultMultiSelector)
}
// LaunchCmd returns the cobra command for launching integrations.
@@ -721,7 +791,7 @@ Examples:
Args: cobra.ArbitraryArgs,
PreRunE: checkServerHeartbeat,
RunE: func(cmd *cobra.Command, args []string) error {
// No args - run the main TUI (same as 'ollama')
// No args and no flags - show the full TUI (same as bare 'ollama')
if len(args) == 0 && modelFlag == "" && !configFlag {
runTUI(cmd)
return nil
@@ -776,7 +846,7 @@ Examples:
// Validate --model flag if provided
if modelFlag != "" {
if err := showOrPull(cmd.Context(), client, modelFlag); err != nil {
if err := ShowOrPull(cmd.Context(), client, modelFlag); err != nil {
if errors.Is(err, errCancelled) {
return nil
}
@@ -808,7 +878,7 @@ Examples:
if model != "" && modelFlag == "" {
if _, err := client.Show(cmd.Context(), &api.ShowRequest{Model: model}); err != nil {
fmt.Fprintf(os.Stderr, "%sConfigured model %q not found%s\n\n", ansiGray, model, ansiReset)
if err := showOrPull(cmd.Context(), client, model); err != nil {
if err := ShowOrPull(cmd.Context(), client, model); err != nil {
model = ""
}
}
@@ -858,7 +928,7 @@ Examples:
if err != nil {
return err
}
if err := showOrPull(cmd.Context(), client, modelFlag); err != nil {
if err := ShowOrPull(cmd.Context(), client, modelFlag); err != nil {
if errors.Is(err, errCancelled) {
return nil
}
@@ -953,8 +1023,10 @@ func buildModelList(existing []modelInfo, preChecked []string, current string) (
recommended := make(map[string]bool)
var hasLocalModel, hasCloudModel bool
recDesc := make(map[string]string)
for _, rec := range recommendedModels {
recommended[rec.Name] = true
recDesc[rec.Name] = rec.Description
}
for _, m := range existing {
@@ -967,10 +1039,7 @@ func buildModelList(existing []modelInfo, preChecked []string, current string) (
}
displayName := strings.TrimSuffix(m.Name, ":latest")
existingModels[displayName] = true
item := ModelItem{Name: displayName}
if recommended[displayName] {
item.Description = "recommended"
}
item := ModelItem{Name: displayName, Recommended: recommended[displayName], Description: recDesc[displayName]}
items = append(items, item)
}
@@ -1007,31 +1076,76 @@ func buildModelList(existing []modelInfo, preChecked []string, current string) (
for i := range items {
if !existingModels[items[i].Name] {
notInstalled[items[i].Name] = true
var parts []string
if items[i].Description != "" {
items[i].Description += ", install?"
} else {
items[i].Description = "install?"
parts = append(parts, items[i].Description)
}
if vram := recommendedVRAM[items[i].Name]; vram != "" {
parts = append(parts, vram)
}
parts = append(parts, "(not downloaded)")
items[i].Description = strings.Join(parts, ", ")
}
}
// Build a recommended rank map to preserve ordering within tiers.
recRank := make(map[string]int)
for i, rec := range recommendedModels {
recRank[rec.Name] = i + 1 // 1-indexed; 0 means not recommended
}
onlyLocal := hasLocalModel && !hasCloudModel
if hasLocalModel || hasCloudModel {
slices.SortStableFunc(items, func(a, b ModelItem) int {
ac, bc := checked[a.Name], checked[b.Name]
aNew, bNew := notInstalled[a.Name], notInstalled[b.Name]
aRec, bRec := recRank[a.Name] > 0, recRank[b.Name] > 0
aCloud, bCloud := cloudModels[a.Name], cloudModels[b.Name]
// Checked/pre-selected always first
if ac != bc {
if ac {
return -1
}
return 1
}
if !ac && !bc && aNew != bNew {
// Recommended above non-recommended
if aRec != bRec {
if aRec {
return -1
}
return 1
}
// Both recommended
if aRec && bRec {
if aCloud != bCloud {
if onlyLocal {
// Local before cloud when only local installed
if aCloud {
return 1
}
return -1
}
// Cloud before local in mixed case
if aCloud {
return -1
}
return 1
}
return recRank[a.Name] - recRank[b.Name]
}
// Both non-recommended: installed before not-installed
if aNew != bNew {
if aNew {
return 1
}
return -1
}
return strings.Compare(strings.ToLower(a.Name), strings.ToLower(b.Name))
})
}
@@ -1077,27 +1191,6 @@ func GetModelItems(ctx context.Context) ([]ModelItem, map[string]bool) {
items, _, existingModels, _ := buildModelList(existing, preChecked, lastModel)
// Sort with last model first, then existing models, then recommendations
slices.SortStableFunc(items, func(a, b ModelItem) int {
aIsLast := a.Name == lastModel
bIsLast := b.Name == lastModel
if aIsLast != bIsLast {
if aIsLast {
return -1
}
return 1
}
aExists := existingModels[a.Name]
bExists := existingModels[b.Name]
if aExists != bExists {
if aExists {
return -1
}
return 1
}
return strings.Compare(strings.ToLower(a.Name), strings.ToLower(b.Name))
})
return items, existingModels
}

View File

@@ -94,9 +94,7 @@ func TestLaunchCmd(t *testing.T) {
mockCheck := func(cmd *cobra.Command, args []string) error {
return nil
}
// Mock TUI function (not called in these tests)
mockTUI := func(cmd *cobra.Command) {}
cmd := LaunchCmd(mockCheck, mockTUI)
t.Run("command structure", func(t *testing.T) {
@@ -376,7 +374,7 @@ func TestParseArgs(t *testing.T) {
func TestIsCloudModel(t *testing.T) {
// isCloudModel now only uses Show API, so nil client always returns false
t.Run("nil client returns false", func(t *testing.T) {
models := []string{"glm-4.7:cloud", "kimi-k2.5:cloud", "local-model"}
models := []string{"glm-5:cloud", "kimi-k2.5:cloud", "local-model"}
for _, model := range models {
if isCloudModel(context.Background(), nil, model) {
t.Errorf("isCloudModel(%q) with nil client should return false", model)
@@ -396,14 +394,14 @@ func names(items []ModelItem) []string {
func TestBuildModelList_NoExistingModels(t *testing.T) {
items, _, _, _ := buildModelList(nil, nil, "")
want := []string{"glm-4.7-flash", "qwen3:8b", "glm-4.7:cloud", "kimi-k2.5:cloud"}
want := []string{"minimax-m2.5:cloud", "glm-5:cloud", "kimi-k2.5:cloud", "glm-4.7-flash", "qwen3:8b"}
if diff := cmp.Diff(want, names(items)); diff != "" {
t.Errorf("with no existing models, items should be recommended in order (-want +got):\n%s", diff)
}
for _, item := range items {
if !strings.HasSuffix(item.Description, "install?") {
t.Errorf("item %q should have description ending with 'install?', got %q", item.Name, item.Description)
if !strings.HasSuffix(item.Description, "(not downloaded)") {
t.Errorf("item %q should have description ending with '(not downloaded)', got %q", item.Name, item.Description)
}
}
}
@@ -417,31 +415,33 @@ func TestBuildModelList_OnlyLocalModels_CloudRecsAtBottom(t *testing.T) {
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
want := []string{"llama3.2", "qwen2.5", "glm-4.7-flash", "glm-4.7:cloud", "kimi-k2.5:cloud", "qwen3:8b"}
// Recommended pinned at top (local recs first, then cloud recs when only-local), then installed non-recs
want := []string{"glm-4.7-flash", "qwen3:8b", "minimax-m2.5:cloud", "glm-5:cloud", "kimi-k2.5:cloud", "llama3.2", "qwen2.5"}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("cloud recs should be at bottom (-want +got):\n%s", diff)
t.Errorf("recs pinned at top, local recs before cloud recs (-want +got):\n%s", diff)
}
}
func TestBuildModelList_BothCloudAndLocal_RegularSort(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "glm-4.7:cloud", Remote: true},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
want := []string{"glm-4.7:cloud", "llama3.2", "glm-4.7-flash", "kimi-k2.5:cloud", "qwen3:8b"}
// All recs pinned at top (cloud before local in mixed case), then non-recs
want := []string{"minimax-m2.5:cloud", "glm-5:cloud", "kimi-k2.5:cloud", "glm-4.7-flash", "qwen3:8b", "llama3.2"}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("mixed models should be alphabetical (-want +got):\n%s", diff)
t.Errorf("recs pinned at top, cloud recs first in mixed case (-want +got):\n%s", diff)
}
}
func TestBuildModelList_PreCheckedFirst(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "glm-4.7:cloud", Remote: true},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, []string{"llama3.2"}, "")
@@ -455,20 +455,20 @@ func TestBuildModelList_PreCheckedFirst(t *testing.T) {
func TestBuildModelList_ExistingRecommendedMarked(t *testing.T) {
existing := []modelInfo{
{Name: "glm-4.7-flash", Remote: false},
{Name: "glm-4.7:cloud", Remote: true},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, nil, "")
for _, item := range items {
switch item.Name {
case "glm-4.7-flash", "glm-4.7:cloud":
if strings.HasSuffix(item.Description, "install?") {
t.Errorf("installed recommended %q should not have 'install?' suffix, got %q", item.Name, item.Description)
case "glm-4.7-flash", "glm-5:cloud":
if strings.HasSuffix(item.Description, "(not downloaded)") {
t.Errorf("installed recommended %q should not have '(not downloaded)' suffix, got %q", item.Name, item.Description)
}
case "kimi-k2.5:cloud", "qwen3:8b":
if !strings.HasSuffix(item.Description, "install?") {
t.Errorf("non-installed recommended %q should have 'install?' suffix, got %q", item.Name, item.Description)
case "minimax-m2.5:cloud", "kimi-k2.5:cloud", "qwen3:8b":
if !strings.HasSuffix(item.Description, "(not downloaded)") {
t.Errorf("non-installed recommended %q should have '(not downloaded)' suffix, got %q", item.Name, item.Description)
}
}
}
@@ -477,17 +477,18 @@ func TestBuildModelList_ExistingRecommendedMarked(t *testing.T) {
func TestBuildModelList_ExistingCloudModelsNotPushedToBottom(t *testing.T) {
existing := []modelInfo{
{Name: "glm-4.7-flash", Remote: false},
{Name: "glm-4.7:cloud", Remote: true},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
// glm-4.7-flash and glm-4.7:cloud are installed so they sort normally;
// glm-4.7-flash and glm-5:cloud are installed so they sort normally;
// kimi-k2.5:cloud and qwen3:8b are not installed so they go to the bottom
want := []string{"glm-4.7-flash", "glm-4.7:cloud", "kimi-k2.5:cloud", "qwen3:8b"}
// All recs: cloud first in mixed case, then local, in rec order within each
want := []string{"minimax-m2.5:cloud", "glm-5:cloud", "kimi-k2.5:cloud", "glm-4.7-flash", "qwen3:8b"}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("existing cloud models should sort normally (-want +got):\n%s", diff)
t.Errorf("all recs, cloud first in mixed case (-want +got):\n%s", diff)
}
}
@@ -502,15 +503,16 @@ func TestBuildModelList_HasRecommendedCloudModel_OnlyNonInstalledAtBottom(t *tes
// kimi-k2.5:cloud is installed so it sorts normally;
// the rest of the recommendations are not installed so they go to the bottom
want := []string{"kimi-k2.5:cloud", "llama3.2", "glm-4.7-flash", "glm-4.7:cloud", "qwen3:8b"}
// All recs pinned at top (cloud first in mixed case), then non-recs
want := []string{"minimax-m2.5:cloud", "glm-5:cloud", "kimi-k2.5:cloud", "glm-4.7-flash", "qwen3:8b", "llama3.2"}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("only non-installed models should be at bottom (-want +got):\n%s", diff)
t.Errorf("recs pinned at top, cloud first in mixed case (-want +got):\n%s", diff)
}
for _, item := range items {
if !slices.Contains([]string{"kimi-k2.5:cloud", "llama3.2"}, item.Name) {
if !strings.HasSuffix(item.Description, "install?") {
t.Errorf("non-installed %q should have 'install?' suffix, got %q", item.Name, item.Description)
if !strings.HasSuffix(item.Description, "(not downloaded)") {
t.Errorf("non-installed %q should have '(not downloaded)' suffix, got %q", item.Name, item.Description)
}
}
}
@@ -552,7 +554,7 @@ func TestBuildModelList_LatestTagStripped(t *testing.T) {
func TestBuildModelList_ReturnsExistingAndCloudMaps(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "glm-4.7:cloud", Remote: true},
{Name: "glm-5:cloud", Remote: true},
}
_, _, existingModels, cloudModels := buildModelList(existing, nil, "")
@@ -560,15 +562,15 @@ func TestBuildModelList_ReturnsExistingAndCloudMaps(t *testing.T) {
if !existingModels["llama3.2"] {
t.Error("llama3.2 should be in existingModels")
}
if !existingModels["glm-4.7:cloud"] {
t.Error("glm-4.7:cloud should be in existingModels")
if !existingModels["glm-5:cloud"] {
t.Error("glm-5:cloud should be in existingModels")
}
if existingModels["glm-4.7-flash"] {
t.Error("glm-4.7-flash should not be in existingModels (it's a recommendation)")
}
if !cloudModels["glm-4.7:cloud"] {
t.Error("glm-4.7:cloud should be in cloudModels")
if !cloudModels["glm-5:cloud"] {
t.Error("glm-5:cloud should be in cloudModels")
}
if !cloudModels["kimi-k2.5:cloud"] {
t.Error("kimi-k2.5:cloud should be in cloudModels (recommended cloud)")
@@ -578,6 +580,101 @@ func TestBuildModelList_ReturnsExistingAndCloudMaps(t *testing.T) {
}
}
func TestBuildModelList_RecommendedFieldSet(t *testing.T) {
existing := []modelInfo{
{Name: "glm-4.7-flash", Remote: false},
{Name: "llama3.2:latest", Remote: false},
}
items, _, _, _ := buildModelList(existing, nil, "")
for _, item := range items {
switch item.Name {
case "glm-4.7-flash", "qwen3:8b", "glm-5:cloud", "kimi-k2.5:cloud":
if !item.Recommended {
t.Errorf("%q should have Recommended=true", item.Name)
}
case "llama3.2":
if item.Recommended {
t.Errorf("%q should have Recommended=false", item.Name)
}
}
}
}
func TestBuildModelList_MixedCase_CloudRecsFirst(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
// Cloud recs should sort before local recs in mixed case
cloudIdx := slices.Index(got, "glm-5:cloud")
localIdx := slices.Index(got, "glm-4.7-flash")
if cloudIdx > localIdx {
t.Errorf("cloud recs should be before local recs in mixed case, got %v", got)
}
}
func TestBuildModelList_OnlyLocal_LocalRecsFirst(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
}
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
// Local recs should sort before cloud recs in only-local case
localIdx := slices.Index(got, "glm-4.7-flash")
cloudIdx := slices.Index(got, "glm-5:cloud")
if localIdx > cloudIdx {
t.Errorf("local recs should be before cloud recs in only-local case, got %v", got)
}
}
func TestBuildModelList_RecsAboveNonRecs(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "custom-model", Remote: false},
}
items, _, _, _ := buildModelList(existing, nil, "")
got := names(items)
// All recommended models should appear before non-recommended installed models
lastRecIdx := -1
firstNonRecIdx := len(got)
for i, name := range got {
isRec := name == "glm-4.7-flash" || name == "qwen3:8b" || name == "minimax-m2.5:cloud" || name == "glm-5:cloud" || name == "kimi-k2.5:cloud"
if isRec && i > lastRecIdx {
lastRecIdx = i
}
if !isRec && i < firstNonRecIdx {
firstNonRecIdx = i
}
}
if lastRecIdx > firstNonRecIdx {
t.Errorf("all recs should be above non-recs, got %v", got)
}
}
func TestBuildModelList_CheckedBeforeRecs(t *testing.T) {
existing := []modelInfo{
{Name: "llama3.2:latest", Remote: false},
{Name: "glm-5:cloud", Remote: true},
}
items, _, _, _ := buildModelList(existing, []string{"llama3.2"}, "")
got := names(items)
if got[0] != "llama3.2" {
t.Errorf("checked model should be first even before recs, got %v", got)
}
}
func TestEditorIntegration_SavedConfigSkipsSelection(t *testing.T) {
tmpDir := t.TempDir()
setTestHome(t, tmpDir)
@@ -630,7 +727,7 @@ func TestShowOrPull_ModelExists(t *testing.T) {
u, _ := url.Parse(srv.URL)
client := api.NewClient(u, srv.Client())
err := showOrPull(context.Background(), client, "test-model")
err := ShowOrPull(context.Background(), client, "test-model")
if err != nil {
t.Errorf("showOrPull should return nil when model exists, got: %v", err)
}
@@ -647,7 +744,7 @@ func TestShowOrPull_ModelNotFound_NoTerminal(t *testing.T) {
client := api.NewClient(u, srv.Client())
// confirmPrompt will fail in test (no terminal), so showOrPull should return an error
err := showOrPull(context.Background(), client, "missing-model")
err := ShowOrPull(context.Background(), client, "missing-model")
if err == nil {
t.Error("showOrPull should return error when model not found and no terminal available")
}
@@ -672,12 +769,141 @@ func TestShowOrPull_ShowCalledWithCorrectModel(t *testing.T) {
u, _ := url.Parse(srv.URL)
client := api.NewClient(u, srv.Client())
_ = showOrPull(context.Background(), client, "qwen3:8b")
_ = ShowOrPull(context.Background(), client, "qwen3:8b")
if receivedModel != "qwen3:8b" {
t.Errorf("expected Show to be called with %q, got %q", "qwen3:8b", receivedModel)
}
}
func TestShowOrPull_ModelNotFound_ConfirmYes_Pulls(t *testing.T) {
// Set up hook so confirmPrompt doesn't need a terminal
oldHook := DefaultConfirmPrompt
DefaultConfirmPrompt = func(prompt string) (bool, error) {
if !strings.Contains(prompt, "missing-model") {
t.Errorf("expected prompt to contain model name, got %q", prompt)
}
return true, nil
}
defer func() { DefaultConfirmPrompt = oldHook }()
var pullCalled bool
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/show":
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, `{"error":"model not found"}`)
case "/api/pull":
pullCalled = true
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, `{"status":"success"}`)
default:
w.WriteHeader(http.StatusNotFound)
}
}))
defer srv.Close()
u, _ := url.Parse(srv.URL)
client := api.NewClient(u, srv.Client())
err := ShowOrPull(context.Background(), client, "missing-model")
if err != nil {
t.Errorf("ShowOrPull should succeed after pull, got: %v", err)
}
if !pullCalled {
t.Error("expected pull to be called when user confirms download")
}
}
func TestShowOrPull_ModelNotFound_ConfirmNo_Cancelled(t *testing.T) {
oldHook := DefaultConfirmPrompt
DefaultConfirmPrompt = func(prompt string) (bool, error) {
return false, ErrCancelled
}
defer func() { DefaultConfirmPrompt = oldHook }()
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/show":
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, `{"error":"model not found"}`)
case "/api/pull":
t.Error("pull should not be called when user declines")
default:
w.WriteHeader(http.StatusNotFound)
}
}))
defer srv.Close()
u, _ := url.Parse(srv.URL)
client := api.NewClient(u, srv.Client())
err := ShowOrPull(context.Background(), client, "missing-model")
if err == nil {
t.Error("ShowOrPull should return error when user declines")
}
}
func TestShowOrPull_CloudModel_SkipsConfirmation(t *testing.T) {
// Confirm prompt should NOT be called for cloud models
oldHook := DefaultConfirmPrompt
DefaultConfirmPrompt = func(prompt string) (bool, error) {
t.Error("confirm prompt should not be called for cloud models")
return false, nil
}
defer func() { DefaultConfirmPrompt = oldHook }()
var pullCalled bool
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/show":
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, `{"error":"model not found"}`)
case "/api/pull":
pullCalled = true
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, `{"status":"success"}`)
default:
w.WriteHeader(http.StatusNotFound)
}
}))
defer srv.Close()
u, _ := url.Parse(srv.URL)
client := api.NewClient(u, srv.Client())
err := ShowOrPull(context.Background(), client, "glm-5:cloud")
if err != nil {
t.Errorf("ShowOrPull should succeed for cloud model, got: %v", err)
}
if !pullCalled {
t.Error("expected pull to be called for cloud model without confirmation")
}
}
func TestConfirmPrompt_DelegatesToHook(t *testing.T) {
oldHook := DefaultConfirmPrompt
var hookCalled bool
DefaultConfirmPrompt = func(prompt string) (bool, error) {
hookCalled = true
if prompt != "test prompt?" {
t.Errorf("expected prompt %q, got %q", "test prompt?", prompt)
}
return true, nil
}
defer func() { DefaultConfirmPrompt = oldHook }()
ok, err := confirmPrompt("test prompt?")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if !ok {
t.Error("expected true from hook")
}
if !hookCalled {
t.Error("expected DefaultConfirmPrompt hook to be called")
}
}
func TestEnsureAuth_NoCloudModels(t *testing.T) {
// ensureAuth should be a no-op when no cloud models are selected
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -748,3 +974,283 @@ func TestEnsureAuth_SkipsWhenNoCloudSelected(t *testing.T) {
t.Error("whoami should not be called when no cloud models are selected")
}
}
func TestHyperlink(t *testing.T) {
tests := []struct {
name string
url string
text string
wantURL string
wantText string
}{
{
name: "basic link",
url: "https://example.com",
text: "click here",
wantURL: "https://example.com",
wantText: "click here",
},
{
name: "url with path",
url: "https://example.com/docs/install",
text: "install docs",
wantURL: "https://example.com/docs/install",
wantText: "install docs",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := hyperlink(tt.url, tt.text)
// Should contain OSC 8 escape sequences
if !strings.Contains(got, "\033]8;;") {
t.Error("should contain OSC 8 open sequence")
}
if !strings.Contains(got, tt.wantURL) {
t.Errorf("should contain URL %q", tt.wantURL)
}
if !strings.Contains(got, tt.wantText) {
t.Errorf("should contain text %q", tt.wantText)
}
// Should have closing OSC 8 sequence
wantSuffix := "\033]8;;\033\\"
if !strings.HasSuffix(got, wantSuffix) {
t.Error("should end with OSC 8 close sequence")
}
})
}
}
func TestIntegrationInstallHint(t *testing.T) {
tests := []struct {
name string
input string
wantEmpty bool
wantURL string
}{
{
name: "claude has hint",
input: "claude",
wantURL: "https://code.claude.com/docs/en/quickstart",
},
{
name: "codex has hint",
input: "codex",
wantURL: "https://developers.openai.com/codex/cli/",
},
{
name: "openclaw has hint",
input: "openclaw",
wantURL: "https://docs.openclaw.ai",
},
{
name: "unknown has no hint",
input: "unknown",
wantEmpty: true,
},
{
name: "empty name has no hint",
input: "",
wantEmpty: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := IntegrationInstallHint(tt.input)
if tt.wantEmpty {
if got != "" {
t.Errorf("expected empty hint, got %q", got)
}
return
}
if !strings.Contains(got, "Install from") {
t.Errorf("hint should start with 'Install from', got %q", got)
}
if !strings.Contains(got, tt.wantURL) {
t.Errorf("hint should contain URL %q, got %q", tt.wantURL, got)
}
// Should be a clickable hyperlink
if !strings.Contains(got, "\033]8;;") {
t.Error("hint URL should be wrapped in OSC 8 hyperlink")
}
})
}
}
func TestListIntegrationInfos(t *testing.T) {
infos := ListIntegrationInfos()
t.Run("excludes aliases", func(t *testing.T) {
for _, info := range infos {
if integrationAliases[info.Name] {
t.Errorf("alias %q should not appear in ListIntegrationInfos", info.Name)
}
}
})
t.Run("sorted by name", func(t *testing.T) {
for i := 1; i < len(infos); i++ {
if infos[i-1].Name >= infos[i].Name {
t.Errorf("not sorted: %q >= %q", infos[i-1].Name, infos[i].Name)
}
}
})
t.Run("all fields populated", func(t *testing.T) {
for _, info := range infos {
if info.Name == "" {
t.Error("Name should not be empty")
}
if info.DisplayName == "" {
t.Errorf("DisplayName for %q should not be empty", info.Name)
}
}
})
t.Run("includes known integrations", func(t *testing.T) {
known := map[string]bool{"claude": false, "codex": false, "opencode": false}
for _, info := range infos {
if _, ok := known[info.Name]; ok {
known[info.Name] = true
}
}
for name, found := range known {
if !found {
t.Errorf("expected %q in ListIntegrationInfos", name)
}
}
})
}
func TestBuildModelList_Descriptions(t *testing.T) {
t.Run("installed recommended has base description", func(t *testing.T) {
existing := []modelInfo{
{Name: "qwen3:8b", Remote: false},
}
items, _, _, _ := buildModelList(existing, nil, "")
for _, item := range items {
if item.Name == "qwen3:8b" {
if strings.HasSuffix(item.Description, "install?") {
t.Errorf("installed model should not have 'install?' suffix, got %q", item.Description)
}
if item.Description == "" {
t.Error("installed recommended model should have a description")
}
return
}
}
t.Error("qwen3:8b not found in items")
})
t.Run("not-installed local rec has VRAM in description", func(t *testing.T) {
items, _, _, _ := buildModelList(nil, nil, "")
for _, item := range items {
if item.Name == "qwen3:8b" {
if !strings.Contains(item.Description, "~11GB") {
t.Errorf("not-installed qwen3:8b should show VRAM hint, got %q", item.Description)
}
return
}
}
t.Error("qwen3:8b not found in items")
})
t.Run("installed local rec omits VRAM", func(t *testing.T) {
existing := []modelInfo{
{Name: "qwen3:8b", Remote: false},
}
items, _, _, _ := buildModelList(existing, nil, "")
for _, item := range items {
if item.Name == "qwen3:8b" {
if strings.Contains(item.Description, "~11GB") {
t.Errorf("installed qwen3:8b should not show VRAM hint, got %q", item.Description)
}
return
}
}
t.Error("qwen3:8b not found in items")
})
}
func TestLaunchIntegration_UnknownIntegration(t *testing.T) {
err := LaunchIntegration("nonexistent-integration")
if err == nil {
t.Fatal("expected error for unknown integration")
}
if !strings.Contains(err.Error(), "unknown integration") {
t.Errorf("error should mention 'unknown integration', got: %v", err)
}
}
func TestLaunchIntegration_NotConfigured(t *testing.T) {
tmpDir := t.TempDir()
setTestHome(t, tmpDir)
// Claude is a known integration but not configured in temp dir
err := LaunchIntegration("claude")
if err == nil {
t.Fatal("expected error when integration is not configured")
}
if !strings.Contains(err.Error(), "not configured") {
t.Errorf("error should mention 'not configured', got: %v", err)
}
}
func TestIsEditorIntegration(t *testing.T) {
tests := []struct {
name string
want bool
}{
{"droid", true},
{"opencode", true},
{"openclaw", true},
{"claude", false},
{"codex", false},
{"nonexistent", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := IsEditorIntegration(tt.name); got != tt.want {
t.Errorf("IsEditorIntegration(%q) = %v, want %v", tt.name, got, tt.want)
}
})
}
}
func TestIntegrationModels(t *testing.T) {
tmpDir := t.TempDir()
setTestHome(t, tmpDir)
t.Run("returns nil when not configured", func(t *testing.T) {
if got := IntegrationModels("droid"); got != nil {
t.Errorf("expected nil, got %v", got)
}
})
t.Run("returns all saved models", func(t *testing.T) {
if err := saveIntegration("droid", []string{"llama3.2", "qwen3:8b"}); err != nil {
t.Fatal(err)
}
got := IntegrationModels("droid")
want := []string{"llama3.2", "qwen3:8b"}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("IntegrationModels mismatch (-want +got):\n%s", diff)
}
})
}
func TestSaveAndEditIntegration_UnknownIntegration(t *testing.T) {
err := SaveAndEditIntegration("nonexistent", []string{"model"})
if err == nil {
t.Fatal("expected error for unknown integration")
}
if !strings.Contains(err.Error(), "unknown integration") {
t.Errorf("error should mention 'unknown integration', got: %v", err)
}
}

View File

@@ -24,25 +24,6 @@ type cloudModelLimit struct {
Output int
}
// cloudModelLimits maps cloud model base names to their token limits.
// TODO(parthsareen): grab context/output limits from model info instead of hardcoding
var cloudModelLimits = map[string]cloudModelLimit{
"cogito-2.1:671b": {Context: 163_840, Output: 65_536},
"deepseek-v3.1:671b": {Context: 163_840, Output: 163_840},
"deepseek-v3.2": {Context: 163_840, Output: 65_536},
"glm-4.6": {Context: 202_752, Output: 131_072},
"glm-4.7": {Context: 202_752, Output: 131_072},
"gpt-oss:120b": {Context: 131_072, Output: 131_072},
"gpt-oss:20b": {Context: 131_072, Output: 131_072},
"kimi-k2:1t": {Context: 262_144, Output: 262_144},
"kimi-k2.5": {Context: 262_144, Output: 262_144},
"kimi-k2-thinking": {Context: 262_144, Output: 262_144},
"nemotron-3-nano:30b": {Context: 1_048_576, Output: 131_072},
"qwen3-coder:480b": {Context: 262_144, Output: 65_536},
"qwen3-coder-next": {Context: 262_144, Output: 32_768},
"qwen3-next:80b": {Context: 262_144, Output: 32_768},
}
// lookupCloudModelLimit returns the token limits for a cloud model.
// It tries the exact name first, then strips the ":cloud" suffix.
func lookupCloudModelLimit(name string) (cloudModelLimit, bool) {

View File

@@ -3,475 +3,34 @@ package config
import (
"errors"
"fmt"
"io"
"os"
"strings"
"golang.org/x/term"
)
// ANSI escape sequences for terminal formatting.
const (
ansiHideCursor = "\033[?25l"
ansiShowCursor = "\033[?25h"
ansiBold = "\033[1m"
ansiReset = "\033[0m"
ansiGray = "\033[37m"
ansiGreen = "\033[32m"
ansiClearDown = "\033[J"
)
const maxDisplayedItems = 10
// ErrCancelled is returned when the user cancels a selection.
var ErrCancelled = errors.New("cancelled")
// errCancelled is kept as an alias for backward compatibility within the package.
var errCancelled = ErrCancelled
type selectItem struct {
Name string
Description string
}
type inputEvent int
const (
eventNone inputEvent = iota
eventEnter
eventEscape
eventUp
eventDown
eventTab
eventBackspace
eventChar
)
type selectState struct {
items []selectItem
filter string
selected int
scrollOffset int
}
func newSelectState(items []selectItem) *selectState {
return &selectState{items: items}
}
func (s *selectState) filtered() []selectItem {
return filterItems(s.items, s.filter)
}
func (s *selectState) handleInput(event inputEvent, char byte) (done bool, result string, err error) {
filtered := s.filtered()
switch event {
case eventEnter:
if len(filtered) > 0 && s.selected < len(filtered) {
return true, filtered[s.selected].Name, nil
}
case eventEscape:
return true, "", errCancelled
case eventBackspace:
if len(s.filter) > 0 {
s.filter = s.filter[:len(s.filter)-1]
s.selected = 0
s.scrollOffset = 0
}
case eventUp:
if s.selected > 0 {
s.selected--
if s.selected < s.scrollOffset {
s.scrollOffset = s.selected
}
}
case eventDown:
if s.selected < len(filtered)-1 {
s.selected++
if s.selected >= s.scrollOffset+maxDisplayedItems {
s.scrollOffset = s.selected - maxDisplayedItems + 1
}
}
case eventChar:
s.filter += string(char)
s.selected = 0
s.scrollOffset = 0
}
return false, "", nil
}
type multiSelectState struct {
items []selectItem
itemIndex map[string]int
filter string
highlighted int
scrollOffset int
checked map[int]bool
checkOrder []int
focusOnButton bool
}
func newMultiSelectState(items []selectItem, preChecked []string) *multiSelectState {
s := &multiSelectState{
items: items,
itemIndex: make(map[string]int, len(items)),
checked: make(map[int]bool),
}
for i, item := range items {
s.itemIndex[item.Name] = i
}
for _, name := range preChecked {
if idx, ok := s.itemIndex[name]; ok {
s.checked[idx] = true
s.checkOrder = append(s.checkOrder, idx)
}
}
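// checkOrder preserves the order in which items were pre-checked; the
// first entry is the one later rendered with the "(default)" tag.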
return s
}
func (s *multiSelectState) filtered() []selectItem {
return filterItems(s.items, s.filter)
}
func (s *multiSelectState) toggleItem() {
filtered := s.filtered()
if len(filtered) == 0 || s.highlighted >= len(filtered) {
return
}
item := filtered[s.highlighted]
origIdx := s.itemIndex[item.Name]
if s.checked[origIdx] {
delete(s.checked, origIdx)
for i, idx := range s.checkOrder {
if idx == origIdx {
s.checkOrder = append(s.checkOrder[:i], s.checkOrder[i+1:]...)
break
}
}
} else {
s.checked[origIdx] = true
s.checkOrder = append(s.checkOrder, origIdx)
}
}
func (s *multiSelectState) handleInput(event inputEvent, char byte) (done bool, result []string, err error) {
filtered := s.filtered()
switch event {
case eventEnter:
if s.focusOnButton && len(s.checkOrder) > 0 {
var res []string
for _, idx := range s.checkOrder {
res = append(res, s.items[idx].Name)
}
return true, res, nil
} else if !s.focusOnButton {
s.toggleItem()
}
case eventTab:
if len(s.checkOrder) > 0 {
s.focusOnButton = !s.focusOnButton
}
case eventEscape:
return true, nil, errCancelled
case eventBackspace:
if len(s.filter) > 0 {
s.filter = s.filter[:len(s.filter)-1]
s.highlighted = 0
s.scrollOffset = 0
s.focusOnButton = false
}
case eventUp:
if s.focusOnButton {
s.focusOnButton = false
} else if s.highlighted > 0 {
s.highlighted--
if s.highlighted < s.scrollOffset {
s.scrollOffset = s.highlighted
}
}
case eventDown:
if s.focusOnButton {
s.focusOnButton = false
} else if s.highlighted < len(filtered)-1 {
s.highlighted++
if s.highlighted >= s.scrollOffset+maxDisplayedItems {
s.scrollOffset = s.highlighted - maxDisplayedItems + 1
}
}
case eventChar:
s.filter += string(char)
s.highlighted = 0
s.scrollOffset = 0
s.focusOnButton = false
}
return false, nil, nil
}
func (s *multiSelectState) selectedCount() int {
return len(s.checkOrder)
}
// Terminal I/O handling
type terminalState struct {
fd int
oldState *term.State
}
func enterRawMode() (*terminalState, error) {
fd := int(os.Stdin.Fd())
oldState, err := term.MakeRaw(fd)
if err != nil {
return nil, err
}
fmt.Fprint(os.Stderr, ansiHideCursor)
return &terminalState{fd: fd, oldState: oldState}, nil
}
func (t *terminalState) restore() {
fmt.Fprint(os.Stderr, ansiShowCursor)
term.Restore(t.fd, t.oldState)
}
// clearLines moves the cursor up n lines and clears from there to the end of the screen.
func clearLines(n int) {
if n > 0 {
fmt.Fprintf(os.Stderr, "\033[%dA", n)
fmt.Fprint(os.Stderr, ansiClearDown)
}
}
func parseInput(r io.Reader) (inputEvent, byte, error) {
buf := make([]byte, 3)
n, err := r.Read(buf)
if err != nil {
return 0, 0, err
}
switch {
case n == 1 && buf[0] == 13: // carriage return: Enter
return eventEnter, 0, nil
case n == 1 && (buf[0] == 3 || buf[0] == 27): // Ctrl+C or a bare ESC
return eventEscape, 0, nil
case n == 1 && buf[0] == 9: // horizontal tab
return eventTab, 0, nil
case n == 1 && buf[0] == 127: // DEL: Backspace
return eventBackspace, 0, nil
case n == 3 && buf[0] == 27 && buf[1] == 91 && buf[2] == 65: // ESC [ A: up arrow
return eventUp, 0, nil
case n == 3 && buf[0] == 27 && buf[1] == 91 && buf[2] == 66: // ESC [ B: down arrow
return eventDown, 0, nil
case n == 1 && buf[0] >= 32 && buf[0] < 127: // printable ASCII
return eventChar, buf[0], nil
}
return eventNone, 0, nil
}
// Rendering
func renderSelect(w io.Writer, prompt string, s *selectState) int {
filtered := s.filtered()
if s.filter == "" {
fmt.Fprintf(w, "%s %sType to filter...%s\r\n", prompt, ansiGray, ansiReset)
} else {
fmt.Fprintf(w, "%s %s\r\n", prompt, s.filter)
}
lineCount := 1
if len(filtered) == 0 {
fmt.Fprintf(w, " %s(no matches)%s\r\n", ansiGray, ansiReset)
lineCount++
} else {
displayCount := min(len(filtered), maxDisplayedItems)
for i := range displayCount {
idx := s.scrollOffset + i
if idx >= len(filtered) {
break
}
item := filtered[idx]
prefix := " "
if idx == s.selected {
prefix = " " + ansiBold + "> "
}
if item.Description != "" {
fmt.Fprintf(w, "%s%s%s %s- %s%s\r\n", prefix, item.Name, ansiReset, ansiGray, item.Description, ansiReset)
} else {
fmt.Fprintf(w, "%s%s%s\r\n", prefix, item.Name, ansiReset)
}
lineCount++
}
if remaining := len(filtered) - s.scrollOffset - displayCount; remaining > 0 {
fmt.Fprintf(w, " %s... and %d more%s\r\n", ansiGray, remaining, ansiReset)
lineCount++
}
}
return lineCount
}
func renderMultiSelect(w io.Writer, prompt string, s *multiSelectState) int {
filtered := s.filtered()
if s.filter == "" {
fmt.Fprintf(w, "%s %sType to filter...%s\r\n", prompt, ansiGray, ansiReset)
} else {
fmt.Fprintf(w, "%s %s\r\n", prompt, s.filter)
}
lineCount := 1
if len(filtered) == 0 {
fmt.Fprintf(w, " %s(no matches)%s\r\n", ansiGray, ansiReset)
lineCount++
} else {
displayCount := min(len(filtered), maxDisplayedItems)
for i := range displayCount {
idx := s.scrollOffset + i
if idx >= len(filtered) {
break
}
item := filtered[idx]
origIdx := s.itemIndex[item.Name]
checkbox := "[ ]"
if s.checked[origIdx] {
checkbox = "[x]"
}
prefix := " "
suffix := ""
if idx == s.highlighted && !s.focusOnButton {
prefix = "> "
}
if len(s.checkOrder) > 0 && s.checkOrder[0] == origIdx {
suffix = " " + ansiGray + "(default)" + ansiReset
}
desc := ""
if item.Description != "" {
desc = " " + ansiGray + "- " + item.Description + ansiReset
}
if idx == s.highlighted && !s.focusOnButton {
fmt.Fprintf(w, " %s%s %s %s%s%s%s\r\n", ansiBold, prefix, checkbox, item.Name, ansiReset, desc, suffix)
} else {
fmt.Fprintf(w, " %s %s %s%s%s\r\n", prefix, checkbox, item.Name, desc, suffix)
}
lineCount++
}
if remaining := len(filtered) - s.scrollOffset - displayCount; remaining > 0 {
fmt.Fprintf(w, " %s... and %d more%s\r\n", ansiGray, remaining, ansiReset)
lineCount++
}
}
fmt.Fprintf(w, "\r\n")
lineCount++
count := s.selectedCount()
switch {
case count == 0:
fmt.Fprintf(w, " %sSelect at least one model.%s\r\n", ansiGray, ansiReset)
case s.focusOnButton:
fmt.Fprintf(w, " %s> [ Continue ]%s %s(%d selected)%s\r\n", ansiBold, ansiReset, ansiGray, count, ansiReset)
default:
fmt.Fprintf(w, " %s[ Continue ] (%d selected) - press Tab%s\r\n", ansiGray, count, ansiReset)
}
lineCount++
return lineCount
}
// selectPrompt prompts the user to select a single item from a list.
func selectPrompt(prompt string, items []selectItem) (string, error) {
if len(items) == 0 {
return "", fmt.Errorf("no items to select from")
}
ts, err := enterRawMode()
if err != nil {
return "", err
}
defer ts.restore()
state := newSelectState(items)
var lastLineCount int
render := func() {
clearLines(lastLineCount)
lastLineCount = renderSelect(os.Stderr, prompt, state)
}
render()
for {
event, char, err := parseInput(os.Stdin)
if err != nil {
return "", err
}
done, result, err := state.handleInput(event, char)
if done {
clearLines(lastLineCount)
if err != nil {
return "", err
}
return result, nil
}
render()
}
}
// multiSelectPrompt prompts the user to select multiple items from a list.
func multiSelectPrompt(prompt string, items []selectItem, preChecked []string) ([]string, error) {
if len(items) == 0 {
return nil, fmt.Errorf("no items to select from")
}
ts, err := enterRawMode()
if err != nil {
return nil, err
}
defer ts.restore()
state := newMultiSelectState(items, preChecked)
var lastLineCount int
render := func() {
clearLines(lastLineCount)
lastLineCount = renderMultiSelect(os.Stderr, prompt, state)
}
render()
for {
event, char, err := parseInput(os.Stdin)
if err != nil {
return nil, err
}
done, result, err := state.handleInput(event, char)
if done {
clearLines(lastLineCount)
if err != nil {
return nil, err
}
return result, nil
}
render()
}
}
// DefaultConfirmPrompt provides a TUI-based confirmation prompt.
// When set, confirmPrompt delegates to it instead of using raw terminal I/O.
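// An illustrative wiring (an assumption, not shown in this diff) would be
// config.DefaultConfirmPrompt = tui.RunConfirm from the cmd package; the
// signatures match.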
var DefaultConfirmPrompt func(prompt string) (bool, error)
func confirmPrompt(prompt string) (bool, error) {
if DefaultConfirmPrompt != nil {
return DefaultConfirmPrompt(prompt)
}
fd := int(os.Stdin.Fd())
oldState, err := term.MakeRaw(fd)
if err != nil {
@@ -497,17 +56,3 @@ func confirmPrompt(prompt string) (bool, error) {
}
}
}
// filterItems returns the items whose names contain filter as a
// case-insensitive substring; the filter is literal, not a regex.
func filterItems(items []selectItem, filter string) []selectItem {
if filter == "" {
return items
}
var result []selectItem
filterLower := strings.ToLower(filter)
for _, item := range items {
if strings.Contains(strings.ToLower(item.Name), filterLower) {
result = append(result, item)
}
}
return result
}

View File

@@ -1,670 +1,9 @@
package config
import (
"bytes"
"strings"
"testing"
)
func TestFilterItems(t *testing.T) {
items := []selectItem{
{Name: "llama3.2:latest"},
{Name: "qwen2.5:7b"},
{Name: "deepseek-v3:cloud"},
{Name: "GPT-OSS:20b"},
}
t.Run("EmptyFilter_ReturnsAllItems", func(t *testing.T) {
result := filterItems(items, "")
if len(result) != len(items) {
t.Errorf("expected %d items, got %d", len(items), len(result))
}
})
t.Run("CaseInsensitive_UppercaseFilterMatchesLowercase", func(t *testing.T) {
result := filterItems(items, "LLAMA")
if len(result) != 1 || result[0].Name != "llama3.2:latest" {
t.Errorf("expected llama3.2:latest, got %v", result)
}
})
t.Run("CaseInsensitive_LowercaseFilterMatchesUppercase", func(t *testing.T) {
result := filterItems(items, "gpt")
if len(result) != 1 || result[0].Name != "GPT-OSS:20b" {
t.Errorf("expected GPT-OSS:20b, got %v", result)
}
})
t.Run("PartialMatch", func(t *testing.T) {
result := filterItems(items, "deep")
if len(result) != 1 || result[0].Name != "deepseek-v3:cloud" {
t.Errorf("expected deepseek-v3:cloud, got %v", result)
}
})
t.Run("NoMatch_ReturnsEmpty", func(t *testing.T) {
result := filterItems(items, "nonexistent")
if len(result) != 0 {
t.Errorf("expected 0 items, got %d", len(result))
}
})
}
func TestSelectState(t *testing.T) {
items := []selectItem{
{Name: "item1"},
{Name: "item2"},
{Name: "item3"},
}
t.Run("InitialState", func(t *testing.T) {
s := newSelectState(items)
if s.selected != 0 {
t.Errorf("expected selected=0, got %d", s.selected)
}
if s.filter != "" {
t.Errorf("expected empty filter, got %q", s.filter)
}
if s.scrollOffset != 0 {
t.Errorf("expected scrollOffset=0, got %d", s.scrollOffset)
}
})
t.Run("Enter_SelectsCurrentItem", func(t *testing.T) {
s := newSelectState(items)
done, result, err := s.handleInput(eventEnter, 0)
if !done || result != "item1" || err != nil {
t.Errorf("expected (true, item1, nil), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Enter_WithFilter_SelectsFilteredItem", func(t *testing.T) {
s := newSelectState(items)
s.filter = "item3"
done, result, err := s.handleInput(eventEnter, 0)
if !done || result != "item3" || err != nil {
t.Errorf("expected (true, item3, nil), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Enter_EmptyFilteredList_DoesNothing", func(t *testing.T) {
s := newSelectState(items)
s.filter = "nonexistent"
done, result, err := s.handleInput(eventEnter, 0)
if done || result != "" || err != nil {
t.Errorf("expected (false, '', nil), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Enter_EmptyFilteredList_EmptyFilter_DoesNothing", func(t *testing.T) {
s := newSelectState([]selectItem{})
done, result, err := s.handleInput(eventEnter, 0)
if done || result != "" || err != nil {
t.Errorf("expected (false, '', nil), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Escape_ReturnsCancelledError", func(t *testing.T) {
s := newSelectState(items)
done, result, err := s.handleInput(eventEscape, 0)
if !done || result != "" || err != errCancelled {
t.Errorf("expected (true, '', errCancelled), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Down_MovesSelection", func(t *testing.T) {
s := newSelectState(items)
s.handleInput(eventDown, 0)
if s.selected != 1 {
t.Errorf("expected selected=1, got %d", s.selected)
}
})
t.Run("Down_AtBottom_StaysAtBottom", func(t *testing.T) {
s := newSelectState(items)
s.selected = 2
s.handleInput(eventDown, 0)
if s.selected != 2 {
t.Errorf("expected selected=2 (stayed at bottom), got %d", s.selected)
}
})
t.Run("Up_MovesSelection", func(t *testing.T) {
s := newSelectState(items)
s.selected = 2
s.handleInput(eventUp, 0)
if s.selected != 1 {
t.Errorf("expected selected=1, got %d", s.selected)
}
})
t.Run("Up_AtTop_StaysAtTop", func(t *testing.T) {
s := newSelectState(items)
s.handleInput(eventUp, 0)
if s.selected != 0 {
t.Errorf("expected selected=0 (stayed at top), got %d", s.selected)
}
})
t.Run("Char_AppendsToFilter", func(t *testing.T) {
s := newSelectState(items)
s.handleInput(eventChar, 'i')
s.handleInput(eventChar, 't')
s.handleInput(eventChar, 'e')
s.handleInput(eventChar, 'm')
s.handleInput(eventChar, '2')
if s.filter != "item2" {
t.Errorf("expected filter='item2', got %q", s.filter)
}
filtered := s.filtered()
if len(filtered) != 1 || filtered[0].Name != "item2" {
t.Errorf("expected [item2], got %v", filtered)
}
})
t.Run("Char_ResetsSelectionToZero", func(t *testing.T) {
s := newSelectState(items)
s.selected = 2
s.handleInput(eventChar, 'x')
if s.selected != 0 {
t.Errorf("expected selected=0 after typing, got %d", s.selected)
}
})
t.Run("Backspace_RemovesLastFilterChar", func(t *testing.T) {
s := newSelectState(items)
s.filter = "test"
s.handleInput(eventBackspace, 0)
if s.filter != "tes" {
t.Errorf("expected filter='tes', got %q", s.filter)
}
})
t.Run("Backspace_EmptyFilter_DoesNothing", func(t *testing.T) {
s := newSelectState(items)
s.handleInput(eventBackspace, 0)
if s.filter != "" {
t.Errorf("expected filter='', got %q", s.filter)
}
})
t.Run("Backspace_ResetsSelectionToZero", func(t *testing.T) {
s := newSelectState(items)
s.filter = "test"
s.selected = 2
s.handleInput(eventBackspace, 0)
if s.selected != 0 {
t.Errorf("expected selected=0 after backspace, got %d", s.selected)
}
})
t.Run("Scroll_DownPastVisibleItems_ScrollsViewport", func(t *testing.T) {
// maxDisplayedItems is 10, so with 15 items we need to scroll
manyItems := make([]selectItem, 15)
for i := range manyItems {
manyItems[i] = selectItem{Name: string(rune('a' + i))}
}
s := newSelectState(manyItems)
// move down 12 times (past the 10-item viewport)
for range 12 {
s.handleInput(eventDown, 0)
}
if s.selected != 12 {
t.Errorf("expected selected=12, got %d", s.selected)
}
if s.scrollOffset != 3 {
t.Errorf("expected scrollOffset=3 (12-10+1), got %d", s.scrollOffset)
}
})
t.Run("Scroll_UpPastScrollOffset_ScrollsViewport", func(t *testing.T) {
manyItems := make([]selectItem, 15)
for i := range manyItems {
manyItems[i] = selectItem{Name: string(rune('a' + i))}
}
s := newSelectState(manyItems)
s.selected = 5
s.scrollOffset = 5
s.handleInput(eventUp, 0)
if s.selected != 4 {
t.Errorf("expected selected=4, got %d", s.selected)
}
if s.scrollOffset != 4 {
t.Errorf("expected scrollOffset=4, got %d", s.scrollOffset)
}
})
}
func TestMultiSelectState(t *testing.T) {
items := []selectItem{
{Name: "item1"},
{Name: "item2"},
{Name: "item3"},
}
t.Run("InitialState_NoPrechecked", func(t *testing.T) {
s := newMultiSelectState(items, nil)
if s.highlighted != 0 {
t.Errorf("expected highlighted=0, got %d", s.highlighted)
}
if s.selectedCount() != 0 {
t.Errorf("expected 0 selected, got %d", s.selectedCount())
}
if s.focusOnButton {
t.Error("expected focusOnButton=false initially")
}
})
t.Run("InitialState_WithPrechecked", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item2", "item3"})
if s.selectedCount() != 2 {
t.Errorf("expected 2 selected, got %d", s.selectedCount())
}
if !s.checked[1] || !s.checked[2] {
t.Error("expected item2 and item3 to be checked")
}
})
t.Run("Prechecked_PreservesSelectionOrder", func(t *testing.T) {
// order matters: first checked = default model
s := newMultiSelectState(items, []string{"item3", "item1"})
if len(s.checkOrder) != 2 {
t.Fatalf("expected 2 in checkOrder, got %d", len(s.checkOrder))
}
if s.checkOrder[0] != 2 || s.checkOrder[1] != 0 {
t.Errorf("expected checkOrder=[2,0] (item3 first), got %v", s.checkOrder)
}
})
t.Run("Prechecked_IgnoresInvalidNames", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1", "nonexistent"})
if s.selectedCount() != 1 {
t.Errorf("expected 1 selected (nonexistent ignored), got %d", s.selectedCount())
}
})
t.Run("Toggle_ChecksUncheckedItem", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.toggleItem()
if !s.checked[0] {
t.Error("expected item1 to be checked after toggle")
}
})
t.Run("Toggle_UnchecksCheckedItem", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
s.toggleItem()
if s.checked[0] {
t.Error("expected item1 to be unchecked after toggle")
}
})
t.Run("Toggle_RemovesFromCheckOrder", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1", "item2", "item3"})
s.highlighted = 1 // toggle item2
s.toggleItem()
if len(s.checkOrder) != 2 {
t.Fatalf("expected 2 in checkOrder, got %d", len(s.checkOrder))
}
// should be [0, 2] (item1, item3) with item2 removed
if s.checkOrder[0] != 0 || s.checkOrder[1] != 2 {
t.Errorf("expected checkOrder=[0,2], got %v", s.checkOrder)
}
})
t.Run("Enter_TogglesWhenNotOnButton", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.handleInput(eventEnter, 0)
if !s.checked[0] {
t.Error("expected item1 to be checked after enter")
}
})
t.Run("Enter_OnButton_ReturnsSelection", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item2", "item1"})
s.focusOnButton = true
done, result, err := s.handleInput(eventEnter, 0)
if !done || err != nil {
t.Errorf("expected done=true, err=nil, got done=%v, err=%v", done, err)
}
// result should preserve selection order
if len(result) != 2 || result[0] != "item2" || result[1] != "item1" {
t.Errorf("expected [item2, item1], got %v", result)
}
})
t.Run("Enter_OnButton_EmptySelection_DoesNothing", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.focusOnButton = true
done, result, err := s.handleInput(eventEnter, 0)
if done || result != nil || err != nil {
t.Errorf("expected (false, nil, nil), got (%v, %v, %v)", done, result, err)
}
})
t.Run("Tab_SwitchesToButton_WhenHasSelection", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
s.handleInput(eventTab, 0)
if !s.focusOnButton {
t.Error("expected focus on button after tab")
}
})
t.Run("Tab_DoesNothing_WhenNoSelection", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.handleInput(eventTab, 0)
if s.focusOnButton {
t.Error("tab should not focus button when nothing selected")
}
})
t.Run("Tab_TogglesButtonFocus", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
s.handleInput(eventTab, 0)
if !s.focusOnButton {
t.Error("expected focus on button after first tab")
}
s.handleInput(eventTab, 0)
if s.focusOnButton {
t.Error("expected focus back on list after second tab")
}
})
t.Run("Escape_ReturnsCancelledError", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
done, result, err := s.handleInput(eventEscape, 0)
if !done || result != nil || err != errCancelled {
t.Errorf("expected (true, nil, errCancelled), got (%v, %v, %v)", done, result, err)
}
})
t.Run("IsDefault_TrueForFirstChecked", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item2", "item1"})
if !(len(s.checkOrder) > 0 && s.checkOrder[0] == 1) {
t.Error("expected item2 (idx 1) to be default (first checked)")
}
if len(s.checkOrder) > 0 && s.checkOrder[0] == 0 {
t.Error("expected item1 (idx 0) to NOT be default")
}
})
t.Run("IsDefault_FalseWhenNothingChecked", func(t *testing.T) {
s := newMultiSelectState(items, nil)
if len(s.checkOrder) > 0 && s.checkOrder[0] == 0 {
t.Error("expected isDefault=false when nothing checked")
}
})
t.Run("Down_MovesHighlight", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.handleInput(eventDown, 0)
if s.highlighted != 1 {
t.Errorf("expected highlighted=1, got %d", s.highlighted)
}
})
t.Run("Up_MovesHighlight", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.highlighted = 1
s.handleInput(eventUp, 0)
if s.highlighted != 0 {
t.Errorf("expected highlighted=0, got %d", s.highlighted)
}
})
t.Run("Arrow_ReturnsFocusFromButton", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
s.focusOnButton = true
s.handleInput(eventDown, 0)
if s.focusOnButton {
t.Error("expected focus to return to list on arrow key")
}
})
t.Run("Char_AppendsToFilter", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.handleInput(eventChar, 'x')
if s.filter != "x" {
t.Errorf("expected filter='x', got %q", s.filter)
}
})
t.Run("Char_ResetsHighlightAndScroll", func(t *testing.T) {
manyItems := make([]selectItem, 15)
for i := range manyItems {
manyItems[i] = selectItem{Name: string(rune('a' + i))}
}
s := newMultiSelectState(manyItems, nil)
s.highlighted = 10
s.scrollOffset = 5
s.handleInput(eventChar, 'x')
if s.highlighted != 0 {
t.Errorf("expected highlighted=0, got %d", s.highlighted)
}
if s.scrollOffset != 0 {
t.Errorf("expected scrollOffset=0, got %d", s.scrollOffset)
}
})
t.Run("Backspace_RemovesLastFilterChar", func(t *testing.T) {
s := newMultiSelectState(items, nil)
s.filter = "test"
s.handleInput(eventBackspace, 0)
if s.filter != "tes" {
t.Errorf("expected filter='tes', got %q", s.filter)
}
})
t.Run("Backspace_RemovesFocusFromButton", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
s.filter = "x"
s.focusOnButton = true
s.handleInput(eventBackspace, 0)
if s.focusOnButton {
t.Error("expected focusOnButton=false after backspace")
}
})
}
func TestParseInput(t *testing.T) {
t.Run("Enter", func(t *testing.T) {
event, char, err := parseInput(bytes.NewReader([]byte{13}))
if err != nil || event != eventEnter || char != 0 {
t.Errorf("expected (eventEnter, 0, nil), got (%v, %v, %v)", event, char, err)
}
})
t.Run("Escape", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{27}))
if err != nil || event != eventEscape {
t.Errorf("expected eventEscape, got %v", event)
}
})
t.Run("CtrlC_TreatedAsEscape", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{3}))
if err != nil || event != eventEscape {
t.Errorf("expected eventEscape for Ctrl+C, got %v", event)
}
})
t.Run("Tab", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{9}))
if err != nil || event != eventTab {
t.Errorf("expected eventTab, got %v", event)
}
})
t.Run("Backspace", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{127}))
if err != nil || event != eventBackspace {
t.Errorf("expected eventBackspace, got %v", event)
}
})
t.Run("UpArrow", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{27, 91, 65}))
if err != nil || event != eventUp {
t.Errorf("expected eventUp, got %v", event)
}
})
t.Run("DownArrow", func(t *testing.T) {
event, _, err := parseInput(bytes.NewReader([]byte{27, 91, 66}))
if err != nil || event != eventDown {
t.Errorf("expected eventDown, got %v", event)
}
})
t.Run("PrintableChars", func(t *testing.T) {
tests := []struct {
name string
char byte
}{
{"lowercase", 'a'},
{"uppercase", 'Z'},
{"digit", '5'},
{"space", ' '},
{"tilde", '~'},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
event, char, err := parseInput(bytes.NewReader([]byte{tt.char}))
if err != nil || event != eventChar || char != tt.char {
t.Errorf("expected (eventChar, %q), got (%v, %q)", tt.char, event, char)
}
})
}
})
}
func TestRenderSelect(t *testing.T) {
items := []selectItem{
{Name: "item1", Description: "first item"},
{Name: "item2"},
}
t.Run("ShowsPromptAndItems", func(t *testing.T) {
s := newSelectState(items)
var buf bytes.Buffer
lineCount := renderSelect(&buf, "Select:", s)
output := buf.String()
if !strings.Contains(output, "Select:") {
t.Error("expected prompt in output")
}
if !strings.Contains(output, "item1") {
t.Error("expected item1 in output")
}
if !strings.Contains(output, "first item") {
t.Error("expected description in output")
}
if !strings.Contains(output, "item2") {
t.Error("expected item2 in output")
}
if lineCount != 3 { // 1 prompt + 2 items
t.Errorf("expected 3 lines, got %d", lineCount)
}
})
t.Run("EmptyFilteredList_ShowsNoMatches", func(t *testing.T) {
s := newSelectState(items)
s.filter = "xyz"
var buf bytes.Buffer
renderSelect(&buf, "Select:", s)
output := buf.String()
if !strings.Contains(output, "no matches") {
t.Errorf("expected 'no matches' message, got: %s", output)
}
})
t.Run("EmptyFilteredList_EmptyFilter_ShowsNoMatches", func(t *testing.T) {
s := newSelectState([]selectItem{})
var buf bytes.Buffer
renderSelect(&buf, "Select:", s)
if !strings.Contains(buf.String(), "no matches") {
t.Error("expected 'no matches' message for empty list with no filter")
}
})
t.Run("LongList_ShowsRemainingCount", func(t *testing.T) {
manyItems := make([]selectItem, 15)
for i := range manyItems {
manyItems[i] = selectItem{Name: string(rune('a' + i))}
}
s := newSelectState(manyItems)
var buf bytes.Buffer
renderSelect(&buf, "Select:", s)
// 15 items - 10 displayed = 5 more
if !strings.Contains(buf.String(), "5 more") {
t.Error("expected '5 more' indicator")
}
})
}
func TestRenderMultiSelect(t *testing.T) {
items := []selectItem{
{Name: "item1"},
{Name: "item2"},
}
t.Run("ShowsCheckboxes", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
var buf bytes.Buffer
renderMultiSelect(&buf, "Select:", s)
output := buf.String()
if !strings.Contains(output, "[x]") {
t.Error("expected checked checkbox [x]")
}
if !strings.Contains(output, "[ ]") {
t.Error("expected unchecked checkbox [ ]")
}
})
t.Run("ShowsDefaultMarker", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1"})
var buf bytes.Buffer
renderMultiSelect(&buf, "Select:", s)
if !strings.Contains(buf.String(), "(default)") {
t.Error("expected (default) marker for first checked item")
}
})
t.Run("ShowsSelectedCount", func(t *testing.T) {
s := newMultiSelectState(items, []string{"item1", "item2"})
var buf bytes.Buffer
renderMultiSelect(&buf, "Select:", s)
if !strings.Contains(buf.String(), "2 selected") {
t.Error("expected '2 selected' in output")
}
})
t.Run("NoSelection_ShowsHelperText", func(t *testing.T) {
s := newMultiSelectState(items, nil)
var buf bytes.Buffer
renderMultiSelect(&buf, "Select:", s)
if !strings.Contains(buf.String(), "Select at least one") {
t.Error("expected 'Select at least one' helper text")
}
})
}
func TestErrCancelled(t *testing.T) {
t.Run("NotNil", func(t *testing.T) {
if errCancelled == nil {
@@ -678,255 +17,3 @@ func TestErrCancelled(t *testing.T) {
}
})
}
// Edge case tests for selector.go
// TestSelectState_SingleItem verifies that single item list works without crash.
// List with only one item should still work.
func TestSelectState_SingleItem(t *testing.T) {
items := []selectItem{{Name: "only-one"}}
s := newSelectState(items)
// Down should do nothing (already at bottom)
s.handleInput(eventDown, 0)
if s.selected != 0 {
t.Errorf("down on single item: expected selected=0, got %d", s.selected)
}
// Up should do nothing (already at top)
s.handleInput(eventUp, 0)
if s.selected != 0 {
t.Errorf("up on single item: expected selected=0, got %d", s.selected)
}
// Enter should select the only item
done, result, err := s.handleInput(eventEnter, 0)
if !done || result != "only-one" || err != nil {
t.Errorf("enter on single item: expected (true, 'only-one', nil), got (%v, %q, %v)", done, result, err)
}
}
// TestSelectState_ExactlyMaxItems verifies boundary condition at maxDisplayedItems.
// List with exactly maxDisplayedItems items should not scroll.
func TestSelectState_ExactlyMaxItems(t *testing.T) {
items := make([]selectItem, maxDisplayedItems)
for i := range items {
items[i] = selectItem{Name: string(rune('a' + i))}
}
s := newSelectState(items)
// Move to last item
for range maxDisplayedItems - 1 {
s.handleInput(eventDown, 0)
}
if s.selected != maxDisplayedItems-1 {
t.Errorf("expected selected=%d, got %d", maxDisplayedItems-1, s.selected)
}
// Should not scroll when exactly at max
if s.scrollOffset != 0 {
t.Errorf("expected scrollOffset=0 for exactly maxDisplayedItems, got %d", s.scrollOffset)
}
// One more down should do nothing
s.handleInput(eventDown, 0)
if s.selected != maxDisplayedItems-1 {
t.Errorf("down at max: expected selected=%d, got %d", maxDisplayedItems-1, s.selected)
}
}
// TestFilterItems_RegexSpecialChars verifies that filter is literal, not regex.
// User typing "model.v1" shouldn't match "modelsv1".
func TestFilterItems_RegexSpecialChars(t *testing.T) {
items := []selectItem{
{Name: "model.v1"},
{Name: "modelsv1"},
{Name: "model-v1"},
}
// Filter with dot should only match literal dot
result := filterItems(items, "model.v1")
if len(result) != 1 {
t.Errorf("expected 1 exact match, got %d", len(result))
}
if len(result) > 0 && result[0].Name != "model.v1" {
t.Errorf("expected 'model.v1', got %s", result[0].Name)
}
// Other regex special chars should be literal too
items2 := []selectItem{
{Name: "test[0]"},
{Name: "test0"},
{Name: "test(1)"},
}
result2 := filterItems(items2, "test[0]")
if len(result2) != 1 || result2[0].Name != "test[0]" {
t.Errorf("expected only 'test[0]', got %v", result2)
}
}
// TestMultiSelectState_DuplicateNames documents handling of duplicate item names.
// itemIndex uses name as key - duplicates cause collision. This documents
// the current behavior: the last index for a duplicate name is stored.
func TestMultiSelectState_DuplicateNames(t *testing.T) {
// Duplicate names - this is an edge case that shouldn't happen in practice
items := []selectItem{
{Name: "duplicate"},
{Name: "duplicate"},
{Name: "unique"},
}
s := newMultiSelectState(items, nil)
// DOCUMENTED BEHAVIOR: itemIndex maps name to LAST index
// When there are duplicates, only the last occurrence's index is stored
if s.itemIndex["duplicate"] != 1 {
t.Errorf("itemIndex should map 'duplicate' to last index (1), got %d", s.itemIndex["duplicate"])
}
// Toggle item at highlighted=0 (first "duplicate")
// Due to name collision, toggleItem uses itemIndex["duplicate"] = 1
// So it actually toggles the SECOND duplicate item, not the first
s.toggleItem()
// This documents the potentially surprising behavior:
// We toggled at highlighted=0, but itemIndex lookup returned 1
if !s.checked[1] {
t.Error("toggle should check index 1 (due to name collision in itemIndex)")
}
if s.checked[0] {
t.Log("Note: index 0 is NOT checked, even though highlighted=0 (name collision behavior)")
}
}
// TestSelectState_FilterReducesBelowSelection verifies selection resets when filter reduces list.
// Prevents index-out-of-bounds on next keystroke
func TestSelectState_FilterReducesBelowSelection(t *testing.T) {
items := []selectItem{
{Name: "apple"},
{Name: "banana"},
{Name: "cherry"},
}
s := newSelectState(items)
s.selected = 2 // Select "cherry"
// Type a filter that removes cherry from results
s.handleInput(eventChar, 'a') // Filter to "a" - matches "apple" and "banana"
// Selection should reset to 0
if s.selected != 0 {
t.Errorf("expected selected=0 after filter, got %d", s.selected)
}
filtered := s.filtered()
if len(filtered) != 2 {
t.Errorf("expected 2 filtered items, got %d", len(filtered))
}
}
// TestFilterItems_UnicodeCharacters verifies filtering works with UTF-8.
// Model names might contain unicode characters
func TestFilterItems_UnicodeCharacters(t *testing.T) {
items := []selectItem{
{Name: "llama-日本語"},
{Name: "模型-chinese"},
{Name: "émoji-🦙"},
{Name: "regular-model"},
}
t.Run("filter japanese", func(t *testing.T) {
result := filterItems(items, "日本")
if len(result) != 1 || result[0].Name != "llama-日本語" {
t.Errorf("expected llama-日本語, got %v", result)
}
})
t.Run("filter chinese", func(t *testing.T) {
result := filterItems(items, "模型")
if len(result) != 1 || result[0].Name != "模型-chinese" {
t.Errorf("expected 模型-chinese, got %v", result)
}
})
t.Run("filter emoji", func(t *testing.T) {
result := filterItems(items, "🦙")
if len(result) != 1 || result[0].Name != "émoji-🦙" {
t.Errorf("expected émoji-🦙, got %v", result)
}
})
t.Run("filter accented char", func(t *testing.T) {
result := filterItems(items, "émoji")
if len(result) != 1 || result[0].Name != "émoji-🦙" {
t.Errorf("expected émoji-🦙, got %v", result)
}
})
}
// TestMultiSelectState_FilterReducesBelowHighlight verifies highlight resets when filter reduces list.
func TestMultiSelectState_FilterReducesBelowHighlight(t *testing.T) {
items := []selectItem{
{Name: "apple"},
{Name: "banana"},
{Name: "cherry"},
}
s := newMultiSelectState(items, nil)
s.highlighted = 2 // Highlight "cherry"
// Type a filter that removes cherry
s.handleInput(eventChar, 'a')
if s.highlighted != 0 {
t.Errorf("expected highlighted=0 after filter, got %d", s.highlighted)
}
}
// TestMultiSelectState_EmptyItems verifies handling of empty item list.
// Empty list should be handled gracefully.
func TestMultiSelectState_EmptyItems(t *testing.T) {
s := newMultiSelectState([]selectItem{}, nil)
// Toggle should not panic on empty list
s.toggleItem()
if s.selectedCount() != 0 {
t.Errorf("expected 0 selected for empty list, got %d", s.selectedCount())
}
// Render should handle empty list
var buf bytes.Buffer
lineCount := renderMultiSelect(&buf, "Select:", s)
if lineCount == 0 {
t.Error("renderMultiSelect should produce output even for empty list")
}
if !strings.Contains(buf.String(), "no matches") {
t.Error("expected 'no matches' for empty list")
}
}
// TestSelectState_RenderWithDescriptions verifies rendering items with descriptions.
func TestSelectState_RenderWithDescriptions(t *testing.T) {
items := []selectItem{
{Name: "item1", Description: "First item description"},
{Name: "item2", Description: ""},
{Name: "item3", Description: "Third item"},
}
s := newSelectState(items)
var buf bytes.Buffer
renderSelect(&buf, "Select:", s)
output := buf.String()
if !strings.Contains(output, "First item description") {
t.Error("expected description to be rendered")
}
if !strings.Contains(output, "item2") {
t.Error("expected item without description to be rendered")
}
}

5
cmd/editor_unix.go Normal file
View File

@@ -0,0 +1,5 @@
//go:build !windows
package cmd
const defaultEditor = "vi"

5
cmd/editor_windows.go Normal file
View File

@@ -0,0 +1,5 @@
//go:build windows
package cmd
const defaultEditor = "edit"

View File

@@ -7,6 +7,7 @@ import (
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"regexp"
"slices"
@@ -79,6 +80,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
fmt.Fprintln(os.Stderr, " Ctrl + w Delete the word before the cursor")
fmt.Fprintln(os.Stderr, "")
fmt.Fprintln(os.Stderr, " Ctrl + l Clear the screen")
fmt.Fprintln(os.Stderr, " Ctrl + g Open default editor to compose a prompt")
fmt.Fprintln(os.Stderr, " Ctrl + c Stop the model from responding")
fmt.Fprintln(os.Stderr, " Ctrl + d Exit ollama (/bye)")
fmt.Fprintln(os.Stderr, "")
@@ -147,6 +149,18 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
scanner.Prompt.UseAlt = false
sb.Reset()
continue
case errors.Is(err, readline.ErrEditPrompt):
sb.Reset()
content, err := editInExternalEditor(line)
if err != nil {
fmt.Fprintf(os.Stderr, "error: %v\n", err)
continue
}
if strings.TrimSpace(content) == "" {
continue
}
scanner.Prefill = content
continue
case err != nil:
return err
@@ -598,6 +612,57 @@ func extractFileData(input string) (string, []api.ImageData, error) {
return strings.TrimSpace(input), imgs, nil
}
// editInExternalEditor opens the user's editor on a temp file seeded with
// content and returns the edited text. The editor is resolved in order:
// OLLAMA_EDITOR (via envconfig.Editor), then VISUAL, then EDITOR, then the
// platform default.
func editInExternalEditor(content string) (string, error) {
editor := envconfig.Editor()
if editor == "" {
editor = os.Getenv("VISUAL")
}
if editor == "" {
editor = os.Getenv("EDITOR")
}
if editor == "" {
editor = defaultEditor
}
// Check that the editor binary exists
name := strings.Fields(editor)[0]
if _, err := exec.LookPath(name); err != nil {
return "", fmt.Errorf("editor %q not found, set OLLAMA_EDITOR to the path of your preferred editor", name)
}
tmpFile, err := os.CreateTemp("", "ollama-prompt-*.txt")
if err != nil {
return "", fmt.Errorf("creating temp file: %w", err)
}
defer os.Remove(tmpFile.Name())
if content != "" {
if _, err := tmpFile.WriteString(content); err != nil {
tmpFile.Close()
return "", fmt.Errorf("writing to temp file: %w", err)
}
}
tmpFile.Close()
args := strings.Fields(editor)
args = append(args, tmpFile.Name())
cmd := exec.Command(args[0], args[1:]...)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return "", fmt.Errorf("editor exited with error: %w", err)
}
data, err := os.ReadFile(tmpFile.Name())
if err != nil {
return "", fmt.Errorf("reading temp file: %w", err)
}
return strings.TrimRight(string(data), "\n"), nil
}
func getImageData(filePath string) ([]byte, error) {
file, err := os.Open(filePath)
if err != nil {

109
cmd/tui/confirm.go Normal file
View File

@@ -0,0 +1,109 @@
package tui
import (
"fmt"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
)
var (
confirmActiveStyle = lipgloss.NewStyle().
Bold(true).
Background(lipgloss.AdaptiveColor{Light: "254", Dark: "236"})
confirmInactiveStyle = lipgloss.NewStyle().
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"})
)
type confirmModel struct {
prompt string
yes bool
confirmed bool
cancelled bool
width int
}
func (m confirmModel) Init() tea.Cmd {
return nil
}
func (m confirmModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.WindowSizeMsg:
wasSet := m.width > 0
m.width = msg.Width
if wasSet {
return m, tea.EnterAltScreen
}
return m, nil
case tea.KeyMsg:
switch msg.String() {
case "ctrl+c", "esc", "n":
m.cancelled = true
return m, tea.Quit
case "y":
m.yes = true
m.confirmed = true
return m, tea.Quit
case "enter":
m.confirmed = true
return m, tea.Quit
case "left", "h":
m.yes = true
case "right", "l":
m.yes = false
case "tab":
m.yes = !m.yes
}
}
return m, nil
}
func (m confirmModel) View() string {
if m.confirmed || m.cancelled {
return ""
}
var yesBtn, noBtn string
if m.yes {
yesBtn = confirmActiveStyle.Render(" Yes ")
noBtn = confirmInactiveStyle.Render(" No ")
} else {
yesBtn = confirmInactiveStyle.Render(" Yes ")
noBtn = confirmActiveStyle.Render(" No ")
}
s := selectorTitleStyle.Render(m.prompt) + "\n\n"
s += " " + yesBtn + " " + noBtn + "\n\n"
s += selectorHelpStyle.Render("←/→ navigate • enter confirm • esc cancel")
if m.width > 0 {
return lipgloss.NewStyle().MaxWidth(m.width).Render(s)
}
return s
}
// RunConfirm shows a bubbletea yes/no confirmation prompt.
// Returns true if the user confirmed, false if cancelled.
func RunConfirm(prompt string) (bool, error) {
m := confirmModel{
prompt: prompt,
yes: true, // default to yes
}
p := tea.NewProgram(m)
finalModel, err := p.Run()
if err != nil {
return false, fmt.Errorf("error running confirm: %w", err)
}
fm := finalModel.(confirmModel)
if fm.cancelled {
return false, ErrCancelled
}
return fm.yes, nil
}
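// A hypothetical call site (illustrative only; assumes the caller imports
// "errors" and wants to treat esc/ctrl+c differently from an explicit "No"):
//
//	ok, err := RunConfirm("Download qwen3:8b?")
//	if errors.Is(err, ErrCancelled) {
//		return // user backed out entirely
//	}
//	if ok {
//		// proceed with the download
//	}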

208
cmd/tui/confirm_test.go Normal file
View File

@@ -0,0 +1,208 @@
package tui
import (
"strings"
"testing"
tea "github.com/charmbracelet/bubbletea"
)
func TestConfirmModel_DefaultsToYes(t *testing.T) {
m := confirmModel{prompt: "Download test?", yes: true}
if !m.yes {
t.Error("should default to yes")
}
}
func TestConfirmModel_View_ContainsPrompt(t *testing.T) {
m := confirmModel{prompt: "Download qwen3:8b?", yes: true}
got := m.View()
if !strings.Contains(got, "Download qwen3:8b?") {
t.Error("should contain the prompt text")
}
}
func TestConfirmModel_View_ContainsButtons(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
got := m.View()
if !strings.Contains(got, "Yes") {
t.Error("should contain Yes button")
}
if !strings.Contains(got, "No") {
t.Error("should contain No button")
}
}
func TestConfirmModel_View_ContainsHelp(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
got := m.View()
if !strings.Contains(got, "enter confirm") {
t.Error("should contain help text")
}
}
func TestConfirmModel_View_ClearsAfterConfirm(t *testing.T) {
m := confirmModel{prompt: "Download?", confirmed: true}
if m.View() != "" {
t.Error("View should return empty string after confirmation")
}
}
func TestConfirmModel_View_ClearsAfterCancel(t *testing.T) {
m := confirmModel{prompt: "Download?", cancelled: true}
if m.View() != "" {
t.Error("View should return empty string after cancellation")
}
}
func TestConfirmModel_EnterConfirmsYes(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyEnter})
fm := updated.(confirmModel)
if !fm.confirmed {
t.Error("enter should set confirmed=true")
}
if !fm.yes {
t.Error("enter with yes selected should keep yes=true")
}
if cmd == nil {
t.Error("enter should return tea.Quit")
}
}
func TestConfirmModel_EnterConfirmsNo(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: false}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyEnter})
fm := updated.(confirmModel)
if !fm.confirmed {
t.Error("enter should set confirmed=true")
}
if fm.yes {
t.Error("enter with no selected should keep yes=false")
}
if cmd == nil {
t.Error("enter should return tea.Quit")
}
}
func TestConfirmModel_EscCancels(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyEsc})
fm := updated.(confirmModel)
if !fm.cancelled {
t.Error("esc should set cancelled=true")
}
if cmd == nil {
t.Error("esc should return tea.Quit")
}
}
func TestConfirmModel_CtrlCCancels(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyCtrlC})
fm := updated.(confirmModel)
if !fm.cancelled {
t.Error("ctrl+c should set cancelled=true")
}
if cmd == nil {
t.Error("ctrl+c should return tea.Quit")
}
}
func TestConfirmModel_NCancels(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'n'}})
fm := updated.(confirmModel)
if !fm.cancelled {
t.Error("'n' should set cancelled=true")
}
if cmd == nil {
t.Error("'n' should return tea.Quit")
}
}
func TestConfirmModel_YConfirmsYes(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: false}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'y'}})
fm := updated.(confirmModel)
if !fm.confirmed {
t.Error("'y' should set confirmed=true")
}
if !fm.yes {
t.Error("'y' should set yes=true")
}
if cmd == nil {
t.Error("'y' should return tea.Quit")
}
}
func TestConfirmModel_ArrowKeysNavigate(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
// Right moves to No
updated, _ := m.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'l'}})
fm := updated.(confirmModel)
if fm.yes {
t.Error("right/l should move to No")
}
if fm.confirmed || fm.cancelled {
t.Error("navigation should not confirm or cancel")
}
// Left moves back to Yes
updated, _ = fm.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'h'}})
fm = updated.(confirmModel)
if !fm.yes {
t.Error("left/h should move to Yes")
}
}
func TestConfirmModel_TabToggles(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true}
updated, _ := m.Update(tea.KeyMsg{Type: tea.KeyTab})
fm := updated.(confirmModel)
if fm.yes {
t.Error("tab should toggle from Yes to No")
}
updated, _ = fm.Update(tea.KeyMsg{Type: tea.KeyTab})
fm = updated.(confirmModel)
if !fm.yes {
t.Error("tab should toggle from No to Yes")
}
}
func TestConfirmModel_WindowSizeUpdatesWidth(t *testing.T) {
m := confirmModel{prompt: "Download?"}
updated, _ := m.Update(tea.WindowSizeMsg{Width: 100, Height: 40})
fm := updated.(confirmModel)
if fm.width != 100 {
t.Errorf("expected width 100, got %d", fm.width)
}
}
func TestConfirmModel_ResizeEntersAltScreen(t *testing.T) {
m := confirmModel{prompt: "Download?", width: 80}
_, cmd := m.Update(tea.WindowSizeMsg{Width: 100, Height: 40})
if cmd == nil {
t.Error("resize (width already set) should return a command")
}
}
func TestConfirmModel_InitialWindowSizeNoAltScreen(t *testing.T) {
m := confirmModel{prompt: "Download?"}
_, cmd := m.Update(tea.WindowSizeMsg{Width: 80, Height: 40})
if cmd != nil {
t.Error("initial WindowSizeMsg should not return a command")
}
}
func TestConfirmModel_ViewMaxWidth(t *testing.T) {
m := confirmModel{prompt: "Download?", yes: true, width: 40}
got := m.View()
// Just ensure it doesn't panic and returns content
if got == "" {
t.Error("View with width set should still return content")
}
}

View File

@@ -7,6 +7,7 @@ import (
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"github.com/ollama/ollama/cmd/config"
)
var (
@@ -18,35 +19,38 @@ var (
selectorSelectedItemStyle = lipgloss.NewStyle().
PaddingLeft(2).
Bold(true).
Background(lipgloss.AdaptiveColor{Light: "254", Dark: "236"})
selectorDescStyle = lipgloss.NewStyle().
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"})
selectorDescLineStyle = selectorDescStyle.
PaddingLeft(6)
selectorFilterStyle = lipgloss.NewStyle().
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"}).
Italic(true)
selectorInputStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("252"))
selectorCheckboxStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("241"))
selectorCheckboxCheckedStyle = lipgloss.NewStyle().
Bold(true).
Foreground(lipgloss.AdaptiveColor{Light: "235", Dark: "252"})
selectorDefaultTagStyle = lipgloss.NewStyle().
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"}).
Italic(true)
selectorHelpStyle = lipgloss.NewStyle().
Foreground(lipgloss.AdaptiveColor{Light: "244", Dark: "244"})
selectorMoreStyle = lipgloss.NewStyle().
PaddingLeft(6).
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"}).
Italic(true)
sectionHeaderStyle = lipgloss.NewStyle().
PaddingLeft(2).
Bold(true).
Foreground(lipgloss.AdaptiveColor{Light: "240", Dark: "249"})
)
const maxSelectorItems = 10
@@ -54,10 +58,34 @@ const maxSelectorItems = 10
// ErrCancelled is returned when the user cancels the selection.
var ErrCancelled = errors.New("cancelled")
// SelectItem represents an item that can be selected.
type SelectItem struct {
Name string
Description string
Recommended bool
}
// ConvertItems converts config.ModelItem slice to SelectItem slice.
func ConvertItems(items []config.ModelItem) []SelectItem {
out := make([]SelectItem, len(items))
for i, item := range items {
out[i] = SelectItem{Name: item.Name, Description: item.Description, Recommended: item.Recommended}
}
return out
}
// ReorderItems returns a copy with recommended items first, then non-recommended,
// preserving relative order within each group. This ensures the data order matches
// the visual section layout (Recommended / More).
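// For example, [a(rec) b c(rec) d] reorders to [a(rec) c(rec) b d].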
func ReorderItems(items []SelectItem) []SelectItem {
var rec, other []SelectItem
for _, item := range items {
if item.Recommended {
rec = append(rec, item)
} else {
other = append(other, item)
}
}
return append(rec, other...)
}
// selectorModel is the bubbletea model for single selection.
@@ -69,6 +97,8 @@ type selectorModel struct {
scrollOffset int
selected string
cancelled bool
helpText string
width int
}
func (m selectorModel) filteredItems() []SelectItem {
@@ -89,83 +119,153 @@ func (m selectorModel) Init() tea.Cmd {
return nil
}
// otherStart returns the index of the first non-recommended item in the filtered list.
// When filtering, all items scroll together so this returns 0.
func (m selectorModel) otherStart() int {
if m.filter != "" {
return 0
}
filtered := m.filteredItems()
for i, item := range filtered {
if !item.Recommended {
return i
}
}
return len(filtered)
}
// updateNavigation handles navigation keys (up/down/pgup/pgdown/filter/backspace).
// It does NOT handle Enter, Esc, or CtrlC. This is used by both the standalone
// selector and the TUI modal (which intercepts Enter/Esc for its own logic).
func (m *selectorModel) updateNavigation(msg tea.KeyMsg) {
filtered := m.filteredItems()
otherStart := m.otherStart()
switch msg.Type {
case tea.KeyUp:
if m.cursor > 0 {
m.cursor--
m.updateScroll(otherStart)
}
case tea.KeyDown:
if m.cursor < len(filtered)-1 {
m.cursor++
m.updateScroll(otherStart)
}
case tea.KeyPgUp:
m.cursor -= maxSelectorItems
if m.cursor < 0 {
m.cursor = 0
}
m.updateScroll(otherStart)
case tea.KeyPgDown:
m.cursor += maxSelectorItems
if m.cursor >= len(filtered) {
m.cursor = len(filtered) - 1
}
m.updateScroll(otherStart)
case tea.KeyBackspace:
if len(m.filter) > 0 {
m.filter = m.filter[:len(m.filter)-1]
m.cursor = 0
m.scrollOffset = 0
}
case tea.KeyRunes:
m.filter += string(msg.Runes)
m.cursor = 0
m.scrollOffset = 0
}
}
// updateScroll adjusts scrollOffset based on cursor position.
// When not filtering, scrollOffset is relative to the "More" (non-recommended) section.
// When filtering, it's relative to the full filtered list.
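// Example: with 3 pinned recommended items, otherStart is 3, so the "More"
// viewport shows maxSelectorItems-3 = 7 rows and scrollOffset indexes into
// the non-recommended items only.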
func (m *selectorModel) updateScroll(otherStart int) {
if m.filter != "" {
if m.cursor < m.scrollOffset {
m.scrollOffset = m.cursor
}
if m.cursor >= m.scrollOffset+maxSelectorItems {
m.scrollOffset = m.cursor - maxSelectorItems + 1
}
return
}
// Cursor is in recommended section — reset "More" scroll to top
if m.cursor < otherStart {
m.scrollOffset = 0
return
}
// Cursor is in "More" section — scroll relative to others
posInOthers := m.cursor - otherStart
maxOthers := maxSelectorItems - otherStart
if maxOthers < 3 {
maxOthers = 3
}
if posInOthers < m.scrollOffset {
m.scrollOffset = posInOthers
}
if posInOthers >= m.scrollOffset+maxOthers {
m.scrollOffset = posInOthers - maxOthers + 1
}
}
func (m selectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.WindowSizeMsg:
wasSet := m.width > 0
m.width = msg.Width
if wasSet {
return m, tea.EnterAltScreen
}
return m, nil
case tea.KeyMsg:
switch msg.Type {
case tea.KeyCtrlC, tea.KeyEsc:
m.cancelled = true
return m, tea.Quit
case tea.KeyEnter:
filtered := m.filteredItems()
if len(filtered) > 0 && m.cursor < len(filtered) {
m.selected = filtered[m.cursor].Name
}
return m, tea.Quit
default:
m.updateNavigation(msg)
}
}
return m, nil
}
func (m selectorModel) renderItem(s *strings.Builder, item SelectItem, idx int) {
if idx == m.cursor {
s.WriteString(selectorSelectedItemStyle.Render("▸ " + item.Name))
} else {
s.WriteString(selectorItemStyle.Render(item.Name))
}
s.WriteString("\n")
if item.Description != "" {
s.WriteString(selectorDescLineStyle.Render(item.Description))
s.WriteString("\n")
}
}
// renderContent renders the selector content (title, items, help text) without
// checking the cancelled/selected state. This is used by both View() (standalone mode)
// and by the TUI modal which embeds a selectorModel.
func (m selectorModel) renderContent() string {
var s strings.Builder
// Title with filter
s.WriteString(selectorTitleStyle.Render(m.title))
s.WriteString(" ")
if m.filter == "" {
@@ -180,42 +280,91 @@ func (m selectorModel) View() string {
if len(filtered) == 0 {
s.WriteString(selectorItemStyle.Render(selectorDescStyle.Render("(no matches)")))
s.WriteString("\n")
} else if m.filter != "" {
s.WriteString(sectionHeaderStyle.Render("Top Results"))
s.WriteString("\n")
displayCount := min(len(filtered), maxSelectorItems)
for i := range displayCount {
idx := m.scrollOffset + i
if idx >= len(filtered) {
break
}
m.renderItem(&s, filtered[idx], idx)
}
if remaining := len(filtered) - m.scrollOffset - displayCount; remaining > 0 {
s.WriteString(selectorMoreStyle.Render(fmt.Sprintf("... and %d more", remaining)))
s.WriteString("\n")
}
} else {
// Split into pinned recommended and scrollable others
var recItems, otherItems []int
for i, item := range filtered {
if item.Recommended {
recItems = append(recItems, i)
} else {
otherItems = append(otherItems, i)
}
}
// Always render all recommended items (pinned)
if len(recItems) > 0 {
s.WriteString(sectionHeaderStyle.Render("Recommended"))
s.WriteString("\n")
for _, idx := range recItems {
m.renderItem(&s, filtered[idx], idx)
}
}
if len(otherItems) > 0 {
s.WriteString("\n")
s.WriteString(sectionHeaderStyle.Render("More"))
s.WriteString("\n")
maxOthers := maxSelectorItems - len(recItems)
if maxOthers < 3 {
maxOthers = 3
}
displayCount := min(len(otherItems), maxOthers)
for i := range displayCount {
idx := m.scrollOffset + i
if idx >= len(otherItems) {
break
}
m.renderItem(&s, filtered[otherItems[idx]], otherItems[idx])
}
if remaining := len(otherItems) - m.scrollOffset - displayCount; remaining > 0 {
s.WriteString(selectorMoreStyle.Render(fmt.Sprintf("... and %d more", remaining)))
s.WriteString("\n")
}
}
}
s.WriteString("\n")
s.WriteString(selectorHelpStyle.Render("↑/↓ navigate • enter select • esc cancel"))
help := "↑/↓ navigate • enter select • esc cancel"
if m.helpText != "" {
help = m.helpText
}
s.WriteString(selectorHelpStyle.Render(help))
return s.String()
}
func (m selectorModel) View() string {
if m.cancelled || m.selected != "" {
return ""
}
s := m.renderContent()
if m.width > 0 {
return lipgloss.NewStyle().MaxWidth(m.width).Render(s)
}
return s
}
// SelectSingle prompts the user to select a single item from a list.
func SelectSingle(title string, items []SelectItem) (string, error) {
if len(items) == 0 {
return "", fmt.Errorf("no items to select from")
@@ -252,6 +401,7 @@ type multiSelectorModel struct {
checkOrder []int
cancelled bool
confirmed bool
width int
}
func newMultiSelectorModel(title string, items []SelectItem, preChecked []string) multiSelectorModel {
@@ -290,6 +440,50 @@ func (m multiSelectorModel) filteredItems() []SelectItem {
return result
}
// otherStart returns the index of the first non-recommended item in the filtered list.
func (m multiSelectorModel) otherStart() int {
if m.filter != "" {
return 0
}
filtered := m.filteredItems()
for i, item := range filtered {
if !item.Recommended {
return i
}
}
return len(filtered)
}
// updateScroll adjusts scrollOffset for section-based scrolling (matches single-select).
func (m *multiSelectorModel) updateScroll(otherStart int) {
if m.filter != "" {
if m.cursor < m.scrollOffset {
m.scrollOffset = m.cursor
}
if m.cursor >= m.scrollOffset+maxSelectorItems {
m.scrollOffset = m.cursor - maxSelectorItems + 1
}
return
}
if m.cursor < otherStart {
m.scrollOffset = 0
return
}
posInOthers := m.cursor - otherStart
maxOthers := maxSelectorItems - otherStart
if maxOthers < 3 {
maxOthers = 3
}
if posInOthers < m.scrollOffset {
m.scrollOffset = posInOthers
}
if posInOthers >= m.scrollOffset+maxOthers {
m.scrollOffset = posInOthers - maxOthers + 1
}
}
func (m *multiSelectorModel) toggleItem() {
filtered := m.filteredItems()
if len(filtered) == 0 || m.cursor >= len(filtered) {
@@ -323,6 +517,14 @@ func (m multiSelectorModel) Init() tea.Cmd {
func (m multiSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.WindowSizeMsg:
wasSet := m.width > 0
m.width = msg.Width
if wasSet {
return m, tea.EnterAltScreen
}
return m, nil
case tea.KeyMsg:
filtered := m.filteredItems()
@@ -332,30 +534,24 @@ func (m multiSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, tea.Quit
case tea.KeyEnter:
// Enter confirms if at least one item is selected
if len(m.checkOrder) > 0 {
m.confirmed = true
return m, tea.Quit
}
case tea.KeySpace:
// Space always toggles selection
m.toggleItem()
case tea.KeyUp:
if m.cursor > 0 {
m.cursor--
if m.cursor < m.scrollOffset {
m.scrollOffset = m.cursor
}
m.updateScroll(m.otherStart())
}
case tea.KeyDown:
if m.cursor < len(filtered)-1 {
m.cursor++
if m.cursor >= m.scrollOffset+maxSelectorItems {
m.scrollOffset = m.cursor - maxSelectorItems + 1
}
m.updateScroll(m.otherStart())
}
case tea.KeyPgUp:
@@ -363,19 +559,14 @@ func (m multiSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if m.cursor < 0 {
m.cursor = 0
}
m.scrollOffset -= maxSelectorItems
if m.scrollOffset < 0 {
m.scrollOffset = 0
}
m.updateScroll(m.otherStart())
case tea.KeyPgDown:
m.cursor += maxSelectorItems
if m.cursor >= len(filtered) {
m.cursor = len(filtered) - 1
}
if m.cursor >= m.scrollOffset+maxSelectorItems {
m.scrollOffset = m.cursor - maxSelectorItems + 1
}
m.updateScroll(m.otherStart())
case tea.KeyBackspace:
if len(m.filter) > 0 {
@@ -394,15 +585,41 @@ func (m multiSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
}
func (m multiSelectorModel) renderMultiItem(s *strings.Builder, item SelectItem, idx int) {
origIdx := m.itemIndex[item.Name]
var check string
if m.checked[origIdx] {
check = "[x] "
} else {
check = "[ ] "
}
suffix := ""
if len(m.checkOrder) > 0 && m.checkOrder[0] == origIdx {
suffix = " " + selectorDefaultTagStyle.Render("(default)")
}
if idx == m.cursor {
s.WriteString(selectorSelectedItemStyle.Render("▸ " + check + item.Name))
} else {
s.WriteString(selectorItemStyle.Render(check + item.Name))
}
s.WriteString(suffix)
s.WriteString("\n")
if item.Description != "" {
s.WriteString(selectorDescLineStyle.Render(item.Description))
s.WriteString("\n")
}
}
func (m multiSelectorModel) View() string {
// Clear screen when exiting
if m.cancelled || m.confirmed {
return ""
}
var s strings.Builder
// Title with filter
s.WriteString(selectorTitleStyle.Render(m.title))
s.WriteString(" ")
if m.filter == "" {
@@ -417,51 +634,69 @@ func (m multiSelectorModel) View() string {
if len(filtered) == 0 {
s.WriteString(selectorItemStyle.Render(selectorDescStyle.Render("(no matches)")))
s.WriteString("\n")
} else {
} else if m.filter != "" {
// Filtering: flat scroll through all matches
displayCount := min(len(filtered), maxSelectorItems)
for i := range displayCount {
idx := m.scrollOffset + i
if idx >= len(filtered) {
break
}
item := filtered[idx]
origIdx := m.itemIndex[item.Name]
// Checkbox
var checkbox string
if m.checked[origIdx] {
checkbox = selectorCheckboxCheckedStyle.Render("[x]")
} else {
checkbox = selectorCheckboxStyle.Render("[ ]")
}
// Cursor and name
var line string
if idx == m.cursor {
line = selectorSelectedItemStyle.Render("▸ ") + checkbox + " " + selectorSelectedItemStyle.Render(item.Name)
} else {
line = " " + checkbox + " " + item.Name
}
// Default tag
if len(m.checkOrder) > 0 && m.checkOrder[0] == origIdx {
line += " " + selectorDefaultTagStyle.Render("(default)")
}
s.WriteString(line)
s.WriteString("\n")
m.renderMultiItem(&s, filtered[idx], idx)
}
if remaining := len(filtered) - m.scrollOffset - displayCount; remaining > 0 {
s.WriteString(selectorMoreStyle.Render(fmt.Sprintf("... and %d more", remaining)))
s.WriteString("\n")
}
} else {
// Split into pinned recommended and scrollable others (matches single-select layout)
var recItems, otherItems []int
for i, item := range filtered {
if item.Recommended {
recItems = append(recItems, i)
} else {
otherItems = append(otherItems, i)
}
}
// Always render all recommended items (pinned)
if len(recItems) > 0 {
s.WriteString(sectionHeaderStyle.Render("Recommended"))
s.WriteString("\n")
for _, idx := range recItems {
m.renderMultiItem(&s, filtered[idx], idx)
}
}
if len(otherItems) > 0 {
s.WriteString("\n")
s.WriteString(sectionHeaderStyle.Render("More"))
s.WriteString("\n")
maxOthers := maxSelectorItems - len(recItems)
if maxOthers < 3 {
maxOthers = 3
}
displayCount := min(len(otherItems), maxOthers)
for i := range displayCount {
idx := m.scrollOffset + i
if idx >= len(otherItems) {
break
}
m.renderMultiItem(&s, filtered[otherItems[idx]], otherItems[idx])
}
if remaining := len(otherItems) - m.scrollOffset - displayCount; remaining > 0 {
s.WriteString(selectorMoreStyle.Render(fmt.Sprintf("... and %d more", remaining)))
s.WriteString("\n")
}
}
}
s.WriteString("\n")
// Status line
count := m.selectedCount()
if count == 0 {
s.WriteString(selectorDescStyle.Render(" Select at least one model."))
@@ -472,10 +707,13 @@ func (m multiSelectorModel) View() string {
s.WriteString(selectorHelpStyle.Render("↑/↓ navigate • space toggle • enter confirm • esc cancel"))
return s.String()
result := s.String()
if m.width > 0 {
return lipgloss.NewStyle().MaxWidth(m.width).Render(result)
}
return result
}
// SelectMultiple prompts the user to select multiple items from a list.
func SelectMultiple(title string, items []SelectItem, preChecked []string) ([]string, error) {
if len(items) == 0 {
return nil, fmt.Errorf("no items to select from")
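Both selectors now share the same pinned-layout math: recommended items are always drawn, and the "More" list scrolls inside whatever rows remain, never fewer than three. A minimal standalone sketch of that window computation (the function name and demo below are illustrative, not part of the diff):

```go
package main

import "fmt"

const maxSelectorItems = 10

// visibleOthers returns which indices of the scrollable "More" section to
// draw, given how many recommended items are pinned above it.
func visibleOthers(numRec, numOthers, scrollOffset int) []int {
	maxOthers := maxSelectorItems - numRec
	if maxOthers < 3 {
		maxOthers = 3 // always leave at least a few scrollable rows
	}
	count := min(numOthers-scrollOffset, maxOthers)
	if count < 0 {
		count = 0
	}
	rows := make([]int, 0, count)
	for i := range count {
		rows = append(rows, scrollOffset+i)
	}
	return rows
}

func main() {
	// 2 pinned recommended + 10 others, scrolled down by 3:
	fmt.Println(visibleOthers(2, 10, 3)) // [3 4 5 6 7 8 9]
}
```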

cmd/tui/selector_test.go Normal file

@@ -0,0 +1,573 @@
package tui
import (
"strings"
"testing"
tea "github.com/charmbracelet/bubbletea"
)
func items(names ...string) []SelectItem {
var out []SelectItem
for _, n := range names {
out = append(out, SelectItem{Name: n})
}
return out
}
func recItems(names ...string) []SelectItem {
var out []SelectItem
for _, n := range names {
out = append(out, SelectItem{Name: n, Recommended: true})
}
return out
}
func mixedItems() []SelectItem {
return []SelectItem{
{Name: "rec-a", Recommended: true},
{Name: "rec-b", Recommended: true},
{Name: "other-1"},
{Name: "other-2"},
{Name: "other-3"},
{Name: "other-4"},
{Name: "other-5"},
{Name: "other-6"},
{Name: "other-7"},
{Name: "other-8"},
{Name: "other-9"},
{Name: "other-10"},
}
}
func TestFilteredItems(t *testing.T) {
tests := []struct {
name string
items []SelectItem
filter string
want []string
}{
{
name: "no filter returns all",
items: items("alpha", "beta", "gamma"),
filter: "",
want: []string{"alpha", "beta", "gamma"},
},
{
name: "filter matches substring",
items: items("llama3.2", "qwen3:8b", "llama2"),
filter: "llama",
want: []string{"llama3.2", "llama2"},
},
{
name: "filter is case insensitive",
items: items("Qwen3:8b", "llama3.2"),
filter: "QWEN",
want: []string{"Qwen3:8b"},
},
{
name: "no matches",
items: items("alpha", "beta"),
filter: "zzz",
want: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := selectorModel{items: tt.items, filter: tt.filter}
got := m.filteredItems()
var gotNames []string
for _, item := range got {
gotNames = append(gotNames, item.Name)
}
if len(gotNames) != len(tt.want) {
t.Fatalf("got %v, want %v", gotNames, tt.want)
}
for i := range tt.want {
if gotNames[i] != tt.want[i] {
t.Errorf("index %d: got %q, want %q", i, gotNames[i], tt.want[i])
}
}
})
}
}
func TestOtherStart(t *testing.T) {
tests := []struct {
name string
items []SelectItem
filter string
want int
}{
{
name: "all recommended",
items: recItems("a", "b", "c"),
want: 3,
},
{
name: "none recommended",
items: items("a", "b"),
want: 0,
},
{
name: "mixed",
items: []SelectItem{
{Name: "rec-a", Recommended: true},
{Name: "rec-b", Recommended: true},
{Name: "other-1"},
{Name: "other-2"},
},
want: 2,
},
{
name: "empty",
items: nil,
want: 0,
},
{
name: "filtering returns 0",
items: []SelectItem{
{Name: "rec-a", Recommended: true},
{Name: "other-1"},
},
filter: "rec",
want: 0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := selectorModel{items: tt.items, filter: tt.filter}
if got := m.otherStart(); got != tt.want {
t.Errorf("otherStart() = %d, want %d", got, tt.want)
}
})
}
}
func TestUpdateScroll(t *testing.T) {
tests := []struct {
name string
cursor int
offset int
otherStart int
filter string
wantOffset int
}{
{
name: "cursor in recommended resets scroll",
cursor: 1,
offset: 5,
otherStart: 3,
wantOffset: 0,
},
{
name: "cursor at start of others",
cursor: 2,
offset: 0,
otherStart: 2,
wantOffset: 0,
},
{
name: "cursor scrolls down in others",
cursor: 12,
offset: 0,
otherStart: 2,
wantOffset: 3, // posInOthers=10, maxOthers=8, 10-8+1=3
},
{
name: "cursor scrolls up in others",
cursor: 4,
offset: 5,
otherStart: 2,
wantOffset: 2, // posInOthers=2 < offset=5
},
{
name: "filter mode standard scroll down",
cursor: 12,
offset: 0,
filter: "x",
otherStart: 0,
wantOffset: 3, // 12 - 10 + 1 = 3
},
{
name: "filter mode standard scroll up",
cursor: 2,
offset: 5,
filter: "x",
otherStart: 0,
wantOffset: 2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := selectorModel{
cursor: tt.cursor,
scrollOffset: tt.offset,
filter: tt.filter,
}
m.updateScroll(tt.otherStart)
if m.scrollOffset != tt.wantOffset {
t.Errorf("scrollOffset = %d, want %d", m.scrollOffset, tt.wantOffset)
}
})
}
}
func TestRenderContent_SectionHeaders(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: []SelectItem{
{Name: "rec-a", Recommended: true},
{Name: "other-1"},
},
}
content := m.renderContent()
if !strings.Contains(content, "Recommended") {
t.Error("should contain 'Recommended' header")
}
if !strings.Contains(content, "More") {
t.Error("should contain 'More' header")
}
}
func TestRenderContent_FilteredHeader(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: items("alpha", "beta", "alphabet"),
filter: "alpha",
}
content := m.renderContent()
if !strings.Contains(content, "Top Results") {
t.Error("filtered view should contain 'Top Results' header")
}
if strings.Contains(content, "Recommended") {
t.Error("filtered view should not contain 'Recommended' header")
}
}
func TestRenderContent_NoMatches(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: items("alpha"),
filter: "zzz",
}
content := m.renderContent()
if !strings.Contains(content, "(no matches)") {
t.Error("should show '(no matches)' when filter has no results")
}
}
func TestRenderContent_SelectedItemIndicator(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: items("alpha", "beta"),
cursor: 0,
}
content := m.renderContent()
if !strings.Contains(content, "▸") {
t.Error("selected item should have ▸ indicator")
}
}
func TestRenderContent_Description(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: []SelectItem{
{Name: "alpha", Description: "the first letter"},
},
}
content := m.renderContent()
if !strings.Contains(content, "the first letter") {
t.Error("should render item description")
}
}
func TestRenderContent_PinnedRecommended(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: mixedItems(),
// cursor deep in "More" section
cursor: 8,
scrollOffset: 3,
}
content := m.renderContent()
// Recommended items should always be visible (pinned)
if !strings.Contains(content, "rec-a") {
t.Error("recommended items should always be rendered (pinned)")
}
if !strings.Contains(content, "rec-b") {
t.Error("recommended items should always be rendered (pinned)")
}
}
func TestRenderContent_MoreOverflowIndicator(t *testing.T) {
m := selectorModel{
title: "Pick:",
items: mixedItems(), // 2 rec + 10 other = 12 total, maxSelectorItems=10
}
content := m.renderContent()
if !strings.Contains(content, "... and") {
t.Error("should show overflow indicator when more items than visible")
}
}
func TestUpdateNavigation_CursorBounds(t *testing.T) {
m := selectorModel{
items: items("a", "b", "c"),
cursor: 0,
}
// Up at top stays at 0
m.updateNavigation(keyMsg(KeyUp))
if m.cursor != 0 {
t.Errorf("cursor should stay at 0 when pressing up at top, got %d", m.cursor)
}
// Down moves to 1
m.updateNavigation(keyMsg(KeyDown))
if m.cursor != 1 {
t.Errorf("cursor should be 1 after down, got %d", m.cursor)
}
// Down to end
m.updateNavigation(keyMsg(KeyDown))
m.updateNavigation(keyMsg(KeyDown))
if m.cursor != 2 {
t.Errorf("cursor should be 2 at bottom, got %d", m.cursor)
}
}
func TestUpdateNavigation_FilterResetsState(t *testing.T) {
m := selectorModel{
items: items("alpha", "beta"),
cursor: 1,
scrollOffset: 5,
}
m.updateNavigation(runeMsg('x'))
if m.filter != "x" {
t.Errorf("filter should be 'x', got %q", m.filter)
}
if m.cursor != 0 {
t.Errorf("cursor should reset to 0 on filter, got %d", m.cursor)
}
if m.scrollOffset != 0 {
t.Errorf("scrollOffset should reset to 0 on filter, got %d", m.scrollOffset)
}
}
func TestUpdateNavigation_Backspace(t *testing.T) {
m := selectorModel{
items: items("alpha"),
filter: "abc",
cursor: 1,
}
m.updateNavigation(keyMsg(KeyBackspace))
if m.filter != "ab" {
t.Errorf("filter should be 'ab' after backspace, got %q", m.filter)
}
if m.cursor != 0 {
t.Errorf("cursor should reset to 0 on backspace, got %d", m.cursor)
}
}
// --- ReorderItems ---
func TestReorderItems(t *testing.T) {
input := []SelectItem{
{Name: "local-1"},
{Name: "rec-a", Recommended: true},
{Name: "local-2"},
{Name: "rec-b", Recommended: true},
}
got := ReorderItems(input)
want := []string{"rec-a", "rec-b", "local-1", "local-2"}
for i, item := range got {
if item.Name != want[i] {
t.Errorf("index %d: got %q, want %q", i, item.Name, want[i])
}
}
}
func TestReorderItems_AllRecommended(t *testing.T) {
input := recItems("a", "b", "c")
got := ReorderItems(input)
if len(got) != 3 {
t.Fatalf("expected 3 items, got %d", len(got))
}
for i, item := range got {
if item.Name != input[i].Name {
t.Errorf("order should be preserved, index %d: got %q, want %q", i, item.Name, input[i].Name)
}
}
}
func TestReorderItems_NoneRecommended(t *testing.T) {
input := items("x", "y")
got := ReorderItems(input)
if len(got) != 2 || got[0].Name != "x" || got[1].Name != "y" {
t.Errorf("order should be preserved, got %v", got)
}
}
// --- Multi-select otherStart ---
func TestMultiOtherStart(t *testing.T) {
tests := []struct {
name string
items []SelectItem
filter string
want int
}{
{"all recommended", recItems("a", "b"), "", 2},
{"none recommended", items("a", "b"), "", 0},
{"mixed", mixedItems(), "", 2},
{"with filter returns 0", mixedItems(), "other", 0},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := newMultiSelectorModel("test", tt.items, nil)
m.filter = tt.filter
if got := m.otherStart(); got != tt.want {
t.Errorf("otherStart() = %d, want %d", got, tt.want)
}
})
}
}
// --- Multi-select updateScroll ---
func TestMultiUpdateScroll(t *testing.T) {
tests := []struct {
name string
cursor int
offset int
otherStart int
wantOffset int
}{
{"cursor in recommended resets scroll", 1, 5, 3, 0},
{"cursor at start of others", 2, 0, 2, 0},
{"cursor scrolls down in others", 12, 0, 2, 3},
{"cursor scrolls up in others", 4, 5, 2, 2},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := newMultiSelectorModel("test", nil, nil)
m.cursor = tt.cursor
m.scrollOffset = tt.offset
m.updateScroll(tt.otherStart)
if m.scrollOffset != tt.wantOffset {
t.Errorf("scrollOffset = %d, want %d", m.scrollOffset, tt.wantOffset)
}
})
}
}
// --- Multi-select View section headers ---
func TestMultiView_SectionHeaders(t *testing.T) {
m := newMultiSelectorModel("Pick:", []SelectItem{
{Name: "rec-a", Recommended: true},
{Name: "other-1"},
}, nil)
content := m.View()
if !strings.Contains(content, "Recommended") {
t.Error("should contain 'Recommended' header")
}
if !strings.Contains(content, "More") {
t.Error("should contain 'More' header")
}
}
func TestMultiView_CursorIndicator(t *testing.T) {
m := newMultiSelectorModel("Pick:", items("a", "b"), nil)
m.cursor = 0
content := m.View()
if !strings.Contains(content, "▸") {
t.Error("should show ▸ cursor indicator")
}
}
func TestMultiView_CheckedItemShowsX(t *testing.T) {
m := newMultiSelectorModel("Pick:", items("a", "b"), []string{"a"})
content := m.View()
if !strings.Contains(content, "[x]") {
t.Error("checked item should show [x]")
}
if !strings.Contains(content, "[ ]") {
t.Error("unchecked item should show [ ]")
}
}
func TestMultiView_DefaultTag(t *testing.T) {
m := newMultiSelectorModel("Pick:", items("a", "b"), []string{"a"})
content := m.View()
if !strings.Contains(content, "(default)") {
t.Error("first checked item should have (default) tag")
}
}
func TestMultiView_PinnedRecommended(t *testing.T) {
m := newMultiSelectorModel("Pick:", mixedItems(), nil)
m.cursor = 8
m.scrollOffset = 3
content := m.View()
if !strings.Contains(content, "rec-a") {
t.Error("recommended items should always be visible (pinned)")
}
if !strings.Contains(content, "rec-b") {
t.Error("recommended items should always be visible (pinned)")
}
}
func TestMultiView_OverflowIndicator(t *testing.T) {
m := newMultiSelectorModel("Pick:", mixedItems(), nil)
content := m.View()
if !strings.Contains(content, "... and") {
t.Error("should show overflow indicator when more items than visible")
}
}
// Key message helpers for testing
type keyType = int
const (
KeyUp keyType = iota
KeyDown keyType = iota
KeyBackspace keyType = iota
)
func keyMsg(k keyType) tea.KeyMsg {
switch k {
case KeyUp:
return tea.KeyMsg{Type: tea.KeyUp}
case KeyDown:
return tea.KeyMsg{Type: tea.KeyDown}
case KeyBackspace:
return tea.KeyMsg{Type: tea.KeyBackspace}
default:
return tea.KeyMsg{}
}
}
func runeMsg(r rune) tea.KeyMsg {
return tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{r}}
}

cmd/tui/signin.go Normal file

@@ -0,0 +1,128 @@
package tui
import (
"fmt"
"strings"
"time"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"github.com/ollama/ollama/cmd/config"
)
type signInModel struct {
modelName string
signInURL string
spinner int
width int
userName string
cancelled bool
}
func (m signInModel) Init() tea.Cmd {
return tea.Tick(200*time.Millisecond, func(t time.Time) tea.Msg {
return signInTickMsg{}
})
}
func (m signInModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.WindowSizeMsg:
wasSet := m.width > 0
m.width = msg.Width
if wasSet {
return m, tea.EnterAltScreen
}
return m, nil
case tea.KeyMsg:
switch msg.Type {
case tea.KeyCtrlC, tea.KeyEsc:
m.cancelled = true
return m, tea.Quit
}
case signInTickMsg:
m.spinner++
if m.spinner%5 == 0 {
return m, tea.Batch(
tea.Tick(200*time.Millisecond, func(t time.Time) tea.Msg {
return signInTickMsg{}
}),
checkSignIn,
)
}
return m, tea.Tick(200*time.Millisecond, func(t time.Time) tea.Msg {
return signInTickMsg{}
})
case signInCheckMsg:
if msg.signedIn {
m.userName = msg.userName
return m, tea.Quit
}
}
return m, nil
}
func (m signInModel) View() string {
if m.userName != "" {
return ""
}
return renderSignIn(m.modelName, m.signInURL, m.spinner, m.width)
}
func renderSignIn(modelName, signInURL string, spinner, width int) string {
spinnerFrames := []string{"⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"}
frame := spinnerFrames[spinner%len(spinnerFrames)]
urlColor := lipgloss.NewStyle().
Foreground(lipgloss.Color("117"))
urlWrap := lipgloss.NewStyle().PaddingLeft(2)
if width > 4 {
urlWrap = urlWrap.Width(width - 4)
}
var s strings.Builder
fmt.Fprintf(&s, "To use %s, please sign in.\n\n", selectorSelectedItemStyle.Render(modelName))
// Wrap in OSC 8 hyperlink so the entire URL is clickable even when wrapped.
// Padding is outside the hyperlink so spaces don't get underlined.
link := fmt.Sprintf("\033]8;;%s\033\\%s\033]8;;\033\\", signInURL, urlColor.Render(signInURL))
s.WriteString("Navigate to:\n")
s.WriteString(urlWrap.Render(link))
s.WriteString("\n\n")
s.WriteString(lipgloss.NewStyle().Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"}).Render(
frame + " Waiting for sign in to complete..."))
s.WriteString("\n\n")
s.WriteString(selectorHelpStyle.Render("esc cancel"))
return lipgloss.NewStyle().PaddingLeft(2).Render(s.String())
}
// RunSignIn shows a bubbletea sign-in dialog and polls until the user signs in or cancels.
func RunSignIn(modelName, signInURL string) (string, error) {
config.OpenBrowser(signInURL)
m := signInModel{
modelName: modelName,
signInURL: signInURL,
}
p := tea.NewProgram(m)
finalModel, err := p.Run()
if err != nil {
return "", fmt.Errorf("error running sign-in: %w", err)
}
fm := finalModel.(signInModel)
if fm.cancelled {
return "", ErrCancelled
}
return fm.userName, nil
}
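renderSignIn builds the OSC 8 sequence inline; factored into a helper, the escape format it relies on looks like this (a sketch, not code from the diff):

```go
package main

import "fmt"

// hyperlink wraps text in an OSC 8 escape so supporting terminals render it
// as clickable: ESC ] 8 ; ; <url> ST <text> ESC ] 8 ; ; ST
func hyperlink(url, text string) string {
	return fmt.Sprintf("\033]8;;%s\033\\%s\033]8;;\033\\", url, text)
}

func main() {
	fmt.Println("Navigate to: " + hyperlink("https://example.com/signin", "https://example.com/signin"))
}
```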

cmd/tui/signin_test.go Normal file

@@ -0,0 +1,175 @@
package tui
import (
"strings"
"testing"
tea "github.com/charmbracelet/bubbletea"
)
func TestRenderSignIn_ContainsModelName(t *testing.T) {
got := renderSignIn("glm-4.7:cloud", "https://example.com/signin", 0, 80)
if !strings.Contains(got, "glm-4.7:cloud") {
t.Error("should contain model name")
}
if !strings.Contains(got, "please sign in") {
t.Error("should contain sign-in prompt")
}
}
func TestRenderSignIn_ContainsURL(t *testing.T) {
url := "https://ollama.com/connect?key=abc123"
got := renderSignIn("test:cloud", url, 0, 120)
if !strings.Contains(got, url) {
t.Errorf("should contain URL %q", url)
}
}
func TestRenderSignIn_OSC8Hyperlink(t *testing.T) {
url := "https://ollama.com/connect?key=abc123"
got := renderSignIn("test:cloud", url, 0, 120)
// Should contain OSC 8 open sequence with the URL
osc8Open := "\033]8;;" + url + "\033\\"
if !strings.Contains(got, osc8Open) {
t.Error("should contain OSC 8 open sequence with URL")
}
// Should contain OSC 8 close sequence
osc8Close := "\033]8;;\033\\"
if !strings.Contains(got, osc8Close) {
t.Error("should contain OSC 8 close sequence")
}
}
func TestRenderSignIn_ContainsSpinner(t *testing.T) {
got := renderSignIn("test:cloud", "https://example.com", 0, 80)
if !strings.Contains(got, "Waiting for sign in to complete") {
t.Error("should contain waiting message")
}
if !strings.Contains(got, "⠋") {
t.Error("should contain first spinner frame at spinner=0")
}
}
func TestRenderSignIn_SpinnerAdvances(t *testing.T) {
got0 := renderSignIn("test:cloud", "https://example.com", 0, 80)
got1 := renderSignIn("test:cloud", "https://example.com", 1, 80)
if got0 == got1 {
t.Error("different spinner values should produce different output")
}
}
func TestRenderSignIn_ContainsEscHelp(t *testing.T) {
got := renderSignIn("test:cloud", "https://example.com", 0, 80)
if !strings.Contains(got, "esc cancel") {
t.Error("should contain esc cancel help text")
}
}
func TestSignInModel_EscCancels(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyEsc})
fm := updated.(signInModel)
if !fm.cancelled {
t.Error("esc should set cancelled=true")
}
if cmd == nil {
t.Error("esc should return tea.Quit")
}
}
func TestSignInModel_CtrlCCancels(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
}
updated, cmd := m.Update(tea.KeyMsg{Type: tea.KeyCtrlC})
fm := updated.(signInModel)
if !fm.cancelled {
t.Error("ctrl+c should set cancelled=true")
}
if cmd == nil {
t.Error("ctrl+c should return tea.Quit")
}
}
func TestSignInModel_SignedInQuitsClean(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
}
updated, cmd := m.Update(signInCheckMsg{signedIn: true, userName: "alice"})
fm := updated.(signInModel)
if fm.userName != "alice" {
t.Errorf("expected userName 'alice', got %q", fm.userName)
}
if cmd == nil {
t.Error("successful sign-in should return tea.Quit")
}
}
func TestSignInModel_SignedInViewClears(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
userName: "alice",
}
got := m.View()
if got != "" {
t.Errorf("View should return empty string after sign-in, got %q", got)
}
}
func TestSignInModel_NotSignedInContinues(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
}
updated, _ := m.Update(signInCheckMsg{signedIn: false})
fm := updated.(signInModel)
if fm.userName != "" {
t.Error("should not set userName when not signed in")
}
if fm.cancelled {
t.Error("should not cancel when check returns not signed in")
}
}
func TestSignInModel_WindowSizeUpdatesWidth(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
}
updated, _ := m.Update(tea.WindowSizeMsg{Width: 120, Height: 40})
fm := updated.(signInModel)
if fm.width != 120 {
t.Errorf("expected width 120, got %d", fm.width)
}
}
func TestSignInModel_TickAdvancesSpinner(t *testing.T) {
m := signInModel{
modelName: "test:cloud",
signInURL: "https://example.com",
spinner: 0,
}
updated, cmd := m.Update(signInTickMsg{})
fm := updated.(signInModel)
if fm.spinner != 1 {
t.Errorf("expected spinner=1, got %d", fm.spinner)
}
if cmd == nil {
t.Error("tick should return a command")
}
}


@@ -4,8 +4,6 @@ import (
"context"
"errors"
"fmt"
"os/exec"
"runtime"
"strings"
"time"
@@ -17,37 +15,30 @@ import (
)
var (
titleStyle = lipgloss.NewStyle().
Bold(true).
MarginBottom(1)
versionStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("245"))
Foreground(lipgloss.AdaptiveColor{Light: "243", Dark: "250"})
itemStyle = lipgloss.NewStyle().
menuItemStyle = lipgloss.NewStyle().
PaddingLeft(2)
selectedStyle = lipgloss.NewStyle().
PaddingLeft(2).
Bold(true)
menuSelectedItemStyle = lipgloss.NewStyle().
Bold(true).
Background(lipgloss.AdaptiveColor{Light: "254", Dark: "236"})
greyedStyle = lipgloss.NewStyle().
PaddingLeft(2).
Foreground(lipgloss.Color("241"))
menuDescStyle = selectorDescStyle.
PaddingLeft(4)
greyedSelectedStyle = lipgloss.NewStyle().
PaddingLeft(2).
Foreground(lipgloss.Color("243"))
greyedStyle = menuItemStyle.
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"})
descStyle = lipgloss.NewStyle().
PaddingLeft(4).
Foreground(lipgloss.Color("241"))
greyedSelectedStyle = menuSelectedItemStyle.
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"})
modelStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("245"))
Foreground(lipgloss.AdaptiveColor{Light: "243", Dark: "250"})
notInstalledStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("241")).
Foreground(lipgloss.AdaptiveColor{Light: "242", Dark: "246"}).
Italic(true)
)
@@ -55,94 +46,106 @@ type menuItem struct {
title string
description string
integration string // integration name for loading model config, empty if not an integration
isRunModel bool // true for the "Run a model" option
isOthers bool // true for the "Others..." toggle item
isRunModel bool
isOthers bool
}
var mainMenuItems = []menuItem{
{
title: "Run a model",
description: "Start an interactive chat with a local model",
description: "Start an interactive chat with a model",
isRunModel: true,
},
{
title: "Launch Claude Code",
description: "Open Claude Code AI assistant",
description: "Agentic coding across large codebases",
integration: "claude",
},
{
title: "Launch Codex",
description: "Open Codex CLI",
description: "OpenAI's open-source coding agent",
integration: "codex",
},
{
title: "Launch Open Claw",
description: "Open the Open Claw integration",
title: "Launch OpenClaw",
description: "Personal AI with 100+ skills",
integration: "openclaw",
},
}
var othersMenuItem = menuItem{
title: "Others...",
title: "More...",
description: "Show additional integrations",
isOthers: true,
}
// getOtherIntegrations returns the list of other integrations, filtering out
// Codex if it's not installed (since it requires npm install).
// getOtherIntegrations dynamically builds the "Others" list from the integration
// registry, excluding any integrations already present in the pinned mainMenuItems.
func getOtherIntegrations() []menuItem {
return []menuItem{
{
title: "Launch Droid",
description: "Open Droid integration",
integration: "droid",
},
{
title: "Launch Open Code",
description: "Open Open Code integration",
integration: "opencode",
},
{
title: "Launch Pi",
description: "Open Pi coding agent",
integration: "pi",
},
pinned := map[string]bool{
"run": true, // not an integration but in the pinned list
}
for _, item := range mainMenuItems {
if item.integration != "" {
pinned[item.integration] = true
}
}
var others []menuItem
for _, info := range config.ListIntegrationInfos() {
if pinned[info.Name] {
continue
}
desc := info.Description
if desc == "" {
desc = "Open " + info.DisplayName + " integration"
}
others = append(others, menuItem{
title: "Launch " + info.DisplayName,
description: desc,
integration: info.Name,
})
}
return others
}
type model struct {
items []menuItem
cursor int
quitting bool
selected bool // true if user made a selection (enter/space)
changeModel bool // true if user pressed right arrow to change model
showOthers bool // true if "Others..." is expanded
availableModels map[string]bool // cache of available model names
selected bool
changeModel bool
changeModels []string // multi-select result for Editor integrations
showOthers bool
availableModels map[string]bool
err error
// Modal state
showingModal bool // true when model picker modal is visible
modalSelector selectorModel // the selector model for the modal
modalItems []SelectItem // cached items for the modal
showingModal bool
modalSelector selectorModel
modalItems []SelectItem
// Sign-in dialog state
showingSignIn bool // true when sign-in dialog is visible
signInURL string // URL for sign-in
signInModel string // model that requires sign-in
signInSpinner int // spinner frame index
showingMultiModal bool
multiModalSelector multiSelectorModel
showingSignIn bool
signInURL string
signInModel string
signInSpinner int
signInFromModal bool // true if sign-in was triggered from modal (not main menu)
width int // terminal width from WindowSizeMsg
statusMsg string // temporary status message shown near help text
}
// signInTickMsg is sent to animate the sign-in spinner
type signInTickMsg struct{}
// signInCheckMsg is sent to check if sign-in is complete
type signInCheckMsg struct {
signedIn bool
userName string
}
// modelExists checks if a model exists in the cached available models.
type clearStatusMsg struct{}
func (m *model) modelExists(name string) bool {
if m.availableModels == nil || name == "" {
return false
@@ -159,27 +162,52 @@ func (m *model) modelExists(name string) bool {
return false
}
// buildModalItems creates the list of models for the modal selector.
func (m *model) buildModalItems() []SelectItem {
modelItems, _ := config.GetModelItems(context.Background())
var items []SelectItem
for _, item := range modelItems {
items = append(items, SelectItem{Name: item.Name, Description: item.Description})
}
return items
return ReorderItems(ConvertItems(modelItems))
}
// openModelModal opens the model picker modal.
func (m *model) openModelModal() {
func (m *model) openModelModal(currentModel string) {
m.modalItems = m.buildModalItems()
m.modalSelector = selectorModel{
title: "Select model:",
items: m.modalItems,
cursor := 0
if currentModel != "" {
for i, item := range m.modalItems {
if item.Name == currentModel || strings.HasPrefix(item.Name, currentModel+":") || strings.HasPrefix(currentModel, item.Name+":") {
cursor = i
break
}
}
}
m.modalSelector = selectorModel{
title: "Select model:",
items: m.modalItems,
cursor: cursor,
helpText: "↑/↓ navigate • enter select • ← back",
}
m.modalSelector.updateScroll(m.modalSelector.otherStart())
m.showingModal = true
}
// isCloudModel returns true if the model name indicates a cloud model.
func (m *model) openMultiModelModal(integration string) {
items := m.buildModalItems()
var preChecked []string
if models := config.IntegrationModels(integration); len(models) > 0 {
preChecked = models
}
m.multiModalSelector = newMultiSelectorModel("Select models:", items, preChecked)
// Set cursor to the first pre-checked (last used) model
if len(preChecked) > 0 {
for i, item := range items {
if item.Name == preChecked[0] {
m.multiModalSelector.cursor = i
m.multiModalSelector.updateScroll(m.multiModalSelector.otherStart())
break
}
}
}
m.showingMultiModal = true
}
func isCloudModel(name string) bool {
return strings.HasSuffix(name, ":cloud")
}
@@ -196,7 +224,7 @@ func (m *model) checkCloudSignIn(modelName string, fromModal bool) tea.Cmd {
}
user, err := client.Whoami(context.Background())
if err == nil && user != nil && user.Name != "" {
return nil // Already signed in
return nil
}
var aErr api.AuthorizationError
if errors.As(err, &aErr) && aErr.SigninURL != "" {
@@ -215,23 +243,13 @@ func (m *model) startSignIn(modelName, signInURL string, fromModal bool) tea.Cmd
m.signInSpinner = 0
m.signInFromModal = fromModal
// Open browser (best effort)
switch runtime.GOOS {
case "darwin":
_ = exec.Command("open", signInURL).Start()
case "linux":
_ = exec.Command("xdg-open", signInURL).Start()
case "windows":
_ = exec.Command("rundll32", "url.dll,FileProtocolHandler", signInURL).Start()
}
config.OpenBrowser(signInURL)
// Start the spinner tick
return tea.Tick(200*time.Millisecond, func(t time.Time) tea.Msg {
return signInTickMsg{}
})
}
// checkSignIn checks if the user has completed sign-in.
func checkSignIn() tea.Msg {
client, err := api.ClientFromEnvironment()
if err != nil {
@@ -244,7 +262,6 @@ func checkSignIn() tea.Msg {
return signInCheckMsg{signedIn: false}
}
// loadAvailableModels fetches and caches the list of available models.
func (m *model) loadAvailableModels() {
m.availableModels = make(map[string]bool)
client, err := api.ClientFromEnvironment()
@@ -266,24 +283,17 @@ func (m *model) buildItems() {
m.items = append(m.items, mainMenuItems...)
if m.showOthers {
// Change "Others..." to "Hide others..."
hideItem := menuItem{
title: "Hide others...",
description: "Hide additional integrations",
isOthers: true,
}
m.items = append(m.items, hideItem)
m.items = append(m.items, others...)
} else {
m.items = append(m.items, othersMenuItem)
}
}
// isOthersIntegration returns true if the integration is in the "Others" menu
func isOthersIntegration(name string) bool {
switch name {
case "droid", "opencode":
return true
for _, item := range getOtherIntegrations() {
if item.integration == name {
return true
}
}
return false
}
@@ -294,7 +304,6 @@ func initialModel() model {
}
m.loadAvailableModels()
// Check last selection to determine if we need to expand "Others"
lastSelection := config.LastSelection()
if isOthersIntegration(lastSelection) {
m.showOthers = true
@@ -302,7 +311,6 @@ func initialModel() model {
m.buildItems()
// Position cursor on last selection
if lastSelection != "" {
for i, item := range m.items {
if lastSelection == "run" && item.isRunModel {
@@ -323,18 +331,29 @@ func (m model) Init() tea.Cmd {
}
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
// Handle sign-in dialog
if wmsg, ok := msg.(tea.WindowSizeMsg); ok {
wasSet := m.width > 0
m.width = wmsg.Width
if wasSet {
return m, tea.EnterAltScreen
}
return m, nil
}
if _, ok := msg.(clearStatusMsg); ok {
m.statusMsg = ""
return m, nil
}
if m.showingSignIn {
switch msg := msg.(type) {
case tea.KeyMsg:
switch msg.Type {
case tea.KeyCtrlC, tea.KeyEsc:
// Cancel sign-in and go back
m.showingSignIn = false
if m.signInFromModal {
m.showingModal = true
}
// If from main menu, just return to main menu (default state)
return m, nil
}
@@ -355,13 +374,10 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case signInCheckMsg:
if msg.signedIn {
// Sign-in complete - proceed with selection
if m.signInFromModal {
// Came from modal - set changeModel
m.modalSelector.selected = m.signInModel
m.changeModel = true
} else {
// Came from main menu - just select
m.selected = true
}
m.quitting = true
@@ -371,13 +387,44 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
}
// Handle modal input if modal is showing
if m.showingMultiModal {
switch msg := msg.(type) {
case tea.KeyMsg:
if msg.Type == tea.KeyLeft {
m.showingMultiModal = false
return m, nil
}
updated, cmd := m.multiModalSelector.Update(msg)
m.multiModalSelector = updated.(multiSelectorModel)
if m.multiModalSelector.cancelled {
m.showingMultiModal = false
return m, nil
}
if m.multiModalSelector.confirmed {
var selected []string
for _, idx := range m.multiModalSelector.checkOrder {
selected = append(selected, m.multiModalSelector.items[idx].Name)
}
if len(selected) > 0 {
m.changeModels = selected
m.changeModel = true
m.quitting = true
return m, tea.Quit
}
m.multiModalSelector.confirmed = false
return m, nil
}
return m, cmd
}
return m, nil
}
if m.showingModal {
switch msg := msg.(type) {
case tea.KeyMsg:
switch msg.Type {
case tea.KeyCtrlC, tea.KeyEsc:
// Close modal without selection
case tea.KeyCtrlC, tea.KeyEsc, tea.KeyLeft:
m.showingModal = false
return m, nil
@@ -390,63 +437,15 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if cmd := m.checkCloudSignIn(m.modalSelector.selected, true); cmd != nil {
return m, cmd
}
// Selection made - exit with changeModel
m.changeModel = true
m.quitting = true
return m, tea.Quit
}
return m, nil
case tea.KeyUp:
if m.modalSelector.cursor > 0 {
m.modalSelector.cursor--
if m.modalSelector.cursor < m.modalSelector.scrollOffset {
m.modalSelector.scrollOffset = m.modalSelector.cursor
}
}
case tea.KeyDown:
filtered := m.modalSelector.filteredItems()
if m.modalSelector.cursor < len(filtered)-1 {
m.modalSelector.cursor++
if m.modalSelector.cursor >= m.modalSelector.scrollOffset+maxSelectorItems {
m.modalSelector.scrollOffset = m.modalSelector.cursor - maxSelectorItems + 1
}
}
case tea.KeyPgUp:
filtered := m.modalSelector.filteredItems()
m.modalSelector.cursor -= maxSelectorItems
if m.modalSelector.cursor < 0 {
m.modalSelector.cursor = 0
}
m.modalSelector.scrollOffset -= maxSelectorItems
if m.modalSelector.scrollOffset < 0 {
m.modalSelector.scrollOffset = 0
}
_ = filtered // suppress unused warning
case tea.KeyPgDown:
filtered := m.modalSelector.filteredItems()
m.modalSelector.cursor += maxSelectorItems
if m.modalSelector.cursor >= len(filtered) {
m.modalSelector.cursor = len(filtered) - 1
}
if m.modalSelector.cursor >= m.modalSelector.scrollOffset+maxSelectorItems {
m.modalSelector.scrollOffset = m.modalSelector.cursor - maxSelectorItems + 1
}
case tea.KeyBackspace:
if len(m.modalSelector.filter) > 0 {
m.modalSelector.filter = m.modalSelector.filter[:len(m.modalSelector.filter)-1]
m.modalSelector.cursor = 0
m.modalSelector.scrollOffset = 0
}
case tea.KeyRunes:
m.modalSelector.filter += string(msg.Runes)
m.modalSelector.cursor = 0
m.modalSelector.scrollOffset = 0
default:
// Delegate navigation (up/down/pgup/pgdown/filter/backspace) to selectorModel
m.modalSelector.updateNavigation(msg)
}
}
return m, nil
@@ -463,32 +462,30 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if m.cursor > 0 {
m.cursor--
}
// Auto-collapse "Others" when cursor moves back into pinned items
if m.showOthers && m.cursor < len(mainMenuItems) {
m.showOthers = false
m.buildItems()
}
case "down", "j":
if m.cursor < len(m.items)-1 {
m.cursor++
}
// Auto-expand "Others..." when cursor lands on it
if m.cursor < len(m.items) && m.items[m.cursor].isOthers && !m.showOthers {
m.showOthers = true
m.buildItems()
// cursor now points at the first "other" integration
}
case "enter", " ":
item := m.items[m.cursor]
// Handle "Others..." toggle
if item.isOthers {
m.showOthers = !m.showOthers
m.buildItems()
// Keep cursor on the Others/Hide item
if m.cursor >= len(m.items) {
m.cursor = len(m.items) - 1
}
return m, nil
}
// Don't allow selecting uninstalled integrations
if item.integration != "" && !config.IsIntegrationInstalled(item.integration) {
return m, nil
}
// Check if a cloud model is configured and needs sign-in
var configuredModel string
if item.isRunModel {
configuredModel = config.LastModel()
@@ -504,14 +501,22 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, tea.Quit
case "right", "l":
// Allow model change for integrations and run model
item := m.items[m.cursor]
if item.integration != "" || item.isRunModel {
// Don't allow for uninstalled integrations
if item.integration != "" && !config.IsIntegrationInstalled(item.integration) {
return m, nil
}
m.openModelModal()
if item.integration != "" && config.IsEditorIntegration(item.integration) {
m.openMultiModelModal(item.integration)
} else {
var currentModel string
if item.isRunModel {
currentModel = config.LastModel()
} else if item.integration != "" {
currentModel = config.IntegrationModel(item.integration)
}
m.openModelModal(currentModel)
}
}
}
}
@@ -524,21 +529,23 @@ func (m model) View() string {
return ""
}
// Render sign-in dialog if showing
if m.showingSignIn {
return m.renderSignInDialog()
}
// Render modal overlay if showing - replaces main view
if m.showingMultiModal {
return m.multiModalSelector.View()
}
if m.showingModal {
return m.renderModal()
}
s := titleStyle.Render(" Ollama "+versionStyle.Render("v"+version.Version)) + "\n\n"
s := selectorTitleStyle.Render("Ollama "+versionStyle.Render(version.Version)) + "\n\n"
for i, item := range m.items {
cursor := " "
style := itemStyle
cursor := ""
style := menuItemStyle
isInstalled := true
if item.integration != "" {
@@ -548,7 +555,7 @@ func (m model) View() string {
if m.cursor == i {
cursor = "▸ "
if isInstalled {
style = selectedStyle
style = menuSelectedItemStyle
} else {
style = greyedSelectedStyle
}
@@ -557,119 +564,62 @@ func (m model) View() string {
}
title := item.title
var modelSuffix string
if item.integration != "" {
if !isInstalled {
title += " " + notInstalledStyle.Render("(not installed)")
} else if mdl := config.IntegrationModel(item.integration); mdl != "" && m.modelExists(mdl) {
title += " " + modelStyle.Render("("+mdl+")")
} else if m.cursor == i {
if mdl := config.IntegrationModel(item.integration); mdl != "" && m.modelExists(mdl) {
modelSuffix = " " + modelStyle.Render("("+mdl+")")
}
}
} else if item.isRunModel {
} else if item.isRunModel && m.cursor == i {
if mdl := config.LastModel(); mdl != "" && m.modelExists(mdl) {
title += " " + modelStyle.Render("("+mdl+")")
modelSuffix = " " + modelStyle.Render("("+mdl+")")
}
}
s += style.Render(cursor+title) + "\n"
s += descStyle.Render(item.description) + "\n\n"
s += style.Render(cursor+title) + modelSuffix + "\n"
desc := item.description
if !isInstalled && item.integration != "" && m.cursor == i {
if hint := config.IntegrationInstallHint(item.integration); hint != "" {
desc = hint
} else {
desc = "not installed"
}
}
s += menuDescStyle.Render(desc) + "\n\n"
}
s += "\n" + lipgloss.NewStyle().Foreground(lipgloss.Color("241")).Render("↑/↓ navigate • enter select • → change model • esc quit")
if m.statusMsg != "" {
s += "\n" + lipgloss.NewStyle().Foreground(lipgloss.AdaptiveColor{Light: "124", Dark: "210"}).Render(m.statusMsg) + "\n"
}
s += "\n" + selectorHelpStyle.Render("↑/↓ navigate • enter launch • → change model • esc quit")
if m.width > 0 {
return lipgloss.NewStyle().MaxWidth(m.width).Render(s)
}
return s
}
// renderModal renders the model picker modal.
func (m model) renderModal() string {
modalStyle := lipgloss.NewStyle().
Border(lipgloss.RoundedBorder()).
BorderForeground(lipgloss.Color("245")).
Padding(1, 2).
MarginLeft(2)
PaddingBottom(1).
PaddingRight(2)
var content strings.Builder
// Title with filter
content.WriteString(selectorTitleStyle.Render(m.modalSelector.title))
content.WriteString(" ")
if m.modalSelector.filter == "" {
content.WriteString(selectorFilterStyle.Render("Type to filter..."))
} else {
content.WriteString(selectorInputStyle.Render(m.modalSelector.filter))
s := modalStyle.Render(m.modalSelector.renderContent())
if m.width > 0 {
return lipgloss.NewStyle().MaxWidth(m.width).Render(s)
}
content.WriteString("\n\n")
filtered := m.modalSelector.filteredItems()
if len(filtered) == 0 {
content.WriteString(selectorItemStyle.Render(selectorDescStyle.Render("(no matches)")))
content.WriteString("\n")
} else {
displayCount := min(len(filtered), maxSelectorItems)
for i := range displayCount {
idx := m.modalSelector.scrollOffset + i
if idx >= len(filtered) {
break
}
item := filtered[idx]
if idx == m.modalSelector.cursor {
content.WriteString(selectorSelectedItemStyle.Render("▸ " + item.Name))
} else {
content.WriteString(selectorItemStyle.Render(item.Name))
}
if item.Description != "" {
content.WriteString(" ")
content.WriteString(selectorDescStyle.Render("- " + item.Description))
}
content.WriteString("\n")
}
if remaining := len(filtered) - m.modalSelector.scrollOffset - displayCount; remaining > 0 {
content.WriteString(selectorMoreStyle.Render(fmt.Sprintf("... and %d more", remaining)))
content.WriteString("\n")
}
}
content.WriteString("\n")
content.WriteString(selectorHelpStyle.Render("↑/↓ navigate • enter select • esc cancel"))
return modalStyle.Render(content.String())
return s
}
// renderSignInDialog renders the sign-in dialog.
func (m model) renderSignInDialog() string {
dialogStyle := lipgloss.NewStyle().
Border(lipgloss.RoundedBorder()).
BorderForeground(lipgloss.Color("245")).
Padding(1, 2).
MarginLeft(2)
spinnerFrames := []string{"⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"}
spinner := spinnerFrames[m.signInSpinner%len(spinnerFrames)]
var content strings.Builder
content.WriteString(selectorTitleStyle.Render("Sign in required"))
content.WriteString("\n\n")
content.WriteString(fmt.Sprintf("To use %s, please sign in.\n\n", selectedStyle.Render(m.signInModel)))
content.WriteString("Navigate to:\n")
content.WriteString(lipgloss.NewStyle().Foreground(lipgloss.Color("117")).Render(" " + m.signInURL))
content.WriteString("\n\n")
content.WriteString(lipgloss.NewStyle().Foreground(lipgloss.Color("241")).Render(
fmt.Sprintf("%s Waiting for sign in to complete...", spinner)))
content.WriteString("\n\n")
content.WriteString(selectorHelpStyle.Render("esc cancel"))
return dialogStyle.Render(content.String())
return renderSignIn(m.signInModel, m.signInURL, m.signInSpinner, m.width)
}
// Selection represents what the user selected
type Selection int
const (
@@ -680,14 +630,13 @@ const (
SelectionChangeIntegration // Generic change model for integration
)
// Result contains the selection and any associated data
type Result struct {
Selection Selection
Integration string // integration name if applicable
Model string // model name if selected from modal
Integration string // integration name if applicable
Model string // model name if selected from single-select modal
Models []string // models selected from multi-select modal (Editor integrations)
}
// Run starts the TUI and returns the user's selection
func Run() (Result, error) {
m := initialModel()
p := tea.NewProgram(m)
@@ -702,14 +651,12 @@ func Run() (Result, error) {
return Result{Selection: SelectionNone}, fm.err
}
// User quit without selecting
if !fm.selected && !fm.changeModel {
return Result{Selection: SelectionNone}, nil
}
item := fm.items[fm.cursor]
// Handle model change request
if fm.changeModel {
if item.isRunModel {
return Result{
@@ -721,10 +668,10 @@ func Run() (Result, error) {
Selection: SelectionChangeIntegration,
Integration: item.integration,
Model: fm.modalSelector.selected,
Models: fm.changeModels,
}, nil
}
// Handle selection
if item.isRunModel {
return Result{Selection: SelectionRunModel}, nil
}


@@ -596,6 +596,15 @@ components:
name:
type: string
description: Model name
model:
type: string
description: Model name
remote_model:
type: string
description: Name of the upstream model, if the model is remote
remote_host:
type: string
description: URL of the upstream Ollama host, if the model is remote
modified_at:
type: string
description: Last modified timestamp in ISO 8601 format
@@ -636,6 +645,9 @@ components:
Ps:
type: object
properties:
name:
type: string
description: Name of the running model
model:
type: string
description: Name of the running model
@@ -1137,6 +1149,7 @@ paths:
example:
models:
- name: "gemma3"
model: "gemma3"
modified_at: "2025-10-03T23:34:03.409490317-07:00"
size: 3338801804
digest: "a2af6cc3eb7fa8be8504abaf9b04e88f17a119ec3f04a3addf55f92841195f5a"
@@ -1168,7 +1181,8 @@ paths:
$ref: "#/components/schemas/PsResponse"
example:
models:
- model: "gemma3"
- name: "gemma3"
model: "gemma3"
size: 6591830464
digest: "a2af6cc3eb7fa8be8504abaf9b04e88f17a119ec3f04a3addf55f92841195f5a"
details:
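The added fields map directly onto a client-side struct; a sketch with the JSON tags taken from the schema above (struct and package names are assumptions, not the api package's own declarations):

```go
package apitypes

import "time"

// TagModel mirrors the fields now documented for /api/tags entries.
type TagModel struct {
	Name        string    `json:"name"`
	Model       string    `json:"model"`
	RemoteModel string    `json:"remote_model,omitempty"` // upstream model name, if remote
	RemoteHost  string    `json:"remote_host,omitempty"`  // upstream Ollama host URL, if remote
	ModifiedAt  time.Time `json:"modified_at"`
	Size        int64     `json:"size"`
	Digest      string    `json:"digest"`
}
```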


@@ -2,7 +2,7 @@
title: Quickstart
---
This quickstart will walk you through running your first model with Ollama. To get started, download Ollama for macOS, Windows, or Linux.
Ollama is available on macOS, Windows, and Linux.
<a
href="https://ollama.com/download"
@@ -12,131 +12,48 @@ This quickstart will walk your through running your first model with Ollama. To
Download Ollama
</a>
## Run a model
## Get Started
<Tabs>
<Tab title="CLI">
Open a terminal and run the command:
```sh
ollama run gemma3
```
</Tab>
<Tab title="cURL">
Start by downloading a model:
```sh
ollama pull gemma3
```
Lastly, chat with the model:
```shell
curl http://localhost:11434/api/chat -d '{
"model": "gemma3",
"messages": [{
"role": "user",
"content": "Hello there!"
}],
"stream": false
}'
```
</Tab>
<Tab title="Python">
Start by downloading a model:
```sh
ollama pull gemma3
```
Then install Ollama's Python library:
```sh
pip install ollama
```
Lastly, chat with the model:
```python
from ollama import chat
from ollama import ChatResponse
response: ChatResponse = chat(model='gemma3', messages=[
{
'role': 'user',
'content': 'Why is the sky blue?',
},
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)
```
</Tab>
<Tab title="JavaScript">
Start by downloading a model:
```
ollama pull gemma3
```
Then install the Ollama JavaScript library:
```
npm i ollama
```
Lastly, chat with the model:
```javascript
import ollama from 'ollama'
const response = await ollama.chat({
model: 'gemma3',
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```
</Tab>
</Tabs>
See a full list of available models [here](https://ollama.com/models).
## Coding
For coding use cases, we recommend using the `glm-4.7-flash` model.
Note: this model requires 23 GB of VRAM at a 64,000-token context length.
```sh
ollama pull glm-4.7-flash
```
Alternatively, you can use a more powerful cloud model (with full context length):
```sh
ollama pull glm-4.7:cloud
```
Use `ollama launch` to quickly set up a coding tool with Ollama models:
Run `ollama` in your terminal to open the interactive menu:
```sh
ollama launch
ollama
```
### Supported integrations
- [OpenCode](/integrations/opencode) - Open-source coding assistant
- [Claude Code](/integrations/claude-code) - Anthropic's agentic coding tool
- [Codex](/integrations/codex) - OpenAI's coding assistant
- [Droid](/integrations/droid) - Factory's AI coding agent
Navigate with `↑/↓`, press `enter` to launch, `→` to change model, and `esc` to quit.
The menu provides quick access to:
- **Run a model** - Start an interactive chat
- **Launch tools** - Claude Code, Codex, OpenClaw, and more
- **Additional integrations** - Available under "More..."
### Launch with a specific model
```sh
ollama launch claude --model glm-4.7-flash
```
### Configure without launching
```sh
ollama launch claude --config
```
## Coding
Launch coding tools with Ollama models:
```sh
ollama launch claude
```
```sh
ollama launch codex
```
```sh
ollama launch opencode
```
See [integrations](/integrations) for all supported tools.
## API
Use the [API](/api) to integrate Ollama into your applications:
```sh
curl http://localhost:11434/api/chat -d '{
"model": "gemma3",
"messages": [{ "role": "user", "content": "Hello!" }]
}'
```
See the [API documentation](/api) for Python, JavaScript, and other integrations.
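The same chat request from Go, using the api package this diff already calls elsewhere (a sketch; streaming is left at its default, so the callback fires once per chunk):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	req := &api.ChatRequest{
		Model:    "gemma3",
		Messages: []api.Message{{Role: "user", Content: "Hello!"}},
	}
	// Print each streamed chunk as it arrives.
	err = client.Chat(context.Background(), req, func(resp api.ChatResponse) error {
		fmt.Print(resp.Message.Content)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```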


@@ -216,6 +216,7 @@ func String(s string) func() string {
var (
LLMLibrary = String("OLLAMA_LLM_LIBRARY")
Editor = String("OLLAMA_EDITOR")
CudaVisibleDevices = String("CUDA_VISIBLE_DEVICES")
HipVisibleDevices = String("HIP_VISIBLE_DEVICES")
@@ -291,6 +292,7 @@ func AsMap() map[string]EnvVar {
"OLLAMA_SCHED_SPREAD": {"OLLAMA_SCHED_SPREAD", SchedSpread(), "Always schedule model across all GPUs"},
"OLLAMA_MULTIUSER_CACHE": {"OLLAMA_MULTIUSER_CACHE", MultiUserCache(), "Optimize prompt caching for multi-user scenarios"},
"OLLAMA_CONTEXT_LENGTH": {"OLLAMA_CONTEXT_LENGTH", ContextLength(), "Context length to use unless otherwise specified (default: 4k/32k/256k based on VRAM)"},
"OLLAMA_EDITOR": {"OLLAMA_EDITOR", Editor(), "Path to editor for interactive prompt editing (Ctrl+G)"},
"OLLAMA_NEW_ENGINE": {"OLLAMA_NEW_ENGINE", NewEngine(), "Enable the new Ollama engine"},
"OLLAMA_REMOTES": {"OLLAMA_REMOTES", Remotes(), "Allowed hosts for remote models (default \"ollama.com\")"},


@@ -5,6 +5,7 @@ import (
)
var ErrInterrupt = errors.New("Interrupt")
var ErrEditPrompt = errors.New("EditPrompt")
type InterruptError struct {
Line []rune


@@ -41,6 +41,7 @@ type Instance struct {
Terminal *Terminal
History *History
Pasting bool
Prefill string
pastedLines []string
}
@@ -89,6 +90,27 @@ func (i *Instance) Readline() (string, error) {
buf, _ := NewBuffer(i.Prompt)
// Prefill the buffer with any text that we received from an external editor
if i.Prefill != "" {
lines := strings.Split(i.Prefill, "\n")
i.Prefill = ""
for idx, l := range lines {
for _, r := range l {
buf.Add(r)
}
if idx < len(lines)-1 {
i.pastedLines = append(i.pastedLines, buf.String())
buf.Buf.Clear()
buf.Pos = 0
buf.DisplayPos = 0
buf.LineHasSpace.Clear()
fmt.Println()
fmt.Print(i.Prompt.AltPrompt)
i.Prompt.UseAlt = true
}
}
}
var esc bool
var escex bool
var metaDel bool
@@ -251,6 +273,29 @@ func (i *Instance) Readline() (string, error) {
buf.ClearScreen()
case CharCtrlW:
buf.DeleteWord()
case CharBell:
output := buf.String()
numPastedLines := len(i.pastedLines)
if numPastedLines > 0 {
output = strings.Join(i.pastedLines, "\n") + "\n" + output
i.pastedLines = nil
}
// Move cursor to the last display line of the current buffer
currLine := buf.DisplayPos / buf.LineWidth
lastLine := buf.DisplaySize() / buf.LineWidth
if lastLine > currLine {
fmt.Print(CursorDownN(lastLine - currLine))
}
// Clear all lines from bottom to top: buffer wrapped lines + pasted lines
for range lastLine + numPastedLines {
fmt.Print(CursorBOL + ClearToEOL + CursorUp)
}
fmt.Print(CursorBOL + ClearToEOL)
i.Prompt.UseAlt = false
return output, ErrEditPrompt
case CharCtrlZ:
fd := os.Stdin.Fd()
return handleCharCtrlZ(fd, i.Terminal.termios)
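On Ctrl+G (CharBell), Readline returns the collected buffer alongside ErrEditPrompt; the caller is expected to run an editor and feed the result back via Prefill. A sketch of that round trip, where editInEditor is a hypothetical helper rather than code from this diff:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// editInEditor round-trips text through the user's editor via a temp file.
func editInEditor(text string) (string, error) {
	f, err := os.CreateTemp("", "ollama-prompt-*.txt")
	if err != nil {
		return "", err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(text); err != nil {
		return "", err
	}
	f.Close()

	editor := os.Getenv("OLLAMA_EDITOR") // falls back to EDITOR below
	if editor == "" {
		editor = os.Getenv("EDITOR")
	}
	cmd := exec.Command(editor, f.Name())
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return "", err
	}
	out, err := os.ReadFile(f.Name())
	return string(out), err
}

func main() {
	edited, err := editInEditor("draft prompt\n")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(edited) // would be handed back as Instance.Prefill
}
```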


@@ -136,12 +136,64 @@ function Invoke-Download {
Write-Status " Downloading: $Url"
try {
Invoke-WebRequest -Uri $Url -OutFile $OutFile -UseBasicParsing
$size = (Get-Item $OutFile).Length
Write-Status " Downloaded: $([math]::Round($size / 1MB, 1)) MB"
$request = [System.Net.HttpWebRequest]::Create($Url)
$request.AllowAutoRedirect = $true
$response = $request.GetResponse()
$totalBytes = $response.ContentLength
$stream = $response.GetResponseStream()
$fileStream = [System.IO.FileStream]::new($OutFile, [System.IO.FileMode]::Create)
$buffer = [byte[]]::new(65536)
$totalRead = 0
$lastUpdate = [DateTime]::MinValue
$barWidth = 40
try {
while (($read = $stream.Read($buffer, 0, $buffer.Length)) -gt 0) {
$fileStream.Write($buffer, 0, $read)
$totalRead += $read
$now = [DateTime]::UtcNow
if (($now - $lastUpdate).TotalMilliseconds -ge 250) {
if ($totalBytes -gt 0) {
$pct = [math]::Min(100.0, ($totalRead / $totalBytes) * 100)
$filled = [math]::Floor($barWidth * $pct / 100)
$empty = $barWidth - $filled
$bar = ('#' * $filled) + (' ' * $empty)
$pctFmt = $pct.ToString("0.0")
Write-Host -NoNewline "`r$bar ${pctFmt}%"
} else {
$sizeMB = [math]::Round($totalRead / 1MB, 1)
Write-Host -NoNewline "`r${sizeMB} MB downloaded..."
}
$lastUpdate = $now
}
}
# Final progress update
if ($totalBytes -gt 0) {
$bar = '#' * $barWidth
Write-Host "`r$bar 100.0%"
} else {
$sizeMB = [math]::Round($totalRead / 1MB, 1)
Write-Host "`r${sizeMB} MB downloaded. "
}
} finally {
$fileStream.Close()
$stream.Close()
$response.Close()
}
} catch {
if ($_.Exception.Response.StatusCode -eq 404) {
throw "Download failed: not found at $Url"
if ($_.Exception -is [System.Net.WebException]) {
$webEx = [System.Net.WebException]$_.Exception
if ($webEx.Response -and ([System.Net.HttpWebResponse]$webEx.Response).StatusCode -eq [System.Net.HttpStatusCode]::NotFound) {
throw "Download failed: not found at $Url"
}
}
if ($_.Exception.InnerException -is [System.Net.WebException]) {
$webEx = [System.Net.WebException]$_.Exception.InnerException
if ($webEx.Response -and ([System.Net.HttpWebResponse]$webEx.Response).StatusCode -eq [System.Net.HttpStatusCode]::NotFound) {
throw "Download failed: not found at $Url"
}
}
throw "Download failed for ${Url}: $($_.Exception.Message)"
}
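The rewritten downloader streams the response body in 64 KB chunks and repaints at most every 250 ms, falling back to a plain byte counter when `Content-Length` is absent. A rough Go equivalent of the same technique (the function name and URL are illustrative only):

```go
// Sketch: streamed download with a throttled progress bar, mirroring the
// installer's 64 KB reads, 250 ms repaint interval, and 40-column bar.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

func download(url, path string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(path)
	if err != nil {
		return err
	}
	defer out.Close()

	const barWidth = 40
	total := resp.ContentLength // -1 when the server omits Content-Length
	buf := make([]byte, 64*1024)
	var read int64
	var last time.Time
	for {
		n, rerr := resp.Body.Read(buf)
		if n > 0 {
			if _, werr := out.Write(buf[:n]); werr != nil {
				return werr
			}
			read += int64(n)
			// Throttle repaints so the terminal is not flooded.
			if now := time.Now(); now.Sub(last) >= 250*time.Millisecond {
				if total > 0 {
					pct := float64(read) / float64(total) * 100
					filled := barWidth * int(pct) / 100
					fmt.Printf("\r[%s%s] %.1f%%", strings.Repeat("#", filled), strings.Repeat(" ", barWidth-filled), pct)
				} else {
					fmt.Printf("\r%.1f MB downloaded...", float64(read)/(1<<20))
				}
				last = now
			}
		}
		if rerr == io.EOF {
			break
		}
		if rerr != nil {
			return rerr
		}
	}
	if total > 0 {
		fmt.Printf("\r[%s] 100.0%%\n", strings.Repeat("#", barWidth))
	} else {
		fmt.Printf("\r%.1f MB downloaded.   \n", float64(read)/(1<<20))
	}
	return nil
}

func main() {
	// Placeholder URL; the real installer location is not shown here.
	if err := download("https://example.com/OllamaSetup.exe", "OllamaSetup.exe"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```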
@@ -156,7 +208,7 @@ function Invoke-Uninstall {
$regKey = Find-InnoSetupInstall
if (-not $regKey) {
Write-Host "Ollama is not installed."
Write-Host ">>> Ollama is not installed."
return
}
@@ -175,7 +227,7 @@ function Invoke-Uninstall {
return
}
Write-Host "Launching uninstaller..."
Write-Host ">>> Launching uninstaller..."
# Run with GUI so user can choose whether to keep models
Start-Process -FilePath $uninstallExe -Wait
@@ -183,7 +235,7 @@ function Invoke-Uninstall {
if (Find-InnoSetupInstall) {
Write-Warning "Uninstall may not have completed"
} else {
Write-Host "Ollama has been uninstalled."
Write-Host ">>> Ollama has been uninstalled."
}
}
@@ -202,7 +254,7 @@ function Invoke-Install {
# Download installer
Write-Step "Downloading Ollama"
if (-not $DebugInstall) {
Write-Host "Downloading Ollama..."
Write-Host ">>> Downloading Ollama for Windows..."
}
$tempInstaller = Join-Path $env:TEMP "OllamaSetup.exe"
@@ -225,7 +277,7 @@ function Invoke-Install {
# Run installer
Write-Step "Installing Ollama"
if (-not $DebugInstall) {
Write-Host "Installing..."
Write-Host ">>> Installing Ollama..."
}
# Create upgrade marker so the app starts hidden
@@ -257,7 +309,7 @@ function Invoke-Install {
Write-Step "Updating session PATH"
Update-SessionPath
Write-Host "Install complete. You can now run 'ollama'."
Write-Host ">>> Install complete. Run 'ollama' from the command line."
}
# --------------------------------------------------------------------------

10
scripts/install.sh Normal file → Executable file
View File

@@ -72,10 +72,12 @@ if [ "$OS" = "Darwin" ]; then
unzip -q "$TEMP_DIR/Ollama-darwin.zip" -d "$TEMP_DIR"
mv "$TEMP_DIR/Ollama.app" "/Applications/"
status "Adding 'ollama' command to PATH (may require password)..."
mkdir -p "/usr/local/bin" 2>/dev/null || sudo mkdir -p "/usr/local/bin"
ln -sf "/Applications/Ollama.app/Contents/Resources/ollama" "/usr/local/bin/ollama" 2>/dev/null || \
sudo ln -sf "/Applications/Ollama.app/Contents/Resources/ollama" "/usr/local/bin/ollama"
if [ ! -L "/usr/local/bin/ollama" ] || [ "$(readlink "/usr/local/bin/ollama")" != "/Applications/Ollama.app/Contents/Resources/ollama" ]; then
status "Adding 'ollama' command to PATH (may require password)..."
mkdir -p "/usr/local/bin" 2>/dev/null || sudo mkdir -p "/usr/local/bin"
ln -sf "/Applications/Ollama.app/Contents/Resources/ollama" "/usr/local/bin/ollama" 2>/dev/null || \
sudo ln -sf "/Applications/Ollama.app/Contents/Resources/ollama" "/usr/local/bin/ollama"
fi
if [ -z "${OLLAMA_NO_START:-}" ]; then
status "Starting Ollama..."

View File

@@ -21,6 +21,7 @@ import (
"sync"
"time"
"github.com/ollama/ollama/envconfig"
"github.com/ollama/ollama/llm"
"github.com/ollama/ollama/ml"
"github.com/ollama/ollama/x/imagegen/manifest"
@@ -195,7 +196,7 @@ func (s *Server) Ping(ctx context.Context) error {
// waitUntilRunning waits for the subprocess to be ready.
func (s *Server) waitUntilRunning() error {
ctx := context.Background()
timeout := time.After(2 * time.Minute)
timeout := time.After(envconfig.LoadTimeout())
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
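The fixed two-minute wait becomes `envconfig.LoadTimeout()`, letting slow-loading models get more headroom from the environment. A minimal sketch of what such a getter could look like; the `OLLAMA_LOAD_TIMEOUT` name, the duration parsing, and the 5-minute default are assumptions, since the diff only shows the call site:

```go
// Sketch of a duration-valued env accessor, assuming an OLLAMA_LOAD_TIMEOUT
// variable parsed as a Go duration string with a fallback default.
package envconfig

import (
	"os"
	"time"
)

func LoadTimeout() time.Duration {
	if v := os.Getenv("OLLAMA_LOAD_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
			return d
		}
	}
	return 5 * time.Minute // assumed default; the old hard-coded wait was 2m
}
```

Usage is exactly what the hunk shows: `timeout := time.After(envconfig.LoadTimeout())`.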

View File

@@ -8,6 +8,7 @@ package mlx
import "C"
import (
"fmt"
"io/fs"
"log/slog"
"os"
@@ -16,6 +17,13 @@ import (
"unsafe"
)
var initError error
// CheckInit returns any error that occurred during MLX dynamic library initialization.
func CheckInit() error {
return initError
}
func init() {
switch runtime.GOOS {
case "darwin":
@@ -34,7 +42,9 @@ func init() {
for _, path := range filepath.SplitList(paths) {
matches, err := fs.Glob(os.DirFS(path), "libmlxc.*")
if err != nil {
panic(err)
initError = fmt.Errorf("failed to glob for MLX libraries in %s: %w", path, err)
slog.Warn("MLX dynamic library not available", "error", initError)
return
}
for _, match := range matches {
@@ -61,5 +71,6 @@ func init() {
}
}
panic("Failed to load any MLX dynamic library")
initError = fmt.Errorf("failed to load any MLX dynamic library from OLLAMA_LIBRARY_PATH=%s", paths)
slog.Warn("MLX dynamic library not available", "error", initError)
}
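Rather than panicking inside `init()`, which previously crashed any process that merely linked the package (including GGML-only model loads), the loader now records the failure and exposes it through `CheckInit`. The pattern in miniature, with the dlopen search reduced to a stand-in:

```go
// Sketch: defer init-time failures to call time instead of panicking.
package mlx

import "fmt"

var initError error

// CheckInit reports any error recorded while loading the dynamic library.
func CheckInit() error { return initError }

func init() {
	if err := loadLibrary(); err != nil { // stand-in for the real glob-and-dlopen search
		initError = fmt.Errorf("failed to load MLX dynamic library: %w", err)
		return // no panic: non-MLX code paths never consult CheckInit
	}
}

func loadLibrary() error {
	// Placeholder for searching OLLAMA_LIBRARY_PATH for libmlxc.*
	return nil
}
```

The runner then fails fast with a readable error, as the `mlx.CheckInit()` guard in the final hunk below shows.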

View File

@@ -7,6 +7,7 @@ import (
"cmp"
"encoding/json"
"flag"
"fmt"
"io"
"log/slog"
"net/http"
@@ -16,12 +17,17 @@ import (
"github.com/ollama/ollama/envconfig"
"github.com/ollama/ollama/logutil"
"github.com/ollama/ollama/x/mlxrunner/mlx"
"github.com/ollama/ollama/x/mlxrunner/sample"
)
func Execute(args []string) error {
slog.SetDefault(logutil.NewLogger(os.Stderr, envconfig.LogLevel()))
if err := mlx.CheckInit(); err != nil {
return fmt.Errorf("MLX not available: %w", err)
}
var (
modelName string
port int