---
title: Quickstart
---

This quickstart will walk you through running your first model with Ollama.

To get started, download Ollama for macOS, Windows, or Linux:

[Download Ollama](https://ollama.com/download)

## Run a model

Open a terminal and run the command:

```sh
ollama run gemma3
```

### cURL

Start by downloading a model:

```sh
ollama pull gemma3
```

Lastly, chat with the model:

```sh
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Hello there!"
  }],
  "stream": false
}'
```

### Python

Start by downloading a model:

```sh
ollama pull gemma3
```

Then install Ollama's Python library:

```sh
pip install ollama
```

Lastly, chat with the model:

```python
from ollama import chat
from ollama import ChatResponse

response: ChatResponse = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)
```

### JavaScript

Start by downloading a model:

```sh
ollama pull gemma3
```

Then install the Ollama JavaScript library:

```sh
npm i ollama
```

Lastly, chat with the model:

```javascript
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'gemma3',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```

See a full list of available models [here](https://ollama.com/models).

## Coding

For coding use cases, we recommend using the `glm-4.7-flash` model. Note: this model requires 23 GB of VRAM at a 64,000-token context length.

```sh
ollama pull glm-4.7-flash
```

Alternatively, you can use a more powerful cloud model (with the full context length):

```sh
ollama pull glm-4.7:cloud
```

Use `ollama launch` to quickly set up a coding tool with Ollama models:

```sh
ollama launch
```

### Supported integrations

- [OpenCode](/integrations/opencode) - Open-source coding assistant
- [Claude Code](/integrations/claude-code) - Anthropic's agentic coding tool
- [Codex](/integrations/codex) - OpenAI's coding assistant
- [Droid](/integrations/droid) - Factory's AI coding agent

### Launch with a specific model

```sh
ollama launch claude --model glm-4.7-flash
```

### Configure without launching

```sh
ollama launch claude --config
```
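
### Chat with a coding model from Python

You can also call a coding model directly with the Python library installed earlier. Below is a minimal sketch that streams the reply token by token using the library's `stream=True` option; it assumes you have already pulled `glm-4.7-flash` as shown above, and the prompt is just an illustration.

```python
from ollama import chat

# With stream=True, chat() returns an iterator of partial responses
# instead of a single completed response object.
stream = chat(
    model='glm-4.7-flash',  # assumes the model was pulled as shown above
    messages=[{'role': 'user', 'content': 'Write a binary search in Python.'}],
    stream=True,
)

# Print each chunk of the reply as it arrives.
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
print()
```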