docs: add Podman installation documentation (#8646)
* docs: add Podman installation documentation

  - Add new podman.md with comprehensive installation and usage guide
  - Cover installation on multiple platforms (Ubuntu, Fedora, Arch, macOS, Windows)
  - Document GPU support (NVIDIA CUDA, AMD ROCm, Intel, Vulkan)
  - Include rootless container configuration
  - Document Docker Compose with podman-compose
  - Add troubleshooting section for common issues
  - Link to Podman documentation in installation index
  - Update image references to use Docker Hub and link to docker docs
  - Change YAML heredoc to EOF in compose.yaml example
  - Add curly brackets to notice shortcode and fix link

  Closes #8645

  Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>

* docs: merge Docker and Podman docs into unified Containers guide

  Following the review comment, the Docker and Podman documentation has been merged into a single 'Containers' page that covers both container engines. The Docker and Podman pages now redirect to this unified guide.

  Changes:
  - Added new docs/content/installation/containers.md with combined Docker/Podman guide
  - Updated docs/content/installation/docker.md to redirect to containers
  - Updated docs/content/installation/podman.md to redirect to containers
  - Updated docs/content/installation/_index.en.md to link to containers

  Signed-off-by: LocalAI [bot] <localai-bot@users.noreply.github.com>
  Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>

* docs: remove podman.md as docs are merged into containers.md

  Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>

---------

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Signed-off-by: LocalAI [bot] <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
@@ -8,17 +8,11 @@ icon: download

LocalAI can be installed in multiple ways depending on your platform and preferences.

{{% notice tip %}}
**Recommended: Docker Installation**

**Docker is the recommended installation method** for most users as it works across all platforms (Linux, macOS, Windows) and provides the easiest setup experience. It's the fastest way to get started with LocalAI.
{{% /notice %}}

## Installation Methods

Choose the installation method that best suits your needs:

1. **[Docker](docker/)** ⭐ **Recommended** - Works on all platforms, easiest setup
1. **[Containers](containers/)** ⭐ **Recommended** - Works on all platforms, supports Docker and Podman
2. **[macOS](macos/)** - Download and install the DMG application
3. **[Linux](linux/)** - Install on Linux using binaries
4. **[Kubernetes](kubernetes/)** - Deploy LocalAI on Kubernetes clusters
@@ -26,10 +20,14 @@ Choose the installation method that best suits your needs:

## Quick Start

**Recommended: Docker (works on all platforms)**
**Recommended: Containers (Docker or Podman)**

```bash
# With Docker
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest

# Or with Podman
podman run -p 8080:8080 --name local-ai -ti localai/localai:latest
```

This will start LocalAI. The API will be available at `http://localhost:8080`. For images with pre-configured models, see [All-in-One images](/getting-started/container-images/#all-in-one-images).
@@ -38,4 +36,4 @@ For other platforms:

- **macOS**: Download the [DMG](macos/)
- **Linux**: See the [Linux installation guide](linux/) for binary installation.

For detailed instructions, see the [Docker installation guide](docker/).
For detailed instructions, see the [Containers installation guide](containers/).
docs/content/installation/containers.md (new file, 258 lines)
@@ -0,0 +1,258 @@
---
title: Containers
description: Install and use LocalAI with container engines (Docker, Podman)
weight: 1
url: '/installation/containers/'
---

LocalAI supports Docker, Podman, and other OCI-compatible container engines. This guide covers the common aspects of running LocalAI in containers.

## Prerequisites

Before you begin, ensure you have a container engine installed:

- [Install Docker](https://docs.docker.com/get-docker/) (Mac, Windows, Linux)
- [Install Podman](https://podman.io/getting-started/installation) (Linux, macOS, Windows WSL2)
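Before pulling any images, it can help to confirm the engine is actually up. A minimal sanity check, using whichever of the two engines you installed:

```bash
# Confirm the engine is installed and its daemon/socket is reachable
docker --version && docker info >/dev/null && echo "Docker is ready"
# Or with Podman:
podman --version && podman info >/dev/null && echo "Podman is ready"
```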
## Quick Start

The fastest way to get started is with the CPU image:

```bash
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest
# Or with Podman:
podman run -p 8080:8080 --name local-ai -ti localai/localai:latest
```

This will:
- Start LocalAI (you'll need to install models separately)
- Make the API available at `http://localhost:8080`
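Once the container is running, the API can be checked from another terminal. A minimal sketch using the same `/readyz` endpoint that the compose healthcheck below relies on:

```bash
# Readiness probe, then list the models the instance currently knows about
curl http://localhost:8080/readyz
curl http://localhost:8080/v1/models
```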
## Image Types

LocalAI provides several image types to suit different needs. These images work with both Docker and Podman.

### Standard Images

Standard images don't include pre-configured models. Use these if you want to configure models manually.

#### CPU Image

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 localai/localai:latest
```

#### GPU Images

**NVIDIA CUDA 13:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-13
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device nvidia.com/gpu=all localai/localai:latest-gpu-nvidia-cuda-13
```

**NVIDIA CUDA 12:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device nvidia.com/gpu=all localai/localai:latest-gpu-nvidia-cuda-12
```

**AMD GPU (ROCm):**
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device rocm.com/gpu=all localai/localai:latest-gpu-hipblas
```

**Intel GPU:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device gpu.intel.com/all localai/localai:latest-gpu-intel
```

**Vulkan:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
```

**NVIDIA Jetson (L4T ARM64):**

CUDA 12 (for Nvidia AGX Orin and similar platforms):
```bash
docker run -ti --name local-ai -p 8080:8080 --runtime nvidia --gpus all localai/localai:latest-nvidia-l4t-arm64
```

CUDA 13 (for Nvidia DGX Spark):
```bash
docker run -ti --name local-ai -p 8080:8080 --runtime nvidia --gpus all localai/localai:latest-nvidia-l4t-arm64-cuda-13
```
### All-in-One (AIO) Images

**Recommended for beginners** - These images come pre-configured with models and backends, ready to use immediately.

#### CPU Image

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
```

#### GPU Images

**NVIDIA CUDA 13:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-13
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device nvidia.com/gpu=all localai/localai:latest-aio-gpu-nvidia-cuda-13
```

**NVIDIA CUDA 12:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device nvidia.com/gpu=all localai/localai:latest-aio-gpu-nvidia-cuda-12
```

**AMD GPU (ROCm):**
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device rocm.com/gpu=all localai/localai:latest-aio-gpu-hipblas
```

**Intel GPU:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 --device gpu.intel.com/all localai/localai:latest-aio-gpu-intel
```
## Using Compose

For a more manageable setup, especially with persistent volumes, use Docker Compose or Podman Compose:

```yaml
version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For GPU support, use one of:
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-13
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-hipblas
    # image: localai/localai:latest-aio-gpu-intel
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=false
    volumes:
      - ./models:/models:cached
    # For NVIDIA GPUs, uncomment:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
```

Save this as `compose.yaml` and run:

```bash
docker compose up -d
# Or with Podman:
podman-compose up -d
```
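To check that the service came up and to follow its output, the compose CLIs can be used directly; a sketch, assuming the service name `api` from the file above (podman-compose subcommands may vary slightly between versions):

```bash
# Show container status and follow the logs of the api service
docker compose ps
docker compose logs -f api
# Or with Podman:
podman-compose ps
podman-compose logs -f api
```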
## Persistent Storage

To persist models and configurations, mount a volume:

```bash
docker run -ti --name local-ai -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:latest-aio-cpu
# Or with Podman:
podman run -ti --name local-ai -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:latest-aio-cpu
```

Or use a named volume:

```bash
docker volume create localai-models
docker run -ti --name local-ai -p 8080:8080 \
  -v localai-models:/models \
  localai/localai:latest-aio-cpu
# Or with Podman:
podman volume create localai-models
podman run -ti --name local-ai -p 8080:8080 \
  -v localai-models:/models \
  localai/localai:latest-aio-cpu
```
## What's Included in AIO Images

All-in-One images come pre-configured with:

- **Text Generation**: LLM models for chat and completion
- **Image Generation**: Stable Diffusion models
- **Text to Speech**: TTS models
- **Speech to Text**: Whisper models
- **Embeddings**: Vector embedding models
- **Function Calling**: Support for OpenAI-compatible function calling

The AIO images use OpenAI-compatible model names (like `gpt-4`, `gpt-4-vision-preview`) but are backed by open-source models. See the [container images documentation](/getting-started/container-images/#all-in-one-images) for the complete mapping.
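Since the API is OpenAI-compatible, a standard chat completion request works against the local endpoint once an AIO image is running. A minimal sketch; here `gpt-4` is the AIO alias described above, not the upstream OpenAI model:

```bash
# Send a chat completion request to the local OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Say hello from LocalAI"}]
  }'
```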
## Next Steps

After installation:

1. Access the WebUI at `http://localhost:8080`
2. Check available models: `curl http://localhost:8080/v1/models`
3. [Install additional models](/getting-started/models/)
4. [Try out examples](/getting-started/try-it-out/)
## Troubleshooting

### Container won't start

- Check container engine is running: `docker ps` or `podman ps`
- Check port 8080 is available: `netstat -an | grep 8080` (Linux/Mac)
- View logs: `docker logs local-ai` or `podman logs local-ai`
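A common cause is a leftover container from a previous `docker run`/`podman run` with the same `--name`. A minimal sketch for recovering in that case: restart the existing container, or remove it and run again.

```bash
# Reuse the container created earlier...
docker start -i local-ai      # or: podman start -i local-ai
# ...or remove it and create a fresh one before re-running
docker rm -f local-ai         # or: podman rm -f local-ai
```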
### GPU not detected

- Ensure Docker has GPU access: `docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi`
- For Podman with NVIDIA GPUs, make sure a CDI specification has been generated with the NVIDIA Container Toolkit (see the sketch below)
- For NVIDIA: Install [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
- For AMD: Ensure devices are accessible: `ls -la /dev/kfd /dev/dri`
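For Podman, the `--device nvidia.com/gpu=all` flags used above rely on CDI, which needs a device specification generated by the NVIDIA Container Toolkit. A sketch, assuming a standard toolkit install (paths and SELinux options may differ by distribution):

```bash
# Generate the CDI spec, list the detected devices, then verify GPU access
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi
```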
### Models not downloading

- Check internet connection
- Verify disk space: `df -h`
- Check container logs for errors: `docker logs local-ai` or `podman logs local-ai`
## See Also

- [Container Images Reference](/getting-started/container-images/) - Complete image reference
- [Install Models](/getting-started/models/) - Install and configure models
- [GPU Acceleration](/features/gpu-acceleration/) - GPU setup and optimization
- [Kubernetes Installation](/installation/kubernetes/) - Deploy on Kubernetes
@@ -1,249 +1,9 @@
---
title: "Docker Installation"
description: "Install LocalAI using Docker containers - the recommended installation method"
weight: 1
weight: 2
url: '/installation/docker/'
redirectURI: '/installation/containers/'
---
{{% notice tip %}}
**Recommended Installation Method**

Docker is the recommended way to install LocalAI and provides the easiest setup experience.
{{% /notice %}}

LocalAI provides Docker images that work with Docker, Podman, and other container engines. These images are available on [Docker Hub](https://hub.docker.com/r/localai/localai) and [Quay.io](https://quay.io/repository/go-skynet/local-ai).

## Prerequisites

Before you begin, ensure you have Docker or Podman installed:

- [Install Docker Desktop](https://docs.docker.com/get-docker/) (Mac, Windows, Linux)
- [Install Podman](https://podman.io/getting-started/installation) (Linux alternative)
- [Install Docker Engine](https://docs.docker.com/engine/install/) (Linux servers)
## Quick Start

The fastest way to get started is with the CPU image:

```bash
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest
```

This will:
- Start LocalAI (you'll need to install models separately)
- Make the API available at `http://localhost:8080`
{{% notice tip %}}
**Docker Run vs Docker Start**

- `docker run` creates and starts a new container. If a container with the same name already exists, this command will fail.
- `docker start` starts an existing container that was previously created with `docker run`.

If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
{{% /notice %}}
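Put together, a typical first run followed by later restarts looks like this (a short sketch using the container name from the notice above):

```bash
# First run: create and start the container
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest
# Later: restart the existing container instead of creating a new one
docker start -i local-ai
```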
## Image Types

LocalAI provides several image types to suit different needs:

### Standard Images

Standard images don't include pre-configured models. Use these if you want to configure models manually.

#### CPU Image

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```

#### GPU Images

**NVIDIA CUDA 13:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-13
```

**NVIDIA CUDA 12:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
```

**AMD GPU (ROCm):**
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
```

**Intel GPU:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel
```

**Vulkan:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
```

**NVIDIA Jetson (L4T ARM64):**

CUDA 12 (for Nvidia AGX Orin and similar platforms):
```bash
docker run -ti --name local-ai -p 8080:8080 --runtime nvidia --gpus all localai/localai:latest-nvidia-l4t-arm64
```

CUDA 13 (for Nvidia DGX Spark):
```bash
docker run -ti --name local-ai -p 8080:8080 --runtime nvidia --gpus all localai/localai:latest-nvidia-l4t-arm64-cuda-13
```
### All-in-One (AIO) Images

**Recommended for beginners** - These images come pre-configured with models and backends, ready to use immediately.

#### CPU Image

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
```

#### GPU Images

**NVIDIA CUDA 13:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-13
```

**NVIDIA CUDA 12:**
```bash
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
```

**AMD GPU (ROCm):**
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
```

**Intel GPU:**
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
```
## Using Docker Compose

For a more manageable setup, especially with persistent volumes, use Docker Compose:

```yaml
version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For GPU support, use one of:
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-13
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-hipblas
    # image: localai/localai:latest-aio-gpu-intel
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=false
    volumes:
      - ./models:/models:cached
    # For NVIDIA GPUs, uncomment:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
```

Save this as `docker-compose.yml` and run:

```bash
docker compose up -d
```
## Persistent Storage

To persist models and configurations, mount a volume:

```bash
docker run -ti --name local-ai -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:latest-aio-cpu
```

Or use a named volume:

```bash
docker volume create localai-models
docker run -ti --name local-ai -p 8080:8080 \
  -v localai-models:/models \
  localai/localai:latest-aio-cpu
```
## What's Included in AIO Images

All-in-One images come pre-configured with:

- **Text Generation**: LLM models for chat and completion
- **Image Generation**: Stable Diffusion models
- **Text to Speech**: TTS models
- **Speech to Text**: Whisper models
- **Embeddings**: Vector embedding models
- **Function Calling**: Support for OpenAI-compatible function calling

The AIO images use OpenAI-compatible model names (like `gpt-4`, `gpt-4-vision-preview`) but are backed by open-source models. See the [container images documentation](/getting-started/container-images/#all-in-one-images) for the complete mapping.
## Next Steps

After installation:

1. Access the WebUI at `http://localhost:8080`
2. Check available models: `curl http://localhost:8080/v1/models`
3. [Install additional models](/getting-started/models/)
4. [Try out examples](/getting-started/try-it-out/)
## Advanced Configuration

For detailed information about:
- All available image tags and versions
- Advanced Docker configuration options
- Custom image builds
- Backend management

See the [Container Images documentation](/getting-started/container-images/).
## Troubleshooting

### Container won't start

- Check Docker is running: `docker ps`
- Check port 8080 is available: `netstat -an | grep 8080` (Linux/Mac)
- View logs: `docker logs local-ai`

### GPU not detected

- Ensure Docker has GPU access: `docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi`
- For NVIDIA: Install [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
- For AMD: Ensure devices are accessible: `ls -la /dev/kfd /dev/dri`

### Models not downloading

- Check internet connection
- Verify disk space: `df -h`
- Check Docker logs for errors: `docker logs local-ai`

## See Also

- [Container Images Reference](/getting-started/container-images/) - Complete image reference
- [Install Models](/getting-started/models/) - Install and configure models
- [GPU Acceleration](/features/gpu-acceleration/) - GPU setup and optimization
- [Kubernetes Installation](/installation/kubernetes/) - Deploy on Kubernetes
See [Containers](/installation/containers/) for the complete guide to running LocalAI with Docker and Podman.