diff --git a/README.md b/README.md
index 99b64fafe..904e00efa 100644
--- a/README.md
+++ b/README.md
@@ -189,6 +189,8 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
 local-ai run oci://localai/phi-2:latest
 ```
 
+> ⚡ **Automatic Backend Detection**: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see [GPU Acceleration](https://localai.io/features/gpu-acceleration/#automatic-backend-detection).
+
 For more information, see [💻 Getting started](https://localai.io/basics/getting_started/index.html)
 
 ## 📰 Latest project news
diff --git a/docs/content/docs/features/GPU-acceleration.md b/docs/content/docs/features/GPU-acceleration.md
index 12eba2946..7a953d25c 100644
--- a/docs/content/docs/features/GPU-acceleration.md
+++ b/docs/content/docs/features/GPU-acceleration.md
@@ -15,6 +15,16 @@ This section contains instruction on how to use LocalAI with GPU acceleration.
 For acceleration for AMD or Metal HW is still in development, for additional details see the [build]({{%relref "docs/getting-started/build#Acceleration" %}})
 {{% /alert %}}
 
+## Automatic Backend Detection
+
+When you install a model from the gallery (or from a YAML file), LocalAI detects the required backend and your system's capabilities, then downloads the correct backend version for you. Whether you're running on a standard CPU, an NVIDIA GPU, an AMD GPU, or an Intel GPU, LocalAI handles it automatically.
+
+For advanced use cases, or to override auto-detection, set the `LOCALAI_FORCE_META_BACKEND_CAPABILITY` environment variable to one of the following values:
+
+- `default`: Forces CPU-only backends. This is the fallback if no specific hardware is detected.
+- `nvidia`: Forces backends compiled with CUDA support for NVIDIA GPUs.
+- `amd`: Forces backends compiled with ROCm support for AMD GPUs.
+- `intel`: Forces backends compiled with SYCL/oneAPI support for Intel GPUs.
 
 ## Model configuration
diff --git a/docs/content/docs/getting-started/quickstart.md b/docs/content/docs/getting-started/quickstart.md
index faa70914a..6d51583a3 100644
--- a/docs/content/docs/getting-started/quickstart.md
+++ b/docs/content/docs/getting-started/quickstart.md
@@ -106,6 +106,9 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
 local-ai run oci://localai/phi-2:latest
 ```
 
+{{% alert icon="⚡" %}}
+**Automatic Backend Detection**: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see [GPU Acceleration]({{% relref "docs/features/gpu-acceleration#automatic-backend-detection" %}}).
+{{% /alert %}}
+
 For a full list of options, refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation.
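Reviewer note: the override-vs-detection precedence described in the new "Automatic Backend Detection" section can be summarized as a small selection routine. The sketch below is purely illustrative and is not LocalAI's actual implementation — the function name, its arguments, and the `detected_gpu` input are hypothetical; only the environment variable name and the four capability values come from the docs being added.

```python
import os

# Capability values documented for LOCALAI_FORCE_META_BACKEND_CAPABILITY.
CAPABILITIES = {"default", "nvidia", "amd", "intel"}


def pick_capability(detected_gpu=None, env=None):
    """Sketch of the selection order the docs describe.

    A recognized LOCALAI_FORCE_META_BACKEND_CAPABILITY value overrides
    detection; otherwise a detected GPU vendor (e.g. "nvidia") is used,
    and "default" (CPU-only) is the final fallback.
    """
    if env is None:
        env = os.environ
    forced = env.get("LOCALAI_FORCE_META_BACKEND_CAPABILITY")
    if forced in CAPABILITIES:
        return forced
    if detected_gpu in CAPABILITIES:
        return detected_gpu
    return "default"
```

In other words, the environment variable always wins when set to a known value, which is why the docs position it as the escape hatch for advanced setups where auto-detection picks the wrong backend.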