From 7e1f2657d512e65fe1a6d9cbd12bb72ad2f79ba0 Mon Sep 17 00:00:00 2001
From: Ettore Di Giacinto
Date: Sun, 6 Jul 2025 19:03:34 +0200
Subject: [PATCH] Update GPU-acceleration.md

Signed-off-by: Ettore Di Giacinto
---
 docs/content/docs/features/GPU-acceleration.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/content/docs/features/GPU-acceleration.md b/docs/content/docs/features/GPU-acceleration.md
index 550c013c1..51bce71fb 100644
--- a/docs/content/docs/features/GPU-acceleration.md
+++ b/docs/content/docs/features/GPU-acceleration.md
@@ -289,14 +289,14 @@ If using nvidia, follow the steps in the [CUDA](#cudanvidia-acceleration) sectio
 
 ### Container images
 
-To use Vulkan, use the images with the `vulkan` tag, for example `{{< version >}}-vulkan-ffmpeg-core`.
+To use Vulkan, use the images with the `vulkan` tag, for example `{{< version >}}-gpu-vulkan`.
 
 #### Example
 
 To run LocalAI with Docker and Vulkan, you can use the following command as an example:
 
 ```bash
-docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/models localai/localai:latest-vulkan-ffmpeg-core
+docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/models localai/localai:latest-gpu-vulkan
 ```
 
 ### Notes
@@ -311,5 +311,5 @@ If you have mixed hardware, you can pass flags for multiple GPUs, for example:
 docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/models \
 --gpus=all \ # nvidia passthrough
 --device /dev/dri --device /dev/kfd \ # AMD/Intel passthrough
-localai/localai:latest-vulkan-ffmpeg-core
+localai/localai:latest-gpu-vulkan
 ```