mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-01 13:42:20 -04:00)
- Expand the multi-GPU section to cover llama.cpp (CUDA_VISIBLE_DEVICES, HIP_VISIBLE_DEVICES) in addition to diffusers
- Add an NVIDIA L4T/Jetson section with quick-start commands and a cross-reference to the dedicated ARM64 page
- Add a GPU monitoring section with vendor-specific tools (nvidia-smi, rocm-smi, intel_gpu_top)
- Add a troubleshooting section covering common issues: GPU not detected, CPU fallback, OOM errors, unsupported ROCm targets, SYCL mmap hang
- Replace the "under construction" warning with useful cross-references to related docs (container images, VRAM management)

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
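The GPU-selection variables covered by the expanded multi-GPU section work the same way for both vendors; a minimal sketch (the device indices here are illustrative, not from the docs):

```shell
# Restrict the llama.cpp backend to specific GPUs by index. Both variables
# take a comma-separated list of device ordinals, starting at 0; devices
# not listed are invisible to the process.
export CUDA_VISIBLE_DEVICES=0,1   # NVIDIA: expose only GPUs 0 and 1
export HIP_VISIBLE_DEVICES=0      # AMD/ROCm: expose only GPU 0
echo "NVIDIA devices: $CUDA_VISIBLE_DEVICES"
echo "AMD devices: $HIP_VISIBLE_DEVICES"
```

Set these in the environment of the LocalAI process (or via `-e` on `docker run`) before the backend initializes.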