LocalAI [bot] (9297074caa): docs: expand GPU acceleration guide with L4T, multi-GPU, monitoring, and troubleshooting (#8858)
- Expand multi-GPU section to cover llama.cpp (CUDA_VISIBLE_DEVICES,
  HIP_VISIBLE_DEVICES) in addition to diffusers
- Add NVIDIA L4T/Jetson section with quick start commands and cross-reference
  to the dedicated ARM64 page
- Add GPU monitoring section with vendor-specific tools (nvidia-smi, rocm-smi,
  intel_gpu_top)
- Add troubleshooting section covering common issues: GPU not detected, CPU
  fallback, OOM errors, unsupported ROCm targets, SYCL mmap hang
- Replace "under construction" warning with useful cross-references to related
  docs (container images, VRAM management)
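As a hedged sketch of the device-selection variables the expanded guide covers (the device indices here are illustrative, and the monitoring tools must be installed on the host), GPU selection for llama.cpp backends is done with ordinary environment variables:

```shell
# Restrict a CUDA build of llama.cpp to the first two NVIDIA GPUs.
# CUDA_VISIBLE_DEVICES is the standard CUDA runtime selector.
export CUDA_VISIBLE_DEVICES=0,1

# Equivalent selector for ROCm/HIP builds on AMD GPUs.
export HIP_VISIBLE_DEVICES=0

# Vendor-specific monitoring tools mentioned in the guide (run on the host):
#   nvidia-smi       # NVIDIA
#   rocm-smi         # AMD
#   intel_gpu_top    # Intel
echo "CUDA devices: ${CUDA_VISIBLE_DEVICES}"
```

The same variables can be passed to a container with `docker run -e CUDA_VISIBLE_DEVICES=0,1 ...` instead of exporting them on the host.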

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 21:59:57 +01:00