From 26384c5c70f41ea433d6961f20790ff22d3d5e32 Mon Sep 17 00:00:00 2001
From: Richard Palethorpe
Date: Wed, 25 Mar 2026 12:55:48 +0000
Subject: [PATCH] fix(docs): Use notice instead of alert (#9134)

Signed-off-by: Richard Palethorpe
---
 .agents/coding-style.md               | 1 +
 docs/content/features/fine-tuning.md  | 4 ++--
 docs/content/features/quantization.md | 4 ++--
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/.agents/coding-style.md b/.agents/coding-style.md
index 7cc23b569..bedb4f439 100644
--- a/.agents/coding-style.md
+++ b/.agents/coding-style.md
@@ -49,3 +49,4 @@ The project documentation is located in `docs/content`. When adding new features
 - **Feature Documentation**: If you add a new feature (like a new backend or API endpoint), create a new markdown file in `docs/content/features/` explaining what it is, how to configure it, and how to use it.
 - **Configuration**: If you modify configuration options, update the relevant sections in `docs/content/`.
 - **Examples**: providing concrete examples (like YAML configuration blocks) is highly encouraged to help users get started quickly.
+- **Shortcodes**: Use `{{% notice note %}}`, `{{% notice tip %}}`, or `{{% notice warning %}}` for callout boxes. Do **not** use `{{% alert %}}` — that shortcode does not exist in this project's Hugo theme and will break the docs build.
diff --git a/docs/content/features/fine-tuning.md b/docs/content/features/fine-tuning.md
index 8dcbbd72a..adb04fe96 100644
--- a/docs/content/features/fine-tuning.md
+++ b/docs/content/features/fine-tuning.md
@@ -17,9 +17,9 @@ LocalAI supports fine-tuning LLMs directly through the API and Web UI. Fine-tuni
 
 Fine-tuning is always enabled. When authentication is enabled, fine-tuning is a per-user feature (default OFF). Admins can enable it for specific users via the user management API.
 
-{{% alert note %}}
+{{% notice note %}}
 This feature is **experimental** and may change in future releases.
-{{% /alert %}}
+{{% /notice %}}
 
 ## Quick Start
 
diff --git a/docs/content/features/quantization.md b/docs/content/features/quantization.md
index 8455145df..c78e4c6d8 100644
--- a/docs/content/features/quantization.md
+++ b/docs/content/features/quantization.md
@@ -7,9 +7,9 @@ url = '/features/quantization/'
 
 LocalAI supports model quantization directly through the API and Web UI. Quantization converts HuggingFace models to GGUF format and compresses them to smaller sizes for efficient inference with llama.cpp.
 
-{{% alert note %}}
+{{% notice note %}}
 This feature is **experimental** and may change in future releases.
-{{% /alert %}}
+{{% /notice %}}
 
 ## Supported Backends
 