fix(docs): Use notice instead of alert (#9134)

Signed-off-by: Richard Palethorpe <io@richiejp.com>
Author: Richard Palethorpe
Date: 2026-03-25 12:55:48 +00:00
Committed by: GitHub
Parent: 7209457f53
Commit: 26384c5c70
3 changed files with 5 additions and 4 deletions


@@ -49,3 +49,4 @@ The project documentation is located in `docs/content`. When adding new features
 - **Feature Documentation**: If you add a new feature (like a new backend or API endpoint), create a new markdown file in `docs/content/features/` explaining what it is, how to configure it, and how to use it.
 - **Configuration**: If you modify configuration options, update the relevant sections in `docs/content/`.
 - **Examples**: Providing concrete examples (like YAML configuration blocks) is highly encouraged to help users get started quickly.
+- **Shortcodes**: Use `{{% notice note %}}`, `{{% notice tip %}}`, or `{{% notice warning %}}` for callout boxes. Do **not** use `{{% alert %}}` — that shortcode does not exist in this project's Hugo theme and will break the docs build.
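For reference, the corrected callout syntax from the hunk above looks like this in a docs page (a minimal sketch mirroring the wording used in the changed files; the page's front matter is omitted):

```markdown
{{% notice note %}}
This feature is **experimental** and may change in future releases.
{{% /notice %}}
```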


@@ -17,9 +17,9 @@ LocalAI supports fine-tuning LLMs directly through the API and Web UI. Fine-tuni
 Fine-tuning is always enabled. When authentication is enabled, fine-tuning is a per-user feature (default OFF). Admins can enable it for specific users via the user management API.
-{{% alert note %}}
+{{% notice note %}}
 This feature is **experimental** and may change in future releases.
-{{% /alert %}}
+{{% /notice %}}
 ## Quick Start


@@ -7,9 +7,9 @@ url = '/features/quantization/'
 LocalAI supports model quantization directly through the API and Web UI. Quantization converts HuggingFace models to GGUF format and compresses them to smaller sizes for efficient inference with llama.cpp.
-{{% alert note %}}
+{{% notice note %}}
 This feature is **experimental** and may change in future releases.
-{{% /alert %}}
+{{% /notice %}}
 ## Supported Backends