+++
disableToc = false
title = "Features"
weight = 8
icon = "lightbulb"
type = "chapter"
url = "/features/"
+++

LocalAI provides a comprehensive set of features for running AI models locally. This section covers all the capabilities available in LocalAI.

## Core Features

- **[Text Generation](text-generation/)** - Generate text with GPT-compatible models using various backends
- **[Image Generation](image-generation/)** - Create images with Stable Diffusion and other diffusion models
- **[Audio to Text](audio-to-text/)** - Transcribe audio to text with speech-to-text models
- **[Text to Audio](text-to-audio/)** - Generate speech from text with TTS models
- **[Sound Generation](sound-generation/)** - Generate music and sound effects from text descriptions
- **[Voice Activity Detection](voice-activity-detection/)** - Detect speech segments in audio data
- **[Video Generation](video-generation/)** - Generate videos from text prompts and reference images
- **[Embeddings](embeddings/)** - Generate vector embeddings for semantic search and RAG applications
- **[GPT Vision](gpt-vision/)** - Analyze and understand images with vision-language models

## Advanced Features

- **[OpenAI Functions](openai-functions/)** - Use function calling and tools API with local models
- **[Realtime API](openai-realtime/)** - Low-latency multi-modal conversations (voice+text) over WebSocket
- **[Constrained Grammars](constrained_grammars/)** - Control model output format with BNF grammars
- **[GPU Acceleration](GPU-acceleration/)** - Optimize performance with GPU support
- **[Distributed Inference](distributed_inferencing/)** - Scale inference across multiple nodes
- **[P2P API](p2p/)** - Monitor and manage P2P worker and federated nodes
- **[Model Context Protocol (MCP)](mcp/)** - Enable agentic capabilities with MCP integration
- **[Agents](agents/)** - Autonomous AI agents with tools, knowledge base, and skills

## Specialized Features

- **[Object Detection](object-detection/)** - Detect and locate objects in images
- **[Reranker](reranker/)** - Improve retrieval accuracy with cross-encoder models
- **[Stores](stores/)** - Vector similarity search for embeddings
- **[Model Gallery](model-gallery/)** - Browse and install pre-configured models
- **[Backends](backends/)** - Learn about available backends and how to manage them
- **[Backend Monitor](backend-monitor/)** - Monitor backend status and resource usage
- **[Runtime Settings](runtime-settings/)** - Configure application settings via the web UI without restarting

## Getting Started

To start using these features, make sure you have [LocalAI installed](/installation/) and have [downloaded some models](/getting-started/models/). Then explore the feature pages above to learn how to use each capability.
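Most of the features above are exposed through LocalAI's OpenAI-compatible HTTP API. As a quick sanity check after installation, you can send a minimal chat completion request. This sketch assumes LocalAI is listening on its default port `8080` and that `gpt-4` is the name of a model you have installed; substitute your own host and model name as needed:

```shell
# Minimal chat completion request against a running LocalAI instance.
# "gpt-4" here is a placeholder model name - use one installed on your
# instance (for example, one pulled from the model gallery).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

If the request succeeds, the response is a JSON chat completion object, confirming the server and model are working before you move on to the other features.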