diff --git a/docs/content/features/openai-realtime.md b/docs/content/features/openai-realtime.md
index 4a0d47d0d..57a7fe597 100644
--- a/docs/content/features/openai-realtime.md
+++ b/docs/content/features/openai-realtime.md
@@ -4,8 +4,6 @@ title: "Realtime API"
 weight: 60
 ---
 
-# Realtime API
-
 LocalAI supports the [OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime) which enables low-latency, multi-modal conversations (voice and text) over WebSocket.
 
 To use the Realtime API, you need to configure a pipeline model that defines the components for Voice Activity Detection (VAD), Transcription (STT), Language Model (LLM), and Text-to-Speech (TTS).
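
A minimal sketch of what such a pipeline model configuration could look like, assuming the YAML `pipeline` section wires a `vad`, `transcription`, `llm`, and `tts` component together; the model names below are placeholders, not models this page prescribes:

```yaml
# Hypothetical pipeline model definition for the Realtime API.
# Each entry names another model configured in LocalAI that fills
# one stage of the voice pipeline.
name: gpt-realtime                # name clients pass when opening the WebSocket
pipeline:
  vad: silero-vad                 # Voice Activity Detection: detects speech segments
  transcription: whisper-base     # STT: transcribes detected speech to text
  llm: llama-3.2-1b-instruct      # LLM: generates the conversational reply
  tts: en-us-voice                # TTS: synthesizes the reply back to audio
```

Each referenced model must itself be installed and configured; the pipeline model only binds the four stages into one conversational loop.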