mirror of
https://github.com/bentoml/OpenLLM.git
synced 2026-04-20 23:18:16 -04:00
<div align="center">

<h1>🦾 OpenLLM: Self-Hosting LLMs Made Easy</h1>

[](https://github.com/bentoml/OpenLLM/blob/main/LICENSE)
[](https://pypi.org/project/openllm)
[](https://twitter.com/bentomlai)
[](https://l.bentoml.com/join-slack)

</div>
OpenLLM allows developers to run **any open-source LLM** (Llama 3.3, Qwen2.5, Phi-3 and [more](#supported-models)) or **custom models** as **OpenAI-compatible APIs** with a single command. It features a [built-in chat UI](#chat-ui), state-of-the-art inference backends, and a simplified workflow for creating enterprise-grade cloud deployments with Docker, Kubernetes, and [BentoCloud](#deploy-to-bentocloud).
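Because the served endpoint follows the OpenAI chat-completions schema, any OpenAI-style client can talk to it. A minimal sketch of building such a request with only the standard library — the model name, port, and host below are placeholders, not values from this README; use whatever model your OpenLLM server is actually serving:

```python
import json

# Request body following the OpenAI /v1/chat/completions schema.
# "llama3.3:70b" is a placeholder model id -- substitute the model
# your OpenLLM server was started with.
payload = {
    "model": "llama3.3:70b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is OpenLLM?"},
    ],
    "stream": False,
}

body = json.dumps(payload).encode("utf-8")

# To actually send it (assuming a local server, e.g. at http://localhost:3000):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:3000/v1/chat/completions",
#       data=body,
#       headers={"Content-Type": "application/json"},
#   )
#   resp = json.load(urllib.request.urlopen(req))

print(json.loads(body)["model"])
```

The same schema means drop-in compatibility with existing OpenAI SDKs: point the client's `base_url` at the OpenLLM server and the rest of the application code is unchanged.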
Understand the [design philosophy of OpenLLM](https://www.bentoml.com/blog/from-ollama-to-openllm-running-llms-in-the-cloud).