diff --git a/README.md b/README.md
index 05e3b496..f6335ea5 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
+
🦾 OpenLLM: Self-Hosting LLMs Made Easy
-
[](https://github.com/bentoml/OpenLLM/blob/main/LICENSE)
[](https://pypi.org/project/openllm)
@@ -8,6 +8,8 @@
[](https://twitter.com/bentomlai)
[](https://l.bentoml.com/join-slack)
+
+
OpenLLM allows developers to run **any open-source LLM** (Llama 3.3, Qwen2.5, Phi3, and [more](#supported-models)) or **custom models** as **OpenAI-compatible APIs** with a single command. It features a [built-in chat UI](#chat-ui), state-of-the-art inference backends, and a simplified workflow for creating enterprise-grade cloud deployments with Docker, Kubernetes, and [BentoCloud](#deploy-to-bentocloud).
Understand the [design philosophy of OpenLLM](https://www.bentoml.com/blog/from-ollama-to-openllm-running-llms-in-the-cloud).
diff --git a/README.md.tpl b/README.md.tpl
index 0223cdba..c5ef9986 100644
--- a/README.md.tpl
+++ b/README.md.tpl
@@ -1,6 +1,6 @@
+
🦾 OpenLLM: Self-Hosting LLMs Made Easy
-
[](https://github.com/bentoml/OpenLLM/blob/main/LICENSE)
[](https://pypi.org/project/openllm)
@@ -8,6 +8,8 @@
[](https://twitter.com/bentomlai)
[](https://l.bentoml.com/join-slack)
+
+
OpenLLM allows developers to run **any open-source LLM** (Llama 3.3, Qwen2.5, Phi3, and [more](#supported-models)) or **custom models** as **OpenAI-compatible APIs** with a single command. It features a [built-in chat UI](#chat-ui), state-of-the-art inference backends, and a simplified workflow for creating enterprise-grade cloud deployments with Docker, Kubernetes, and [BentoCloud](#deploy-to-bentocloud).
Understand the [design philosophy of OpenLLM](https://www.bentoml.com/blog/from-ollama-to-openllm-running-llms-in-the-cloud).
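The README text in this diff highlights that OpenLLM exposes models behind OpenAI-compatible APIs. As a minimal sketch of what that compatibility means for a client, the snippet below builds the standard request body for the OpenAI `chat/completions` route; the model id `llama3.3` and the `localhost:3000` endpoint are illustrative assumptions, not values taken from this diff.

```python
import json

# Hypothetical local endpoint an OpenLLM server might expose; adjust
# host/port to your own deployment.
ENDPOINT = "http://localhost:3000/v1/chat/completions"

# Standard OpenAI-style chat-completions payload. The model id below is
# a placeholder; any model served by the running instance would work.
payload = {
    "model": "llama3.3",
    "messages": [
        {"role": "user", "content": "What does self-hosting an LLM mean?"},
    ],
}

# Serialize to the JSON body that would be POSTed to the endpoint.
body = json.dumps(payload)
print(body)
```

Because the wire format matches OpenAI's, any existing OpenAI client library can be pointed at such a server simply by overriding its base URL.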