diff --git a/README.md b/README.md
index 67d9e0c0..51a467ae 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,6 @@
-# 🦾 OpenLLM: Self-Hosting LLMs Made Easy
+
+| Model | Parameters | Required GPU | Start a Server |
+| ----- | ---------- | ------------ | -------------- |
+| deepseek-r1 | 671B | 80Gx16 | `openllm serve deepseek-r1:671b-fc3d` |
+| deepseek-r1-distill | 14B | 80G | `openllm serve deepseek-r1-distill:qwen2.5-14b-98a9` |
+| deepseek-v3 | 671B | 80Gx16 | `openllm serve deepseek-v3:671b-instruct-d7ec` |
+| gemma2 | 2B | 12G | `openllm serve gemma2:2b-instruct-747d` |
+| llama3.1 | 8B | 24G | `openllm serve llama3.1:8b-instruct-3c0c` |
+| llama3.2 | 1B | 24G | `openllm serve llama3.2:1b-instruct-f041` |
+| llama3.3 | 70B | 80Gx2 | `openllm serve llama3.3:70b-instruct-b850` |
+| mistral | 8B | 24G | `openllm serve mistral:8b-instruct-50e8` |
+| mistral-large | 123B | 80Gx4 | `openllm serve mistral-large:123b-instruct-1022` |
+| mistralai | 24B | 80G | `openllm serve mistralai:24b-small-instruct-2501-0e69` |
+| mixtral | 7B | 80Gx2 | `openllm serve mixtral:8x7b-instruct-v0.1-b752` |
+| phi4 | 14B | 80G | `openllm serve phi4:14b-c12d` |
+| pixtral | 12B | 80G | `openllm serve pixtral:12b-240910-c344` |
+| qwen2.5 | 7B | 24G | `openllm serve qwen2.5:7b-instruct-3260` |
+| qwen2.5-coder | 7B | 24G | `openllm serve qwen2.5-coder:7b-instruct-e75d` |
+| qwen2.5vl | 3B | 24G | `openllm serve qwen2.5vl:3b-instruct-4686` |
+
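+Once a server is up, it can be queried with any OpenAI-compatible client. The sketch below is a minimal example, assuming the server's default address of `http://localhost:3000/v1`; adjust the base URL if you serve on a different host or port.
+
+```python
+from openai import OpenAI
+
+# Point an OpenAI-compatible client at the local OpenLLM server
+# (assumed default address: http://localhost:3000/v1).
+client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")
+
+# Ask the server which model it is serving instead of hard-coding an id.
+model_id = client.models.list().data[0].id
+
+response = client.chat.completions.create(
+    model=model_id,
+    messages=[{"role": "user", "content": "Explain superconductors in one sentence."}],
+)
+print(response.choices[0].message.content)
+```
+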
...
@@ -46,15 +142,16 @@ To start an LLM server locally, use the `openllm serve` command and specify the
> [!NOTE]
> OpenLLM does not store model weights. A Hugging Face token (HF_TOKEN) is required for gated models.
+>
> 1. Create your Hugging Face token [here](https://huggingface.co/settings/tokens).
-> 2. Request access to the gated model, such as [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
+> 2. Request access to the gated model, such as [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
> 3. Set your token as an environment variable by running:
> ```bash
> export HF_TOKEN=