From bb7dfb466d69cb48d8692b941892fedfc2906cc0 Mon Sep 17 00:00:00 2001
From: Aaron Pham
Date: Tue, 1 Apr 2025 14:11:25 -0400
Subject: [PATCH] chore: update instructions for deploy with openllm (#1166)

Signed-off-by: Aaron Pham
---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f6335ea5..6a8e2a52 100644
--- a/README.md
+++ b/README.md
@@ -122,7 +122,6 @@ OpenLLM supports a wide range of state-of-the-art open-source LLMs. You can also
-
 For the full model list, see the [OpenLLM models repository](https://github.com/bentoml/openllm-models).
 
 ## Start an LLM server
@@ -252,7 +251,7 @@ OpenLLM supports LLM cloud deployment via BentoML, the unified model serving fra
 [Sign up for BentoCloud](https://www.bentoml.com/) for free and [log in](https://docs.bentoml.com/en/latest/bentocloud/how-tos/manage-access-token.html). Then, run `openllm deploy` to deploy a model to BentoCloud:
 
 ```bash
-openllm deploy llama3.2:1b
+openllm deploy llama3.2:1b --env HF_TOKEN
 ```
 
 > [!NOTE]
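
For context, the change above passes `--env HF_TOKEN` so the deployment can read a Hugging Face access token from the caller's shell environment. A minimal sketch of the intended workflow (the token value and model tag are placeholders, and the `openllm` invocation is shown commented out because it assumes an OpenLLM install and a BentoCloud login):

```shell
# Export a Hugging Face access token so gated models (e.g. Llama 3.2) can be
# pulled during deployment. The value below is a placeholder, not a real token.
export HF_TOKEN=hf_placeholder_token

# The updated deploy step forwards the variable into the BentoCloud deployment:
#   openllm deploy llama3.2:1b --env HF_TOKEN
# (not executed here: requires openllm and an authenticated BentoCloud session)

# Confirm the variable is set before deploying.
echo "HF_TOKEN exported: ${HF_TOKEN:+yes}"
```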