Mirror of https://github.com/bentoml/OpenLLM.git (synced 2026-02-07 06:12:12 -05:00)
infra: remove tsconfig (#595)
* infra: remove tsconfig
* ci: auto fixes from pre-commit.ci (for more information, see https://pre-commit.ci)
* chore: filter only ec python and jsx
* chore: update pnpm lock
* chore: run vendor
* chore: ignore blame
* chore: ignore on CI

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
.editorconfig

@@ -10,5 +10,9 @@ indent_size = 2
 [openllm-client/src/openllm_client/pb/v1/*]
 indent_size = unset

+[/node_modules/*]
+indent_size = unset
+indent_style = unset
+
 [{package.json,.travis.yml,.eslintrc.json}]
 indent_style = space
.git-blame-ignore-revs

@@ -14,3 +14,5 @@ eddbc063743b198d72c21bd7dced59dbd949b9f1
 b545ad2ad1e3acbb69f6578d8a5ee03613867505
 # 09/01/2023: ignore new line split on comma-separated item
 7d893e6cd217ddfe845210503c8f2cf1667d16b6
+# 11/09/2023: running ruff format preview
+ac377fe490bd886cf76c3855e6a2a50fc0e03b51
.pre-commit-config.yaml

@@ -31,9 +31,11 @@ repos:
   - id: editorconfig-checker
     verbose: true
     alias: ec
+    types_or: [python, javascript]
     exclude: |
       (?x)^(
-          openllm-python/src/openllm/cli/entrypoint.py
+          openllm-python/src/openllm/cli/entrypoint.py |
+          openllm-client/src/openllm_client/pb.*
       )$
 - repo: meta
   hooks:
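An aside on the `exclude` pattern above: pre-commit evaluates it as a Python regular expression, and the leading `(?x)` (verbose) flag makes the layout whitespace and newlines insignificant, so the two paths form a single alternation anchored by `^(...)$`. A minimal sketch of the matching behavior (illustration only, not part of the commit):

```python
import re

# Same pattern as in .pre-commit-config.yaml; (?x) ignores the layout whitespace.
pattern = re.compile(
    r"""(?x)^(
      openllm-python/src/openllm/cli/entrypoint.py |
      openllm-client/src/openllm_client/pb.*
    )$"""
)

assert pattern.match("openllm-python/src/openllm/cli/entrypoint.py")
assert pattern.match("openllm-client/src/openllm_client/pb/v1/schema.py")
assert not pattern.match("openllm-python/src/openllm/__init__.py")
```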
openllm-python/CHANGELOG.md (2 lines changed, generated)
@@ -232,7 +232,7 @@ No significant changes.

 - OpenLLM now include a community-maintained ClojureScript UI, Thanks @GutZuFusss

-  See [this README.md](/openllm-contrib/clojure/README.md) for more information
+  See [this README.md](/external/clojure/README.md) for more information

   OpenLLM will also include a `--cors` to enable start with cors enabled.
   [#89](https://github.com/bentoml/openllm/issues/89)
openllm-python/README.md (319 lines changed, generated)
@@ -52,7 +52,7 @@ Key features include:

 🤖️ **Bring your own LLM**: Fine-tune any LLM to suit your needs. You can load LoRA layers to fine-tune models for higher accuracy and performance for specific tasks. A unified fine-tuning API for models (`LLM.tuning()`) is coming soon.

-⚡ **Quantization**: Run inference with less computational and memory costs though quantization techniques like [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) and [GPTQ](https://arxiv.org/abs/2210.17323).
+⚡ **Quantization**: Run inference with less computational and memory costs with quantization techniques such as [LLM.int8](https://arxiv.org/abs/2208.07339), [SpQR (int4)](https://arxiv.org/abs/2306.03078), [AWQ](https://arxiv.org/pdf/2306.00978.pdf), [GPTQ](https://arxiv.org/abs/2210.17323), and [SqueezeLLM](https://arxiv.org/pdf/2306.07629v2.pdf).

 📡 **Streaming**: Support token streaming through server-sent events (SSE). You can use the `/v1/generate_stream` endpoint for streaming responses from LLMs.
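A side note on the streaming feature above: `/v1/generate_stream` speaks standard SSE, so any HTTP client that can read a chunked response works. A minimal sketch, assuming a local server on port 3000 and a `prompt` field as in the `/v1/generate` example further down; the exact payload schema may differ between OpenLLM versions:

```python
import requests

# Stream tokens from a running OpenLLM server over server-sent events.
with requests.post(
    "http://localhost:3000/v1/generate_stream",
    json={"prompt": "What are large language models?"},
    stream=True,
) as response:
    for line in response.iter_lines():
        # SSE frames the payload as lines of the form "data: <chunk>".
        if line.startswith(b"data:"):
            print(line[len(b"data:"):].strip().decode())
```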
@@ -106,14 +106,13 @@ Options:
   -h, --help  Show this message and exit.

 Commands:
-  build       Package a given models into a Bento.
+  build       Package a given models into a BentoLLM.
   import      Setup LLM interactively.
-  instruct    Instruct agents interactively for given tasks, from a...
   models      List all supported models.
-  prune       Remove all saved models, (and optionally bentos) built with...
-  query       Ask a LLM interactively, from a terminal.
-  start       Start any LLM as a REST server.
-  start-grpc  Start any LLM as a gRPC server.
+  prune       Remove all saved models, (and optionally bentos) built with OpenLLM locally.
+  query       Query a LLM interactively, from a terminal.
+  start       Start a LLMServer for any supported LLM.
+  start-grpc  Start a gRPC LLMServer for any supported LLM.

 Extensions:
   build-base-container  Base image builder for BentoLLM.
@@ -130,7 +129,7 @@ Extensions:
 OpenLLM allows you to quickly spin up an LLM server using `openllm start`. For example, to start an [OPT](https://huggingface.co/docs/transformers/model_doc/opt) server, run the following:

 ```bash
-openllm start opt
+openllm start facebook/opt-1.3b
 ```

 This starts the server at [http://0.0.0.0:3000/](http://0.0.0.0:3000/). OpenLLM downloads the model to the BentoML local Model Store if they have not been registered before. To view your local models, run `bentoml models list`.
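The running server can also be exercised from Python with the bundled client used elsewhere in this README; a quick sketch, where the exact `generate` signature is an assumption rather than a documented guarantee:

```python
import openllm

# Point the client at the server started by `openllm start`.
client = openllm.client.HTTPClient("http://localhost:3000")
print(client.generate("What are large language models?"))
```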
@@ -153,7 +152,7 @@ openllm query 'Explain to me the difference between "further" and "farther"'
 OpenLLM seamlessly supports many models and their variants. You can specify different variants of the model to be served by providing the `--model-id` option. For example:

 ```bash
-openllm start opt --model-id facebook/opt-2.7b
+openllm start facebook/opt-2.7b
 ```

 > [!NOTE]
@@ -166,6 +165,54 @@ openllm start opt --model-id facebook/opt-2.7b

 OpenLLM currently supports the following models. By default, OpenLLM doesn't include dependencies to run all models. The extra model-specific dependencies can be installed with the instructions below.

+<details>
+<summary>Mistral</summary>
+
+### Quickstart
+
+Run the following commands to quickly spin up a Llama 2 server and send a request to it.
+
+```bash
+openllm start HuggingFaceH4/zephyr-7b-beta
+export OPENLLM_ENDPOINT=http://localhost:3000
+openllm query 'What are large language models?'
+```
+
+> [!NOTE]
+> Note that any Mistral variants can be deployed with OpenLLM.
+> Visit the [Hugging Face Model Hub](https://huggingface.co/models?sort=trending&search=mistral) to see more Mistral compatible models.
+
+### Supported models
+
+You can specify any of the following Mistral models by using `--model-id`.
+
+- [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+- [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
+- [amazon/MistralLite](https://huggingface.co/amazon/MistralLite)
+- [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
+- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
+- Any other models that strictly follows the [MistralForCausalLM](https://huggingface.co/docs/transformers/main/en/model_doc/mistral#transformers.MistralForCausalLM) architecture
+
+### Supported backends
+
+- PyTorch (Default):
+
+  ```bash
+  openllm start HuggingFaceH4/zephyr-7b-beta --backend pt
+  ```
+
+- vLLM (Recommended):
+
+  ```bash
+  pip install "openllm[vllm]"
+  openllm start HuggingFaceH4/zephyr-7b-beta --backend vllm
+  ```
+
+> [!NOTE]
+> Currently when using the vLLM backend, adapters is yet to be supported.
+
+</details>
+
 <details>
 <summary>Llama</summary>
@@ -182,7 +229,7 @@ pip install "openllm[llama]"
 Run the following commands to quickly spin up a Llama 2 server and send a request to it.

 ```bash
-openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf
+openllm start meta-llama/Llama-2-7b-chat-hf
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -225,18 +272,18 @@ You can specify any of the following Llama models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf --backend pt
+  openllm start meta-llama/Llama-2-7b-chat-hf --backend pt
   ```

 - vLLM (Recommended):

   ```bash
   pip install "openllm[llama, vllm]"
-  openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf --backend vllm
+  openllm start meta-llama/Llama-2-7b-chat-hf --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -256,7 +303,7 @@ pip install "openllm[chatglm]"
 Run the following commands to quickly spin up a ChatGLM server and send a request to it.

 ```bash
-openllm start chatglm --model-id thudm/chatglm-6b
+openllm start thudm/chatglm2-6b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -277,7 +324,7 @@ You can specify any of the following ChatGLM models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start chatglm --model-id thudm/chatglm-6b --backend pt
+  openllm start thudm/chatglm2-6b --backend pt
   ```

 </details>
@@ -298,7 +345,7 @@ pip install openllm
 Run the following commands to quickly spin up a Dolly-v2 server and send a request to it.

 ```bash
-openllm start dolly-v2 --model-id databricks/dolly-v2-3b
+openllm start databricks/dolly-v2-3b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -317,17 +364,17 @@ You can specify any of the following Dolly-v2 models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start dolly-v2 --model-id databricks/dolly-v2-3b --backend pt
+  openllm start databricks/dolly-v2-3b --backend pt
   ```

 - vLLM:

   ```bash
-  openllm start dolly-v2 --model-id databricks/dolly-v2-3b --backend vllm
+  openllm start databricks/dolly-v2-3b --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -347,7 +394,7 @@ pip install "openllm[falcon]"
 Run the following commands to quickly spin up a Falcon server and send a request to it.

 ```bash
-openllm start falcon --model-id tiiuae/falcon-7b
+openllm start tiiuae/falcon-7b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -367,18 +414,18 @@ You can specify any of the following Falcon models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start falcon --model-id tiiuae/falcon-7b --backend pt
+  openllm start tiiuae/falcon-7b --backend pt
   ```

 - vLLM:

   ```bash
   pip install "openllm[falcon, vllm]"
-  openllm start falcon --model-id tiiuae/falcon-7b --backend vllm
+  openllm start tiiuae/falcon-7b --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -398,7 +445,7 @@ pip install "openllm[flan-t5]"
 Run the following commands to quickly spin up a Flan-T5 server and send a request to it.

 ```bash
-openllm start flan-t5 --model-id google/flan-t5-large
+openllm start google/flan-t5-large
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -419,11 +466,11 @@ You can specify any of the following Flan-T5 models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start flan-t5 --model-id google/flan-t5-large --backend pt
+  openllm start google/flan-t5-large --backend pt
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -443,7 +490,7 @@ pip install openllm
 Run the following commands to quickly spin up a GPT-NeoX server and send a request to it.

 ```bash
-openllm start gpt-neox --model-id eleutherai/gpt-neox-20b
+openllm start eleutherai/gpt-neox-20b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -460,17 +507,17 @@ You can specify any of the following GPT-NeoX models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start gpt-neox --model-id eleutherai/gpt-neox-20b --backend pt
+  openllm start eleutherai/gpt-neox-20b --backend pt
   ```

 - vLLM:

   ```bash
-  openllm start gpt-neox --model-id eleutherai/gpt-neox-20b --backend vllm
+  openllm start eleutherai/gpt-neox-20b --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -490,7 +537,7 @@ pip install "openllm[mpt]"
 Run the following commands to quickly spin up a MPT server and send a request to it.

 ```bash
-openllm start mpt --model-id mosaicml/mpt-7b-chat
+openllm start mosaicml/mpt-7b-chat
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -513,18 +560,18 @@ You can specify any of the following MPT models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start mpt --model-id mosaicml/mpt-7b-chat --backend pt
+  openllm start mosaicml/mpt-7b-chat --backend pt
   ```

 - vLLM (Recommended):

   ```bash
   pip install "openllm[mpt, vllm]"
-  openllm start mpt --model-id mosaicml/mpt-7b-chat --backend vllm
+  openllm start mosaicml/mpt-7b-chat --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -544,7 +591,7 @@ pip install "openllm[opt]"
 Run the following commands to quickly spin up an OPT server and send a request to it.

 ```bash
-openllm start opt --model-id facebook/opt-2.7b
+openllm start facebook/opt-2.7b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -566,18 +613,18 @@ You can specify any of the following OPT models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start opt --model-id facebook/opt-2.7b --backend pt
+  openllm start facebook/opt-2.7b --backend pt
   ```

 - vLLM:

   ```bash
   pip install "openllm[opt, vllm]"
-  openllm start opt --model-id facebook/opt-2.7b --backend vllm
+  openllm start facebook/opt-2.7b --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -597,7 +644,7 @@ pip install openllm
 Run the following commands to quickly spin up a StableLM server and send a request to it.

 ```bash
-openllm start stablelm --model-id stabilityai/stablelm-tuned-alpha-7b
+openllm start stabilityai/stablelm-tuned-alpha-7b
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -617,17 +664,17 @@ You can specify any of the following StableLM models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start stablelm --model-id stabilityai/stablelm-tuned-alpha-7b --backend pt
+  openllm start stabilityai/stablelm-tuned-alpha-7b --backend pt
   ```

 - vLLM:

   ```bash
-  openllm start stablelm --model-id stabilityai/stablelm-tuned-alpha-7b --backend vllm
+  openllm start stabilityai/stablelm-tuned-alpha-7b --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -647,7 +694,7 @@ pip install "openllm[starcoder]"
 Run the following commands to quickly spin up a StarCoder server and send a request to it.

 ```bash
-openllm start startcoder --model-id [bigcode/starcoder](https://huggingface.co/bigcode/starcoder)
+openllm start bigcode/starcoder
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -665,18 +712,18 @@ You can specify any of the following StarCoder models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start startcoder --model-id bigcode/starcoder --backend pt
+  openllm start bigcode/starcoder --backend pt
   ```

 - vLLM:

   ```bash
   pip install "openllm[startcoder, vllm]"
-  openllm start startcoder --model-id bigcode/starcoder --backend vllm
+  openllm start bigcode/starcoder --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -696,7 +743,7 @@ pip install "openllm[baichuan]"
 Run the following commands to quickly spin up a Baichuan server and send a request to it.

 ```bash
-openllm start baichuan --model-id baichuan-inc/baichuan-13b-base
+openllm start baichuan-inc/baichuan-13b-base
 export OPENLLM_ENDPOINT=http://localhost:3000
 openllm query 'What are large language models?'
 ```
@@ -718,18 +765,18 @@ You can specify any of the following Baichuan models by using `--model-id`.
 - PyTorch (Default):

   ```bash
-  openllm start baichuan --model-id baichuan-inc/baichuan-13b-base --backend pt
+  openllm start baichuan-inc/baichuan-13b-base --backend pt
   ```

 - vLLM:

   ```bash
   pip install "openllm[baichuan, vllm]"
-  openllm start baichuan --model-id baichuan-inc/baichuan-13b-base --backend vllm
+  openllm start baichuan-inc/baichuan-13b-base --backend vllm
   ```

 > [!NOTE]
-> Currently when using the vLLM backend, quantization and adapters are not supported.
+> Currently when using the vLLM backend, adapters is yet to be supported.

 </details>
@@ -740,7 +787,7 @@ More models will be integrated with OpenLLM and we welcome your contributions if
 OpenLLM allows you to start your model server on multiple GPUs and specify the number of workers per resource assigned using the `--workers-per-resource` option. For example, if you have 4 available GPUs, you set the value as one divided by the number as only one instance of the Runner server will be spawned.

 ```bash
-openllm start opt --workers-per-resource 0.25
+openllm start facebook/opt-2.7b --workers-per-resource 0.25
 ```

 > [!NOTE]
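To make the `--workers-per-resource` arithmetic above concrete, a tiny worked example (pure illustration, not OpenLLM code):

```python
# One Runner instance should span all GPUs, so each worker's share of a
# single GPU "resource" is one divided by the GPU count.
gpu_count = 4
workers_per_resource = 1 / gpu_count
assert workers_per_resource == 0.25  # the value passed on the CLI above
```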
@@ -760,7 +807,7 @@ Different LLMs may support multiple runtime implementations. Models that have `v
 To specify a specific runtime for your chosen model, use the `--backend` option. For example:

 ```bash
-openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf --backend vllm
+openllm start meta-llama/Llama-2-7b-chat-hf --backend vllm
 ```

 Note:
@@ -772,9 +819,20 @@ Note:

 Quantization is a technique to reduce the storage and computation requirements for machine learning models, particularly during inference. By approximating floating-point numbers as integers (quantized values), quantization allows for faster computations, reduced memory footprint, and can make it feasible to deploy large models on resource-constrained devices.

-OpenLLM supports quantization through two methods - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) and [GPTQ](https://arxiv.org/abs/2210.17323).
+OpenLLM supports the following quantization techniques

-To run a model using the `bitsandbytes` method for quantization, you can use the following command:
+- [LLM.int8(): 8-bit Matrix Multiplication](https://arxiv.org/abs/2208.07339) through [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
+- [SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression](https://arxiv.org/abs/2306.03078) through [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
+- [AWQ: Activation-aware Weight Quantization](https://arxiv.org/abs/2306.00978),
+- [GPTQ: Accurate Post-Training Quantization](https://arxiv.org/abs/2210.17323)
+- [SqueezeLLM: Dense-and-Sparse Quantization](https://arxiv.org/abs/2306.07629).
+
+### PyTorch backend
+
+With PyTorch backend, OpenLLM supports `int8`, `int4`, `gptq`
+
+For using int8 and int4 quantization through `bitsandbytes`, you can use the following command:

 ```bash
 openllm start opt --quantize int8
@@ -783,7 +841,7 @@ openllm start opt --quantize int8
 To run inference with `gptq`, simply pass `--quantize gptq`:

 ```bash
-openllm start falcon --model-id TheBloke/falcon-40b-instruct-GPTQ --quantize gptq --device 0
+openllm start TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
 ```

 > [!NOTE]
@@ -791,60 +849,129 @@ openllm start falcon --model-id TheBloke/falcon-40b-instruct-GPTQ --quantize gpt
 > first to install the dependency. From the GPTQ paper, it is recommended to quantized the weights before serving.
 > See [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) for more information on GPTQ quantization.
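For intuition, `--quantize int8` corresponds roughly to loading the weights with bitsandbytes via `transformers`; the sketch below is not OpenLLM's actual code path, and the model id is only an example:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Rough equivalent of int8 quantization at load time through bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b",  # example model id, not prescribed by the commit
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```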
-## 🛠️ Fine-tuning support (Experimental)
+### vLLM backend
+
+With vLLM backend, OpenLLM supports `awq`, `squeezellm`
+
+To run inference with `awq`, simply pass `--quantize awq`:
+
+```bash
+openllm start mistral --model-id TheBloke/zephyr-7B-alpha-AWQ --quantize awq
+```
+
+To run inference with `squeezellm`, simply pass `--quantize squeezellm`:
+
+```bash
+openllm start squeeze-ai-lab/sq-llama-2-7b-w4-s0 --quantize squeezellm --serialization legacy
+```
+
+> [!IMPORTANT]
+> Since both `squeezellm` and `awq` are weight-aware quantization methods, meaning the quantization is done during training, all pre-trained weights needs to get quantized before inference time. Make sure to fine compatible weights on HuggingFace Hub for your model of choice.
+
+## 🛠️ Serving fine-tuning layers
 [PEFT](https://huggingface.co/docs/peft/index), or Parameter-Efficient Fine-Tuning, is a methodology designed to fine-tune pre-trained models more efficiently. Instead of adjusting all model parameters, PEFT focuses on tuning only a subset, reducing computational and storage costs. [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) (Low-Rank Adaptation) is one of the techniques supported by PEFT. It streamlines fine-tuning by using low-rank decomposition to represent weight updates, thereby drastically reducing the number of trainable parameters.

 With OpenLLM, you can take advantage of the fine-tuning feature by serving models with any PEFT-compatible layers using the `--adapter-id` option. For example:

 ```bash
-openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes
+openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes:default
 ```

 OpenLLM also provides flexibility by supporting adapters from custom file paths:

 ```bash
-openllm start opt --model-id facebook/opt-6.7b --adapter-id /path/to/adapters
+openllm start opt --model-id facebook/opt-6.7b --adapter-id /path/to/adapters:local_adapter
 ```

 To use multiple adapters, use the following format:

 ```bash
-openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6.7b-lora --adapter-id aarnphm/opt-6.7b-lora:french_lora
+openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6.7b-lora:default --adapter-id aarnphm/opt-6.7b-french:french_lora
 ```

-By default, the first specified `adapter-id` is the default LoRA layer, but optionally you can specify a different LoRA layer for inference using the `/v1/adapters` endpoint:
+By default, all adapters will be injected into the models during startup. Adapters can be specified per request via `adapter_name`:

 ```bash
-curl -X POST http://localhost:3000/v1/adapters --json '{"adapter_name": "vn_lora"}'
+curl -X 'POST' \
+  'http://localhost:3000/v1/generate' \
+  -H 'accept: application/json' \
+  -H 'Content-Type: application/json' \
+  -d '{
+  "prompt": "What is the meaning of life?",
+  "stop": [
+    "philosopher"
+  ],
+  "llm_config": {
+    "max_new_tokens": 256,
+    "temperature": 0.75,
+    "top_k": 15,
+    "top_p": 1
+  },
+  "adapter_name": "default"
+}'
 ```

 Note that if you are using multiple adapter names and IDs, it is recommended to set the default adapter before sending the inference to avoid any performance degradation.
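The curl request above translates directly to Python; a sketch assuming the same server and adapters are running:

```python
import requests

# Select a LoRA layer per request by passing `adapter_name` in the payload.
response = requests.post(
    "http://localhost:3000/v1/generate",
    json={
        "prompt": "What is the meaning of life?",
        "stop": ["philosopher"],
        "llm_config": {"max_new_tokens": 256, "temperature": 0.75, "top_k": 15, "top_p": 1},
        "adapter_name": "default",
    },
)
print(response.json())
```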
 To include this into the Bento, you can specify the `--adapter-id` option when using the `openllm build` command:

 ```bash
-openllm build opt --model-id facebook/opt-6.7b --adapter-id ...
+openllm build facebook/opt-6.7b --adapter-id ...
 ```

 If you use a relative path for `--adapter-id`, you need to add `--build-ctx`.

 ```bash
-openllm build opt --adapter-id ./path/to/adapter_id --build-ctx .
+openllm build facebook/opt-6.7b --adapter-id ./path/to/adapter_id --build-ctx .
 ```

-> [!NOTE]
-> We will gradually roll out support for fine-tuning all models.
-> Currently, the models supporting fine-tuning with OpenLLM include: OPT, Falcon, and LlaMA.
+> [!IMPORTANT]
+> Fine-tuning support is still experimental and currently only works with PyTorch backend. vLLM support is coming soon.
 ## 🥅 Playground and Chat UI

 The following UIs are currently available for OpenLLM:

-| UI                                                                                         | Owner                                        | Type                 | Progress |
-| ----------------------------------------------------------------------------------------- | -------------------------------------------- | -------------------- | -------- |
-| [Clojure](https://github.com/bentoml/OpenLLM/blob/main/openllm-contrib/clojure/README.md)  | [@GutZuFusss](https://github.com/GutZuFusss) | Community-maintained | 🔧       |
-| TS                                                                                         | BentoML Team                                 |                      | 🚧       |
+| UI                                                                                  | Owner                                        | Type                 | Progress |
+| ---------------------------------------------------------------------------------- | -------------------------------------------- | -------------------- | -------- |
+| [Clojure](https://github.com/bentoml/OpenLLM/blob/main/external/clojure/README.md)  | [@GutZuFusss](https://github.com/GutZuFusss) | Community-maintained | 🔧       |
+| TS                                                                                  | BentoML Team                                 |                      | 🚧       |
+## 🐍 Python SDK
+
+Each LLM can be instantiated with `openllm.LLM`:
+
+```python
+import openllm
+
+llm = openllm.LLM('facebook/opt-2.7b')
+```
+
+The main inference API is the streaming `generate_iterator` method:
+
+```python
+async for generation in llm.generate_iterator('What is the meaning of life?'): print(generation.outputs[0].text)
+```
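A sketch of driving that streaming API from synchronous code and collecting the full completion; it assumes each yielded generation carries an incremental text chunk, which may differ between backends:

```python
import asyncio

import openllm

llm = openllm.LLM('facebook/opt-2.7b')

async def complete(prompt: str) -> str:
    # Accumulate the streamed chunks into one string.
    chunks = []
    async for generation in llm.generate_iterator(prompt):
        chunks.append(generation.outputs[0].text)
    return ''.join(chunks)

print(asyncio.run(complete('What is the meaning of life?')))
```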
+> [!NOTE]
+> The motivation behind making `llm.generate_iterator` an async generator is to provide support for Continuous batching with vLLM backend. By having the async endpoints, each prompt
+> will be added correctly to the request queue to process with vLLM backend.
+
+There is also a _one-shot_ `generate` method:
+
+```python
+await llm.generate('What is the meaning of life?')
+```
+
+This method is easy to use for one-shot generation use case, but merely served as an example how to use `llm.generate_iterator` as it uses `generate_iterator` under the hood.
+
+> [!IMPORTANT]
+> If you need to call your code in a synchronous context, you can use `asyncio.run` that wraps an async function:
+>
+> ```python
+> import asyncio
+> async def generate(prompt, **attrs): return await llm.generate(prompt, **attrs)
+> asyncio.run(generate("The meaning of life is", temperature=0.23))
+> ```

 ## ⚙️ Integrations
@@ -856,29 +983,23 @@ integrate with other powerful tools easily. We currently offer integration with

 ### BentoML

-OpenLLM models can be integrated as a
+OpenLLM LLM can be integrated as a
 [Runner](https://docs.bentoml.com/en/latest/concepts/runner.html) in your
-BentoML service. These runners have a `generate` method that takes a string as a
-prompt and returns a corresponding output string. This will allow you to plug
-and play any OpenLLM models with your existing ML workflow.
+BentoML service. Simply call `await llm.generate` to generate text. Note that
+`llm.generate` uses `runner` under the hood:

 ```python
 import bentoml
 import openllm

-model = "opt"
+llm = openllm.LLM('facebook/opt-2.7b')

-llm_config = openllm.AutoConfig.for_model(model)
-llm_runner = openllm.Runner(model, llm_config=llm_config)
+svc = bentoml.Service(name="llm-opt-service", runners=[llm.runner])

-svc = bentoml.Service(
-    name=f"llm-opt-service", runners=[llm_runner]
-)
-
-@svc.api(input=Text(), output=Text())
+@svc.api(input=bentoml.io.Text(), output=bentoml.io.Text())
 async def prompt(input_text: str) -> str:
-    answer = await llm_runner.generate(input_text)
-    return answer
+    generation = await llm.generate(input_text)
+    return generation.outputs[0].text
 ```
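Once the service above is running (for example `bentoml serve service:svc`, assuming the snippet lives in `service.py`), BentoML exposes the API under the function name, so the endpoint can be smoke-tested over HTTP; a quick sketch:

```python
import requests

# The Text IO descriptor accepts a raw text body; the route mirrors the
# `prompt` function name by BentoML convention.
answer = requests.post(
    "http://localhost:3000/prompt",
    data="What are large language models?",
    headers={"Content-Type": "text/plain"},
)
print(answer.text)
```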
 ### [LangChain](https://python.langchain.com/docs/ecosystem/integrations/openllm)
@@ -950,24 +1071,6 @@ agent = transformers.HfAgent("http://localhost:3000/hf/agent") # URL that runs
 agent.run("Is the following `text` positive or negative?", text="I don't like how this models is generate inputs")
 ```

-> [!IMPORTANT]
-> Only `starcoder` is currently supported with Agent integration.
-> The example above was also run with four T4s on EC2 `g4dn.12xlarge`
-
-If you want to use OpenLLM client to ask questions to the running agent, you can
-also do so:
-
-```python
-import openllm
-
-client = openllm.client.HTTPClient("http://localhost:3000")
-
-client.ask_agent(
-  task="Is the following `text` positive or negative?",
-  text="What are you thinking about?",
-)
-```
-
 <!-- hatch-fancy-pypi-readme interim stop -->

 
@@ -983,10 +1086,10 @@ There are several ways to deploy your LLMs:

 ### 🐳 Docker container

 1. **Building a Bento**: With OpenLLM, you can easily build a Bento for a
-   specific model, like `dolly-v2`, using the `build` command.:
+   specific model, like `mistralai/Mistral-7B-Instruct-v0.1`, using the `build` command.:

    ```bash
-   openllm build dolly-v2
+   openllm build mistralai/Mistral-7B-Instruct-v0.1
    ```

    A
@@ -1023,10 +1126,10 @@ serverless cloud for shipping and scaling AI applications.
 > specific API token and the BentoCloud endpoint respectively.

 3. **Bulding a Bento**: With OpenLLM, you can easily build a Bento for a
-   specific model, such as `dolly-v2`:
+   specific model, such as `mistralai/Mistral-7B-Instruct-v0.1`:

    ```bash
-   openllm build dolly-v2
+   openllm build mistralai/Mistral-7B-Instruct-v0.1
    ```

 4. **Pushing a Bento**: Push your freshly-built Bento service to BentoCloud via
@@ -1,6 +1,7 @@
 from __future__ import annotations
 import os


 model_id = os.environ['OPENLLM_MODEL_ID'] # openllm: model name
+model_tag = None # openllm: model tag
 adapter_map = os.environ['OPENLLM_ADAPTER_MAP'] # openllm: model adapter map
pnpm-lock.yaml (130 lines changed, generated)
@@ -1,5 +1,9 @@
 lockfileVersion: '6.0'

+settings:
+  autoInstallPeers: true
+  excludeLinksFromLockfile: false
+
 overrides:
   vitest: ^0.27.1
   react: ^18.2.0

@@ -31,7 +35,7 @@ importers:
       specifier: 6.8.0
       version: 6.8.0(eslint@8.53.0)(typescript@5.2.2)
     eslint:
-      specifier: 8.53.0
+      specifier: ^8.53.0
       version: 8.53.0
     eslint-config-prettier:
       specifier: 9.0.0

@@ -52,7 +56,7 @@ importers:
       specifier: 48.0.1
       version: 48.0.1(eslint@8.53.0)
     prettier:
-      specifier: 3.0.3
+      specifier: ^3.0.3
       version: 3.0.3
     prettier-plugin-pkg:
       specifier: 0.18.0

@@ -102,7 +106,7 @@ importers:
   devDependencies:
     '@svgr/webpack':
       specifier: '*'
-      version: 8.1.0(typescript@5.1.6)
+      version: 8.1.0
    '@types/dedent':
       specifier: ^0.7.1
       version: 0.7.1

@@ -131,7 +135,7 @@ importers:
       specifier: '*'
       version: 5.1.6

-  openllm-contrib/clojure:
+  external/clojure:
     dependencies:
       '@babel/runtime':
         specifier: ^7.23.2

@@ -302,6 +306,13 @@ packages:
       '@jridgewell/gen-mapping': 0.3.3
       '@jridgewell/trace-mapping': 0.3.19

+  /@babel/code-frame@7.22.10:
+    resolution: {integrity: sha512-/KKIMG4UEL35WmI9OlvMhurwtytjvXoFcGNrOvyG9zIzA8YmPjVtIZUf7b05+TPO7G7/GEmLHDaoCgACHl9hhA==}
+    engines: {node: '>=6.9.0'}
+    dependencies:
+      '@babel/highlight': 7.22.10
+      chalk: 2.4.2
+
   /@babel/code-frame@7.22.13:
     resolution: {integrity: sha512-XktuhWlJ5g+3TJXc5upd9Ks1HutSArik6jf2eAjYFyIOf4ej3RN+184cZbzDvbPnuTJIUhPKKJE3cIsYTiAT3w==}
     engines: {node: '>=6.9.0'}

@@ -443,6 +454,12 @@ packages:
     dependencies:
       '@babel/types': 7.22.17

+  /@babel/helper-module-imports@7.22.5:
+    resolution: {integrity: sha512-8Dl6+HD/cKifutF5qGd/8ZJi84QeAKh+CEe1sBzz8UayBBGg1dAIJrdHOcOM5b2MpzWL2yuotJTtGjETq0qjXg==}
+    engines: {node: '>=6.9.0'}
+    dependencies:
+      '@babel/types': 7.22.11
+
   /@babel/helper-module-transforms@7.23.0(@babel/core@7.22.17):
     resolution: {integrity: sha512-WhDWw1tdrlT0gMgUJSlX0IQvoO1eN279zrAUbVB+KpV2c3Tylz8+GnKOLllCS6Z/iZQEyVYxhZVUdPTqs2YYPw==}
     engines: {node: '>=6.9.0'}

@@ -522,7 +539,6 @@ packages:
   /@babel/helper-validator-identifier@7.22.5:
     resolution: {integrity: sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==}
     engines: {node: '>=6.9.0'}
-    dev: true

   /@babel/helper-validator-option@7.22.15:
     resolution: {integrity: sha512-bMn7RmyFjY/mdECUbgn9eoSY4vqvacUnS9i9vGAGttgFWesO6B4CYWA7XlpbWgBt71iv/hfbPlynohStqnu5hA==}

@@ -547,11 +563,19 @@ packages:
     transitivePeerDependencies:
       - supports-color

+  /@babel/highlight@7.22.10:
+    resolution: {integrity: sha512-78aUtVcT7MUscr0K5mIEnkwxPE0MaxkR5RxRwuHaQ+JuU5AmTPhY+do2mdzVTnIJJpyBglql2pehuBIWHug+WQ==}
+    engines: {node: '>=6.9.0'}
+    dependencies:
+      '@babel/helper-validator-identifier': 7.22.5
+      chalk: 2.4.2
+      js-tokens: 4.0.0
+
   /@babel/highlight@7.22.13:
     resolution: {integrity: sha512-C/BaXcnnvBCmHTpz/VGZ8jgtE2aYlW4hxDhseJAWZb7gqGM/qtCK6iZUb0TyKFf7BOUsBH7Q7fkRsDRhg1XklQ==}
     engines: {node: '>=6.9.0'}
     dependencies:
-      '@babel/helper-validator-identifier': 7.22.20
+      '@babel/helper-validator-identifier': 7.22.5
       chalk: 2.4.2
       js-tokens: 4.0.0

@@ -812,7 +836,7 @@ packages:
       '@babel/core': ^7.0.0-0
     dependencies:
       '@babel/core': 7.22.17
-      '@babel/helper-module-imports': 7.22.15
+      '@babel/helper-module-imports': 7.22.5
       '@babel/helper-plugin-utils': 7.22.5
       '@babel/helper-remap-async-to-generator': 7.22.17(@babel/core@7.22.17)
     dev: true

@@ -1050,7 +1074,7 @@ packages:
       '@babel/helper-hoist-variables': 7.22.5
       '@babel/helper-module-transforms': 7.23.0(@babel/core@7.22.17)
       '@babel/helper-plugin-utils': 7.22.5
-      '@babel/helper-validator-identifier': 7.22.20
+      '@babel/helper-validator-identifier': 7.22.5
     dev: true

   /@babel/plugin-transform-modules-umd@7.22.5(@babel/core@7.22.17):

@@ -1548,6 +1572,14 @@ packages:
     transitivePeerDependencies:
       - supports-color

+  /@babel/types@7.22.11:
+    resolution: {integrity: sha512-siazHiGuZRz9aB9NpHy9GOs9xiQPKnMzgdr493iI1M67vRXpnEq8ZOOKzezC5q7zwuQ6sDhdSp4SD9ixKSqKZg==}
+    engines: {node: '>=6.9.0'}
+    dependencies:
+      '@babel/helper-string-parser': 7.22.5
+      '@babel/helper-validator-identifier': 7.22.5
+      to-fast-properties: 2.0.0
+
   /@babel/types@7.22.17:
     resolution: {integrity: sha512-YSQPHLFtQNE5xN9tHuZnzu8vPr61wVTBZdfv1meex1NBosa4iT05k/Jw06ddJugi4bk7The/oSwQGFcksmEJQg==}
     engines: {node: '>=6.9.0'}

@@ -1575,7 +1607,7 @@ packages:
   /@emotion/babel-plugin@11.11.0:
     resolution: {integrity: sha512-m4HEDZleaaCH+XgDDsPF15Ht6wTLsgDTeR3WYj9Q/k76JtWhrJjcP4+/XlG8LGT/Rol9qUfOIztXeA84ATpqPQ==}
     dependencies:
-      '@babel/helper-module-imports': 7.22.15
+      '@babel/helper-module-imports': 7.22.5
       '@babel/runtime': 7.23.2
       '@emotion/hash': 0.9.1
       '@emotion/memoize': 0.8.1

@@ -1925,7 +1957,7 @@ packages:
       '@babel/runtime': 7.23.2
       '@floating-ui/react-dom': 2.0.2(react-dom@18.2.0)(react@18.2.0)
       '@mui/types': 7.2.4
-      '@mui/utils': 5.14.10(react@18.2.0)
+      '@mui/utils': 5.14.13(react@18.2.0)
       '@popperjs/core': 2.11.8
       clsx: 2.0.0
       prop-types: 15.8.1

@@ -1977,7 +2009,7 @@ packages:
       '@mui/core-downloads-tracker': 5.14.9
       '@mui/system': 5.14.10(@emotion/react@11.11.1)(@emotion/styled@11.11.0)(react@18.2.0)
       '@mui/types': 7.2.4
-      '@mui/utils': 5.14.10(react@18.2.0)
+      '@mui/utils': 5.14.13(react@18.2.0)
       '@types/react-transition-group': 4.4.7
       clsx: 2.0.0
       csstype: 3.1.2

@@ -1999,7 +2031,7 @@ packages:
         optional: true
     dependencies:
       '@babel/runtime': 7.23.2
-      '@mui/utils': 5.14.10(react@18.2.0)
+      '@mui/utils': 5.14.13(react@18.2.0)
       prop-types: 15.8.1
       react: 18.2.0
     dev: false

@@ -2048,7 +2080,7 @@ packages:
       '@mui/private-theming': 5.14.10(react@18.2.0)
       '@mui/styled-engine': 5.14.10(@emotion/react@11.11.1)(@emotion/styled@11.11.0)(react@18.2.0)
       '@mui/types': 7.2.4
-      '@mui/utils': 5.14.10(react@18.2.0)
+      '@mui/utils': 5.14.13(react@18.2.0)
       clsx: 2.0.0
       csstype: 3.1.2
       prop-types: 15.8.1

@@ -2163,7 +2195,7 @@ packages:
       '@mui/base': 5.0.0-beta.16(react-dom@18.2.0)(react@18.2.0)
       '@mui/material': 5.14.9(@emotion/react@11.11.1)(@emotion/styled@11.11.0)(react-dom@18.2.0)(react@18.2.0)
       '@mui/system': 5.14.10(@emotion/react@11.11.1)(@emotion/styled@11.11.0)(react@18.2.0)
-      '@mui/utils': 5.14.10(react@18.2.0)
+      '@mui/utils': 5.14.13(react@18.2.0)
       '@types/react-transition-group': 4.4.7
       clsx: 2.0.0
       prop-types: 15.8.1

@@ -2506,18 +2538,17 @@ packages:
       '@svgr/babel-plugin-transform-svg-component': 8.0.0(@babel/core@7.22.17)
     dev: true

-  /@svgr/core@8.1.0(typescript@5.1.6):
+  /@svgr/core@8.1.0:
     resolution: {integrity: sha512-8QqtOQT5ACVlmsvKOJNEaWmRPmcojMOzCz4Hs2BGG/toAp/K38LcsMRyLp349glq5AzJbCEeimEoxaX6v/fLrA==}
     engines: {node: '>=14'}
     dependencies:
       '@babel/core': 7.22.17
       '@svgr/babel-preset': 8.1.0(@babel/core@7.22.17)
       camelcase: 6.3.0
-      cosmiconfig: 8.3.6(typescript@5.1.6)
+      cosmiconfig: 8.2.0
       snake-case: 3.0.4
     transitivePeerDependencies:
       - supports-color
-      - typescript
     dev: true

   /@svgr/hast-util-to-babel-ast@8.0.0:

@@ -2536,28 +2567,26 @@ packages:
     dependencies:
       '@babel/core': 7.22.17
       '@svgr/babel-preset': 8.1.0(@babel/core@7.22.17)
-      '@svgr/core': 8.1.0(typescript@5.1.6)
+      '@svgr/core': 8.1.0
       '@svgr/hast-util-to-babel-ast': 8.0.0
       svg-parser: 2.0.4
     transitivePeerDependencies:
       - supports-color
     dev: true

-  /@svgr/plugin-svgo@8.1.0(@svgr/core@8.1.0)(typescript@5.1.6):
+  /@svgr/plugin-svgo@8.1.0(@svgr/core@8.1.0):
     resolution: {integrity: sha512-Ywtl837OGO9pTLIN/onoWLmDQ4zFUycI1g76vuKGEz6evR/ZTJlJuz3G/fIkb6OVBJ2g0o6CGJzaEjfmEo3AHA==}
     engines: {node: '>=14'}
     peerDependencies:
       '@svgr/core': '*'
     dependencies:
-      '@svgr/core': 8.1.0(typescript@5.1.6)
-      cosmiconfig: 8.3.6(typescript@5.1.6)
+      '@svgr/core': 8.1.0
+      cosmiconfig: 8.2.0
       deepmerge: 4.3.1
       svgo: 3.0.2
-    transitivePeerDependencies:
-      - typescript
     dev: true

-  /@svgr/webpack@8.1.0(typescript@5.1.6):
+  /@svgr/webpack@8.1.0:
     resolution: {integrity: sha512-LnhVjMWyMQV9ZmeEy26maJk+8HTIbd59cH4F2MJ439k9DqejRisfFNGAPvRYlKETuh9LrImlS8aKsBgKjMA8WA==}
     engines: {node: '>=14'}
     dependencies:

@@ -2566,12 +2595,11 @@ packages:
       '@babel/preset-env': 7.22.15(@babel/core@7.22.17)
       '@babel/preset-react': 7.22.15(@babel/core@7.22.17)
       '@babel/preset-typescript': 7.23.0(@babel/core@7.22.17)
-      '@svgr/core': 8.1.0(typescript@5.1.6)
+      '@svgr/core': 8.1.0
       '@svgr/plugin-jsx': 8.1.0(@svgr/core@8.1.0)
-      '@svgr/plugin-svgo': 8.1.0(@svgr/core@8.1.0)(typescript@5.1.6)
+      '@svgr/plugin-svgo': 8.1.0(@svgr/core@8.1.0)
     transitivePeerDependencies:
       - supports-color
-      - typescript
     dev: true

   /@swc/helpers@0.5.1:

@@ -2710,6 +2738,7 @@ packages:

   /@types/prop-types@15.7.8:
     resolution: {integrity: sha512-kMpQpfZKSCBqltAJwskgePRaYRFukDkm1oItcAbC3gNELR20XIBcN9VRgg4+m8DKsTfkWeA4m4Imp4DDuWy7FQ==}
+    dev: false

   /@types/react-dom@18.0.6:
     resolution: {integrity: sha512-/5OFZgfIPSwy+YuIBP/FgJnQnsxhZhjjrnxudMddeblOouIodEQ75X14Rr4wGSG/bknL+Omy9iWlLo1u/9GzAA==}

@@ -2720,7 +2749,7 @@ packages:
   /@types/react-transition-group@4.4.7:
     resolution: {integrity: sha512-ICCyBl5mvyqYp8Qeq9B5G/fyBSRC0zx3XM3sCC6KkcMsNeAHqXBKkmat4GqdJET5jtYUpZXrxI5flve5qhi2Eg==}
     dependencies:
-      '@types/react': 18.2.35
+      '@types/react': 18.2.20

   /@types/react@18.2.20:
     resolution: {integrity: sha512-WKNtmsLWJM/3D5mG4U84cysVY31ivmyw85dE84fOCk5Hx78wezB/XEjVPWl2JTZ5FkEeaTJf+VgUAUn3PE7Isw==}

@@ -2728,12 +2757,11 @@ packages:
       '@types/prop-types': 15.7.5
       '@types/scheduler': 0.16.3
       csstype: 3.1.2
-    dev: true

   /@types/react@18.2.35:
     resolution: {integrity: sha512-LG3xpFZ++rTndV+/XFyX5vUP7NI9yxyk+MQvBDq+CVs8I9DLSc3Ymwb1Vmw5YDoeNeHN4PDZa3HylMKJYT9PNQ==}
     dependencies:
-      '@types/prop-types': 15.7.8
+      '@types/prop-types': 15.7.5
       '@types/scheduler': 0.16.3
       csstype: 3.1.2

@@ -3348,7 +3376,7 @@ packages:
   /call-bind@1.0.2:
     resolution: {integrity: sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==}
     dependencies:
-      function-bind: 1.1.2
+      function-bind: 1.1.1
       get-intrinsic: 1.2.1
     dev: true

@@ -3579,20 +3607,14 @@ packages:
       yaml: 1.10.2
     dev: false

-  /cosmiconfig@8.3.6(typescript@5.1.6):
-    resolution: {integrity: sha512-kcZ6+W5QzcJ3P1Mt+83OUv/oHFqZHIx8DuxG6eZ5RGMERoLqp4BuGjhHLYGK+Kf5XVkQvqBSmAy/nGWN3qDgEA==}
+  /cosmiconfig@8.2.0:
+    resolution: {integrity: sha512-3rTMnFJA1tCOPwRxtgF4wd7Ab2qvDbL8jX+3smjIbS4HlZBagTlpERbdN7iAbWlrfxE3M8c27kTwTawQ7st+OQ==}
     engines: {node: '>=14'}
-    peerDependencies:
-      typescript: '>=4.9.5'
-    peerDependenciesMeta:
-      typescript:
-        optional: true
     dependencies:
       import-fresh: 3.3.0
       js-yaml: 4.1.0
       parse-json: 5.2.0
       path-type: 4.0.0
-      typescript: 5.1.6
     dev: true

   /create-ecdh@4.0.4:

@@ -4353,7 +4375,7 @@ packages:
       define-properties: 1.2.0
       es-abstract: 1.22.1
       es-set-tostringtag: 2.0.1
-      function-bind: 1.1.2
+      function-bind: 1.1.1
       get-intrinsic: 1.2.1
       globalthis: 1.0.3
       has-property-descriptors: 1.0.0

@@ -4422,7 +4444,7 @@ packages:
     resolution: {integrity: sha512-WFj2isz22JahUv+B788TlO3N6zL3nNJGU8CcZbPZvVEkBPaJdCV4vy5wyghty5ROFbCRnm132v8BScu5/1BQ8g==}
     dependencies:
       debug: 3.2.7
-      is-core-module: 2.13.1
+      is-core-module: 2.13.0
       resolve: 1.22.4
     transitivePeerDependencies:
       - supports-color

@@ -4901,8 +4923,12 @@ packages:
     requiresBuild: true
     optional: true

+  /function-bind@1.1.1:
+    resolution: {integrity: sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==}
+
   /function-bind@1.1.2:
     resolution: {integrity: sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==}
+    dev: true

   /function.prototype.name@1.1.5:
     resolution: {integrity: sha512-uN7m/BzVKQnCUF/iW8jYea67v++2u7m5UgENbHRtdDVclOUP+FMPlCNdmk0h/ysGyo2tavMJEDqJAkJdRa1vMA==}

@@ -4930,7 +4956,7 @@ packages:
   /get-intrinsic@1.2.1:
     resolution: {integrity: sha512-2DcsyfABl+gVHEfCOaTrWgyt+tb6MSEGmKq+kI5HwLbIYgjgmMcV8KQ41uaKz1xxUcn9tJtgFbQUEVcEbd0FYw==}
     dependencies:
-      function-bind: 1.1.2
+      function-bind: 1.1.1
       has: 1.0.3
       has-proto: 1.0.1
       has-symbols: 1.0.3

@@ -5128,8 +5154,7 @@ packages:
     resolution: {integrity: sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==}
     engines: {node: '>= 0.4.0'}
     dependencies:
-      function-bind: 1.1.2
-    dev: true
+      function-bind: 1.1.1

   /hash-base@3.1.0:
     resolution: {integrity: sha512-1nmYp/rhMDiE7AYkDw+lLwlAzz0AntGIe51F3RfFfEqyQ3feY2eI/NcwC6umIQVOASPMsWJLJScWKSSvzL9IVA==}

@@ -5161,6 +5186,7 @@ packages:
     engines: {node: '>= 0.4'}
     dependencies:
       function-bind: 1.1.2
+    dev: true

   /hast-util-from-dom@5.0.0:
     resolution: {integrity: sha512-d6235voAp/XR3Hh5uy7aGLbM3S4KamdW0WEgOaU1YoewnuYw4HXb5eRtv9g65m/RFGEfUY1Mw4UqCc5Y8L4Stg==}

@@ -5468,10 +5494,16 @@ packages:
     engines: {node: '>= 0.4'}
     dev: true

+  /is-core-module@2.13.0:
+    resolution: {integrity: sha512-Z7dk6Qo8pOCp3l4tsX2C5ZVas4V+UxwQodwZhLopL91TX8UyyHEXafPcyoeeWuLrwzHcr3igO78wNLwHJHsMCQ==}
+    dependencies:
+      has: 1.0.3
+
   /is-core-module@2.13.1:
     resolution: {integrity: sha512-hHrIjvZsftOsvKSn2TRYl63zvxsgE0K+0mYMoH6gD4omR5IWB2KynivBQczo3+wF1cCkjzvptnI9Q0sPU66ilw==}
     dependencies:
       hasown: 2.0.0
+    dev: true

   /is-date-object@1.0.5:
     resolution: {integrity: sha512-9YQaSxsAiSwcvS33MBk3wTCVnWK+HhF8VZR2jRxehM16QcVOdHqPn4VPHmRK4lSr38n9JriurInLcP90xsYNfQ==}

@@ -7041,7 +7073,7 @@ packages:
     resolution: {integrity: sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==}
     engines: {node: '>=8'}
     dependencies:
-      '@babel/code-frame': 7.22.13
+      '@babel/code-frame': 7.22.10
       error-ex: 1.3.2
       json-parse-even-better-errors: 2.3.1
       lines-and-columns: 1.2.4

@@ -7987,7 +8019,7 @@ packages:
     resolution: {integrity: sha512-PXNdCiPqDqeUou+w1C2eTQbNfxKSuMxqTCuvlmmMsk1NWHL5fRrhY6Pl0qEYYc6+QqGClco1Qj8XnjPego4wfg==}
     hasBin: true
     dependencies:
-      is-core-module: 2.13.1
+      is-core-module: 2.13.0
       path-parse: 1.0.7
       supports-preserve-symlinks-flag: 1.0.0

@@ -7995,7 +8027,7 @@ packages:
     resolution: {integrity: sha512-iMDbmAWtfU+MHpxt/I5iWI7cY6YVEZUQ3MBgPQ++XD1PELuJHIl82xBmObyP2KyQmkNB2dsqF7seoQQiAn5yDQ==}
     hasBin: true
     dependencies:
-      is-core-module: 2.13.1
+      is-core-module: 2.13.0
       path-parse: 1.0.7
       supports-preserve-symlinks-flag: 1.0.0
     dev: true

@@ -9241,7 +9273,3 @@ packages:
   /zwitch@2.0.4:
     resolution: {integrity: sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==}
     dev: false
-
-settings:
-  autoInstallPeers: true
-  excludeLinksFromLockfile: false
tsconfig.json (deleted)

@@ -1,23 +0,0 @@
-{
-  "compilerOptions": {
-    "allowJs": true,
-    "declaration": true,
-    "lib": ["esnext", "dom", "dom.Iterable"],
-    "experimentalDecorators": true,
-    "module": "esnext",
-    "target": "esnext",
-    "moduleResolution": "node",
-    "strict": true,
-    "incremental": true,
-    "resolveJsonModule": true,
-    "outDir": "./dist",
-    "skipLibCheck": true,
-    "stripInternal": true,
-    "allowSyntheticDefaultImports": true,
-    "forceConsistentCasingInFileNames": true,
-    "jsx": "preserve",
-    "noEmit": true,
-    "esModuleInterop": true,
-    "isolatedModules": true
-  }
-}