fix(baichuan): only support baichuan 2 from now on (#728)

* config support multiple architectures

* chore: only support baichuan2 from now on

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>

* chore: update notes

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>

* chore: run script [skip ci]

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>

---------

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
Co-authored-by: Aaron <29749331+aarnphm@users.noreply.github.com>
Author: MingLiangDai
Date: 2023-11-24 02:07:06 -05:00
Committed by: GitHub
Parent: 39ecc73a50
Commit: 7b8d9024c4
5 changed files with 18 additions and 24 deletions


@@ -226,12 +226,10 @@ openllm query 'What are large language models?'
You can specify any of the following Baichuan models via `openllm start`:
-- [baichuan-inc/baichuan-7b](https://huggingface.co/baichuan-inc/baichuan-7b)
-- [baichuan-inc/baichuan-13b-base](https://huggingface.co/baichuan-inc/baichuan-13b-base)
-- [baichuan-inc/baichuan-13b-chat](https://huggingface.co/baichuan-inc/baichuan-13b-chat)
-- [fireballoon/baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b)
-- [fireballoon/baichuan-vicuna-7b](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
-- [hiyouga/baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft)
+- [baichuan-inc/baichuan2-7b-base](https://huggingface.co/baichuan-inc/baichuan2-7b-base)
+- [baichuan-inc/baichuan2-7b-chat](https://huggingface.co/baichuan-inc/baichuan2-7b-chat)
+- [baichuan-inc/baichuan2-13b-base](https://huggingface.co/baichuan-inc/baichuan2-13b-base)
+- [baichuan-inc/baichuan2-13b-chat](https://huggingface.co/baichuan-inc/baichuan2-13b-chat)
### Supported backends
@@ -249,7 +247,7 @@ OpenLLM will support vLLM and PyTorch as default backend. By default, it will us
To install vLLM, run `pip install "openllm[vllm]"`
```bash
-TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend vllm
+TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan2-7b-base --backend vllm
```
@@ -264,7 +262,7 @@ TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend vllm
```bash
-TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend pt
+TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan2-7b-base --backend pt
```
</details>


@@ -0,0 +1 @@
+Only baichuan2 and baichuan3 are supported. We dropped baichuan 1 support


@@ -42,7 +42,6 @@ CONFIG_MAPPING_NAMES = OrderedDict(
)
)
class _LazyConfigMapping(OrderedDictType, ReprMixin):
def __init__(self, mapping: OrderedDict[LiteralString, LiteralString]):
self._mapping = mapping
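The `_LazyConfigMapping` snippet above stores a plain name-to-name mapping and resolves config classes only when they are first looked up, rather than importing everything at module load. A minimal runnable sketch of that pattern, assuming an illustrative registry dict in place of OpenLLM's real lazy imports:

```python
from collections import OrderedDict

class BaichuanConfig:
    """Stand-in config class; the real code imports these from per-model modules."""

# Hypothetical registry standing in for lazy module imports.
_REGISTRY = {'BaichuanConfig': BaichuanConfig}

class LazyConfigMapping:
    """Values are class *names*; they are resolved to classes on first access
    and cached, so unused configs are never materialized."""
    def __init__(self, mapping):
        self._mapping = OrderedDict(mapping)
        self._cache = {}

    def __getitem__(self, key):
        if key not in self._cache:
            # Resolution happens here, on first lookup, not at construction.
            self._cache[key] = _REGISTRY[self._mapping[key]]
        return self._cache[key]

    def __iter__(self):
        return iter(self._mapping)

    def __len__(self):
        return len(self._mapping)
```

Repeated lookups return the cached class, so the import cost is paid at most once per model name.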


@@ -21,19 +21,17 @@ class BaichuanConfig(openllm_core.LLMConfig):
'url': 'https://github.com/baichuan-inc/Baichuan-7B',
'requirements': ['cpm-kernels'],
'backend': ('pt', 'vllm'),
-'architecture': 'BaiChuanForCausalLM',
+'architecture': 'BaichuanForCausalLM',
# NOTE: See the following
# https://huggingface.co/baichuan-inc/Baichuan-13B-Chat/blob/19ef51ba5bad8935b03acd20ff04a269210983bc/modeling_baichuan.py#L555
# https://huggingface.co/baichuan-inc/Baichuan-13B-Chat/blob/main/generation_config.json
# https://github.com/baichuan-inc/Baichuan-13B/issues/25
'default_id': 'baichuan-inc/baichuan-7b',
'model_ids': [
-'baichuan-inc/baichuan-7b',
-'baichuan-inc/baichuan-13b-base',
-'baichuan-inc/baichuan-13b-chat',
-'fireballoon/baichuan-vicuna-chinese-7b',
-'fireballoon/baichuan-vicuna-7b',
-'hiyouga/baichuan-7b-sft',
+'baichuan-inc/baichuan2-7b-base',
+'baichuan-inc/baichuan2-7b-chat',
+'baichuan-inc/baichuan2-13b-base',
+'baichuan-inc/baichuan2-13b-chat',
],
}
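The `architecture` field in this config is what ties it to checkpoints on the Hub: a model's `config.json` declares its model class under `architectures`, and Baichuan 1 spelled it `BaiChuanForCausalLM` (capital C) while Baichuan 2 uses `BaichuanForCausalLM`, which is why the one-character change above drops Baichuan 1 support. A sketch of the kind of check this string enables; the function name and error message are illustrative, not OpenLLM's actual API:

```python
# Baichuan 2 checkpoints declare this spelling in config.json's "architectures".
SUPPORTED_ARCHITECTURES = {'BaichuanForCausalLM'}

def resolve_architecture(declared):
    """Return the first declared architecture we support, else raise.

    Baichuan 1 checkpoints declare the differently-cased 'BaiChuanForCausalLM',
    so they fail this lookup and are rejected.
    """
    for arch in declared:
        if arch in SUPPORTED_ARCHITECTURES:
            return arch
    raise ValueError(f'unsupported architectures: {declared!r}')
```

Because the match is an exact, case-sensitive string comparison, the casing fix in this commit is load-bearing rather than cosmetic.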


@@ -226,12 +226,10 @@ openllm query 'What are large language models?'
You can specify any of the following Baichuan models via `openllm start`:
-- [baichuan-inc/baichuan-7b](https://huggingface.co/baichuan-inc/baichuan-7b)
-- [baichuan-inc/baichuan-13b-base](https://huggingface.co/baichuan-inc/baichuan-13b-base)
-- [baichuan-inc/baichuan-13b-chat](https://huggingface.co/baichuan-inc/baichuan-13b-chat)
-- [fireballoon/baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b)
-- [fireballoon/baichuan-vicuna-7b](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
-- [hiyouga/baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft)
+- [baichuan-inc/baichuan2-7b-base](https://huggingface.co/baichuan-inc/baichuan2-7b-base)
+- [baichuan-inc/baichuan2-7b-chat](https://huggingface.co/baichuan-inc/baichuan2-7b-chat)
+- [baichuan-inc/baichuan2-13b-base](https://huggingface.co/baichuan-inc/baichuan2-13b-base)
+- [baichuan-inc/baichuan2-13b-chat](https://huggingface.co/baichuan-inc/baichuan2-13b-chat)
### Supported backends
@@ -249,7 +247,7 @@ OpenLLM will support vLLM and PyTorch as default backend. By default, it will us
To install vLLM, run `pip install "openllm[vllm]"`
```bash
-TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend vllm
+TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan2-7b-base --backend vllm
```
@@ -264,7 +262,7 @@ TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend vllm
```bash
-TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan-7b --backend pt
+TRUST_REMOTE_CODE=True openllm start baichuan-inc/baichuan2-7b-base --backend pt
```
</details>