Commit Graph

26 Commits

Author SHA1 Message Date
Aaron Pham
dfca956fad feat: serve adapter layers (#52) 2023-06-23 10:07:15 -04:00
Aaron Pham
03758a5487 fix(tools): adhere to style guidelines (#31) 2023-06-18 20:03:17 -04:00
Aaron Pham
4fcd7c8ac9 integration: HuggingFace Agent (#29)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-18 00:13:53 -04:00
Aaron Pham
6f724416c0 perf: build quantization and better transformer behaviour (#28)
Restricts quantization_config and low_cpu_mem_usage to the PyTorch implementation only

See changelog for more details on #28
2023-06-17 08:56:14 -04:00
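The restriction described in #28 can be sketched as a small guard that only passes the PyTorch-specific kwargs when the PyTorch backend is selected (the function name and `backend` parameter here are illustrative, not OpenLLM's actual API):

```python
def build_load_kwargs(backend: str, quantization_config=None, low_cpu_mem_usage=True) -> dict:
    """Collect model-loading kwargs, keeping PyTorch-only options off other backends."""
    kwargs = {}
    if backend == "pt":  # quantization_config / low_cpu_mem_usage are PyTorch-only
        if quantization_config is not None:
            kwargs["quantization_config"] = quantization_config
        kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage
    return kwargs
```

A TensorFlow or Flax backend would then receive an empty kwargs dict, avoiding `TypeError` from unsupported arguments.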
Aaron Pham
ded8a9f809 feat: quantization (#27) 2023-06-16 18:10:50 -04:00
Aaron
111d205f63 perf: faster LLM loading
use attrs for faster class creation, as opposed to a metaclass

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-14 01:36:42 -04:00
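The attrs approach mentioned above generates `__init__`, `__repr__`, and `__eq__` at class-definition time, so no custom metaclass is needed. A minimal sketch (field names are hypothetical, not OpenLLM's actual config class):

```python
import attr

@attr.s(auto_attribs=True, slots=True)
class LLMConfig:
    """Illustrative config: attrs builds the boilerplate from the annotations."""
    model_id: str
    max_new_tokens: int = 256
```

`slots=True` also gives each instance a fixed attribute layout, which reduces per-object overhead compared to a `__dict__`-based class.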
aarnphm-ec2-dev
81d46ca211 feat(type): support annotations
openllm.LLM is now fully type-strict

openllm.LLM[ModelType, TokenizerType] -> self.model, self.tokenizer

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-06-11 14:58:17 +00:00
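The subscript syntax in the message can be sketched with `typing.Generic`; `ModelType`/`TokenizerType` are placeholders, and the class body below is illustrative rather than OpenLLM's real implementation:

```python
from typing import Generic, TypeVar

M = TypeVar("M")  # model type
T = TypeVar("T")  # tokenizer type

class LLM(Generic[M, T]):
    def __init__(self, model: M, tokenizer: T) -> None:
        self.model: M = model          # type checkers see the concrete model type
        self.tokenizer: T = tokenizer  # and the concrete tokenizer type

# e.g. LLM[GPT2Model, GPT2Tokenizer] would type self.model as GPT2Model
```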
aarnphm-ec2-dev
2e453fb005 refactor(configuration): __config__ and perf
move model_ids and default_id to the config class declaration,
clean up dependencies between the config and the LLM implementation

defer module loading during LLM creation to llm_post_init

fix post_init hooks to run load_in_mha.

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-06-11 12:53:15 +00:00
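Deferring module loading as described above is commonly done with `importlib`; the sketch below shows the generic pattern (a proxy that imports on first attribute access), not OpenLLM's exact code:

```python
import importlib

class LazyModule:
    """Defer importing a heavy framework module until an attribute is first accessed."""
    def __init__(self, name: str) -> None:
        self._name = name
        self._module = None

    def __getattr__(self, attr: str):
        # Only called for attributes not found normally, so _name/_module don't recurse.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json_mod = LazyModule("json")  # nothing imported yet; import happens on first use
```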
aarnphm-ec2-dev
204a7ab7c9 revert(starcoder): quant 8
revert 2348946ada

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-06-10 23:17:42 +00:00
Aaron
05fa34f9e6 refactor: pretrained => model_id
I think model_id makes more sense than calling it pretrained

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-10 17:36:02 -04:00
aarnphm-ec2-dev
2348946ada fix(starcoder): disable quant 8
Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-06-10 10:01:43 +00:00
Aaron
afddaed08c fix(perf): respect per request information
remove the use_default_prompt_template option

add pretrained to the start help docstring

fix flax generation config

improve flax and tensorflow implementation

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-10 02:14:13 -04:00
Aaron
c0418b76ec feat(infra): add tools for managing optional-dependencies
based on llm config

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-08 08:57:19 -04:00
aarnphm-ec2-dev
e9e12a66a8 fix(falcon): custom load
Pipeline loading in transformers is fairly magical and broken,
so Falcon needs a custom load path

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-06-08 09:03:34 +00:00
Aaron
8823c70e5a chore: rename variants to pretrained for consistency
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-06 18:45:45 -04:00
Aaron
5a09b11519 refactor: implement a new interface for processing parameters
add documentation for fields

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-06-03 21:46:37 -07:00
Aaron Pham
01517e37c6 migration: attrs (#7)
Move configuration to attrs

Depends on https://github.com/bentoml/BentoML/pull/3906
2023-05-30 11:59:21 -07:00
Aaron
41706eee5b feat(save_model): passing tag
update with upstream bentoml

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-28 17:04:15 -07:00
Aaron
0df8d8b9a6 perf: reduce unnecessary object creation for config class
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-28 05:22:22 -07:00
Aaron
52d65f999f feat(telemetry): add support for usage tracking
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-27 20:39:13 -07:00
Aaron
fa895c329c feat: pre-commit setup
also sync JS release with Python version

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-27 06:54:22 -07:00
Aaron
c73732db6f fix(configuration): ensure GenerationInput dumps the correct
dictionary for llm_config

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-27 01:01:32 -07:00
aarnphm-ec2-dev
8ee5b048f3 feat(client): Async and Sync client
Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-26 22:51:21 -07:00
Aaron
b7f3a10910 refactor: migrate __init_subclass__ to Metaclass
LLMMetaclass is now responsible for generating internal attributes

add llm_type and identifying_params to the Runnable class

Subclasses of openllm.LLM can now set the class attribute
__openllm_internal__ to let openllm know that this is an internal class
implementation, instead of passing _internal at class
initialization.

support preprocess_parameters and postprocess_parameters on the client
side for better UX

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-27 03:09:45 +00:00
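The `__openllm_internal__` flag described above can be sketched as a metaclass that inspects a class attribute at definition time. `LLMMetaclass` and `__openllm_internal__` come from the message; the `is_internal` attribute and `DollyV2` subclass below are illustrative assumptions:

```python
class LLMMetaclass(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        # A subclass opts in by declaring the flag, instead of passing
        # _internal to the class initializer.
        cls.is_internal = bool(ns.get("__openllm_internal__", False))
        return cls

class LLM(metaclass=LLMMetaclass):
    pass

class DollyV2(LLM):  # hypothetical internal implementation
    __openllm_internal__ = True
```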
aarnphm-ec2-dev
4127961c5c feat: openllm.client
Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-05-26 07:17:28 +00:00
aarnphm-ec2-dev
5c416fa218 feat: StarCoder
Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
2023-05-25 16:22:07 -07:00