OpenLLM/typings/vllm/cache_ops.pyi
aarnphm-ec2-dev 820b4991fa chore(stubs): add generated for auto-gptq and vllm [skip ci]
This is to help with working on a CPU machine.

Signed-off-by: aarnphm-ec2-dev <29749331+aarnphm@users.noreply.github.com>
2023-08-03 02:28:24 +00:00


from typing import Dict
from typing import List
import torch
# Auto-generated stubs for vLLM's compiled CUDA cache kernels; the pybind11
# bindings expose unnamed positional parameters, hence the generic arg0..argN names.
def copy_blocks(arg0: List[torch.Tensor], arg1: List[torch.Tensor], arg2: Dict[int, List[int]]) -> None: ...
def gather_cached_kv(arg0: torch.Tensor, arg1: torch.Tensor, arg2: torch.Tensor, arg3: torch.Tensor, arg4: torch.Tensor) -> None: ...
def reshape_and_cache(arg0: torch.Tensor, arg1: torch.Tensor, arg2: torch.Tensor, arg3: torch.Tensor, arg4: torch.Tensor) -> None: ...
def swap_blocks(arg0: torch.Tensor, arg1: torch.Tensor, arg2: Dict[int, int]) -> None: ...
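
The compiled vllm.cache_ops extension only builds against CUDA, so checking call sites on a CPU-only machine relies on this stub being on the type checker's search path (for pyright, typings/ is the default stubPath). Below is a minimal sketch of what that enables; the swap_out helper and the tensor/block-mapping shapes are illustrative assumptions, not code from the repository:

from typing import Dict, List

import torch

from vllm import cache_ops  # signatures resolved from typings/vllm/cache_ops.pyi

def swap_out(gpu_cache: List[torch.Tensor], cpu_cache: List[torch.Tensor],
             block_mapping: Dict[int, int]) -> None:
    # Copy whole KV-cache blocks from each GPU layer tensor to its CPU
    # counterpart, block i going to block block_mapping[i]. With the stub in
    # place, pyright/mypy can verify these argument types even though the
    # CUDA extension itself cannot be imported on this machine.
    for gpu_layer, cpu_layer in zip(gpu_cache, cpu_cache):
        cache_ops.swap_blocks(gpu_layer, cpu_layer, block_mapping)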