
TinyChatEngine: On-Device LLM Inference Library
https://mit-han-lab.github.io/TinyChatEngine/
MIT License

int4 Intel/M1 kernels #3

Closed: meenchen closed this 1 year ago

meenchen commented 1 year ago

POC to support int4 LLaMA. Note that we should use a more elegant approach in the future, once we have a stable implementation.
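
For context, a minimal scalar sketch of the general int4-weight dot-product idea that kernels like these would vectorize with AVX (Intel) or NEON (M1): weights are packed two 4-bit values per byte and dequantized with per-group scale and zero point before multiplying with float activations. All names here (`QuantBlock`, `GROUP_SIZE`, `int4_dot`) and the exact quantization format are hypothetical illustrations, not the PR's actual kernel code or TinyChatEngine's weight layout.

```cpp
// Scalar reference for an int4 weight x fp32 activation dot product.
// Hypothetical format: two 4-bit weights per byte, group-wise scale/zero.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int GROUP_SIZE = 32;  // hypothetical quantization group size

struct QuantBlock {
    float scale;                     // per-group dequantization scale
    float zero;                      // per-group zero point
    uint8_t packed[GROUP_SIZE / 2];  // two 4-bit weights per byte
};

// Dot product of one quantized weight row with a float activation vector.
// A SIMD kernel would unpack and multiply-accumulate many lanes at once.
float int4_dot(const QuantBlock* blocks, const float* x, int n) {
    float acc = 0.0f;
    for (int g = 0; g < n / GROUP_SIZE; ++g) {
        const QuantBlock& b = blocks[g];
        for (int i = 0; i < GROUP_SIZE / 2; ++i) {
            // Unpack the low and high nibbles, then dequantize each.
            float w0 = ((b.packed[i] & 0x0F) - b.zero) * b.scale;
            float w1 = ((b.packed[i] >> 4) - b.zero) * b.scale;
            acc += w0 * x[g * GROUP_SIZE + 2 * i];
            acc += w1 * x[g * GROUP_SIZE + 2 * i + 1];
        }
    }
    return acc;
}

int main() {
    const int n = 64;
    std::vector<QuantBlock> row(n / GROUP_SIZE);
    std::vector<float> x(n, 1.0f);
    for (auto& b : row) {
        b.scale = 0.1f;
        b.zero = 0.0f;
        for (auto& p : b.packed) p = 0x21;  // nibbles 1 and 2
    }
    printf("dot = %f\n", int4_dot(row.data(), x.data(), n));  // 9.6
}
```

The scalar loop is the easy part; the POC-versus-elegant tension the comment alludes to typically comes from maintaining separate platform-specific unpack/FMA paths (AVX intrinsics on Intel, NEON on M1) behind one interface.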