microsoft / MInference

To speed up long-context LLMs' inference, MInference approximates attention with dynamic sparse computation, which reduces pre-filling latency by up to 10x on an A100 while maintaining accuracy.
https://aka.ms/MInference
MIT License
573 stars · 20 forks
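
The project description refers to dynamic sparse attention for the pre-filling stage. As a rough intuition only, here is a minimal block-sparse sketch of that general idea; it is not MInference's implementation or API. The function name, the mean-pooled block scoring, and the `block`/`topk` parameters are all illustrative assumptions: the actual library identifies per-head sparse patterns and executes them with optimized GPU kernels.

```python
# Illustrative sketch of dynamic block-sparse attention (NOT MInference's kernels).
import torch
import torch.nn.functional as F

def dynamic_block_sparse_attention(q, k, v, block=64, topk=4):
    """q, k, v: (seq_len, head_dim), seq_len divisible by `block`."""
    n, d = q.shape
    nb = n // block
    qb, kb, vb = (x.view(nb, block, d) for x in (q, k, v))

    # Cheap proxy for the attention pattern: score key blocks against
    # query blocks using their mean vectors instead of the full matrix.
    scores = (qb.mean(1) @ kb.mean(1).T) / d ** 0.5  # (nb, nb)

    out = torch.zeros_like(qb)
    for i in range(nb):
        # Dynamically keep only the top-k causal key blocks for this query block.
        sel = torch.topk(scores[i, : i + 1], k=min(topk, i + 1)).indices
        ks, vs = kb[sel].reshape(-1, d), vb[sel].reshape(-1, d)
        # Token-level causal mask over the gathered key positions.
        key_pos = (sel[:, None] * block + torch.arange(block)).reshape(-1)
        qry_pos = i * block + torch.arange(block)
        logits = (qb[i] @ ks.T) / d ** 0.5
        logits = logits.masked_fill(key_pos[None, :] > qry_pos[:, None], float("-inf"))
        out[i] = F.softmax(logits, dim=-1) @ vs
    return out.view(n, d)

# Example: 1024-token sequence, 64-dim head.
q, k, v = (torch.randn(1024, 64) for _ in range(3))
print(dynamic_block_sparse_attention(q, k, v).shape)  # torch.Size([1024, 64])
```

The cheap block-level scoring pass is what makes this "dynamic": the sparse pattern is re-estimated from the input itself rather than fixed ahead of time, which is how such methods cut pre-filling cost without a fixed sparsity budget per input.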

Prerelease(MInference): update version #15

Closed. iofu728 closed this 2 weeks ago.

iofu728 commented 2 weeks ago