hao-ai-lab / LookaheadDecoding

[ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
https://arxiv.org/abs/2402.02057
Apache License 2.0

Broken links in ReadMe #4

Closed bilal-aamer closed 1 year ago

bilal-aamer commented 1 year ago

I was looking at the "Use In Your Own Code" section and noticed its link doesn't point to the right section. The same issue applies to the "Inference" and "Install From The Source" links at L103-L105.