hao-ai-lab / LookaheadDecoding

[ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
https://arxiv.org/abs/2402.02057
Apache License 2.0
1.15k stars 67 forks

Is there a plan to rebuild the code in a clear style? #9

Closed: hhhh12345678 closed this issue 11 months ago

hhhh12345678 commented 1 year ago

A brilliant idea! But rewriting the code with clearer logic and complete annotations would make it even better~

Viol2000 commented 1 year ago

Thank you for the constructive feedback! I agree that clearer logic and better annotations would improve the project. I'll work on these aspects in future updates.