pratyushasharma / laser

The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
https://pratyushasharma.github.io/laser/

Feature Request for Upcoming Refactoring #9

Open dkmisra opened 8 months ago

dkmisra commented 8 months ago

This issue contains the list of all features planned for the upcoming refactoring:

  1. A unified abstract class that handles all the common work: creating command line arguments, building an LLM, and running the experiment. We may end up with just this abstract class and one file per LLM (or per LLM type). We may not be able to get it down to a single file, since certain LLMs like RoBERTa, which are really masked language models, have a different procedure for computing accuracy and log-loss from the tokens (see the abstract-class sketch after this list).

  2. Replace the use of rate with ρ, which is the notation used in the paper.

  3. Add a feature to reduce memory by storing the separate U, S, V matrices rather than multiplying them back together and losing the memory advantage (see the low-rank layer sketch after this list).

  4. Add more LLMs, specifically Mistral, other Llama 2 variants, and the Phi models.

  5. Release LLMs with the optimally chosen reductions from Table 3 of the paper: https://arxiv.org/pdf/2312.13558.pdf.
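
As a starting point for item 1, here is a minimal sketch of what the unified abstract class could look like. All names here (AbstractExperiment, make_llm, compute_metrics, --lname, --rho) are hypothetical placeholders rather than the repo's current API; the real interface will be settled during the refactoring.

```python
import argparse
from abc import ABC, abstractmethod


class AbstractExperiment(ABC):
    """Shared driver: build command line arguments, make an LLM, run the experiment."""

    def parse_args(self) -> argparse.Namespace:
        # Arguments shared by every experiment; subclasses add model-specific ones.
        parser = argparse.ArgumentParser()
        parser.add_argument("--lname", type=str, default="fc_in",
                            help="which layer type to reduce")
        parser.add_argument("--rho", type=float, default=1.0,
                            help="rank-reduction parameter rho from the paper")
        self.add_model_args(parser)
        return parser.parse_args()

    @abstractmethod
    def add_model_args(self, parser: argparse.ArgumentParser) -> None:
        """Register LLM-specific command line arguments."""

    @abstractmethod
    def make_llm(self, args: argparse.Namespace):
        """Instantiate the model/tokenizer for this LLM (or LLM type)."""

    @abstractmethod
    def compute_metrics(self, model, batch) -> dict:
        """Return accuracy and log-loss; masked LMs like RoBERTa would
        override this, since their token-level procedure differs."""

    def run(self) -> None:
        args = self.parse_args()
        model = self.make_llm(args)
        # ... apply the rank reduction and evaluate via compute_metrics ...
```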
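
For item 3, the memory saving comes from never re-multiplying the truncated factors: a rank-k approximation of a d_out × d_in weight stored as separate U, S, V costs k(d_out + d_in + 1) numbers instead of d_out · d_in. A rough PyTorch sketch of the idea (LowRankLinear is a hypothetical name, not code from this repo):

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Linear layer whose weight is kept factored as U @ diag(S) @ Vh."""

    def __init__(self, weight: torch.Tensor, k: int, bias: torch.Tensor = None):
        super().__init__()
        # Truncated SVD of the original (d_out, d_in) weight matrix.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.U = nn.Parameter(U[:, :k].clone())    # (d_out, k)
        self.S = nn.Parameter(S[:k].clone())       # (k,)
        self.Vh = nn.Parameter(Vh[:k, :].clone())  # (k, d_in)
        self.bias = nn.Parameter(bias.clone()) if bias is not None else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x @ W^T with W ~= U diag(S) Vh, without ever materializing W.
        y = ((x @ self.Vh.T) * self.S) @ self.U.T
        if self.bias is not None:
            y = y + self.bias
        return y
```

A reduced layer could then be built as LowRankLinear(linear.weight.data, k, linear.bias.data) and swapped in for the original nn.Linear; whether the factors stay trainable parameters or become buffers is a separate design choice for the refactoring.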

If you have more requests, please paste them below. Do note that the first version of the refactoring may not be able to do all of the above, but we'll do our best. We welcome PRs.