Closed yilunzhao closed 1 year ago
- Do you have to update the huggingface transformers library version?
Yes. There has not been an official release that supports LLAMA yet, so I installed the transformers library from source:
pip install git+https://github.com/huggingface/transformers
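After installing from source, it can be worth checking that the build actually exposes the LLaMA model classes (they were absent from the released versions at the time). This is a minimal sketch, not part of the repo; `has_llama_support` is a hypothetical helper name:

```python
# Sanity check: does the installed transformers build expose LLaMA?
# Uses only the standard library to probe, so it degrades gracefully
# when transformers is not installed at all.
import importlib.util


def has_llama_support() -> bool:
    # find_spec returns None when the package is not installed
    if importlib.util.find_spec("transformers") is None:
        return False
    try:
        # LlamaForCausalLM only exists in source builds at this time
        from transformers import LlamaForCausalLM  # noqa: F401
        return True
    except ImportError:
        return False
```

If this returns False after a source install, the environment is likely picking up an older pinned release of transformers.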
- On what device/GPU did you test the mathqa setting?
The experimental setting was recorded in this google doc. I will use this doc for updates.
To reiterate the action items discussed in the meeting:
- add the command to install the library so we can replicate the results
I just updated the requirements.txt in the second commit.
Also I was wondering if this PR is ready to merge? You also mentioned that there are some edge cases that haven't been handled?
Yes. LLAMA, Alpaca, and santacoder should work fine using the config file: /home/lily/yl2465/code/NLP4Code/finetuning/training_configs/few_shot/mathqa-8_fixed_mathqa_shots_llama.yaml
@yilunzhao Can you resolve the conflicts and also merge from main to run the CI tests?
Great, merging this PR now
@yilunzhao Check my comments on #46
Since we can't reopen a merged PR, can you submit a new PR and point it to this PR instead?
Let me know if you have any questions. Sorry about the confusion.
Hi @niansong1996, I have submitted a new PR #48, could you please take a look at it?
This time, I added an instruction to the README about how to install the transformers library from source to run LLAMA-based models, rather than modifying requirements.txt.
Working on #30 for this PR