OpenBMB / UltraFeedback

A large-scale, fine-grained, diverse preference dataset (and models).
MIT License

Code usage description #6

Open BramVanroy opened 10 months ago

BramVanroy commented 10 months ago

Hello

Thank you for your work! I'd like to build on your scripts to do something similar for a different language. Could you please describe the workflow needed to reproduce your work? I.e., which scripts to run in which order, where to store the downloaded FLAN, TruthfulQA, ... datasets, etc. It is currently unclear to me how to proceed.

Thanks!

Bram

lifan-yuan commented 8 months ago

Sorry for not getting back to you sooner.

To use this code to annotate data, you should prepare your data under ./completion_data, with each line of the JSON file containing at least one key: {"instruction": INSTRUCTION}. Then, run sample.py to sample four models for each instruction.
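For concreteness, a minimal sketch of preparing such an input file (the file name `my_dataset.jsonl` and the example instructions here are purely illustrative, not from the repo):

```python
import json

# Each line of a JSON Lines file under ./completion_data must contain
# at least the "instruction" key; extra keys are allowed.
instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Summarize the plot of 'Hamlet' in two sentences.",
]

with open("my_dataset.jsonl", "w", encoding="utf-8") as f:
    for instr in instructions:
        f.write(json.dumps({"instruction": instr}, ensure_ascii=False) + "\n")

# Sanity check: every line parses as JSON and has the required key.
with open("my_dataset.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
assert all("instruction" in r for r in records)
print(len(records))  # → 2
```

After placing a file like this under ./completion_data, sample.py can pick it up and assign four models to each instruction.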

Once you have specified the model path in main.py or main_vllm.py, you can generate completions for each instruction using the models sampled in the previous step. Both scripts produce the same output, but we recommend main_vllm.py, which uses vLLM for accelerated generation.

Finally, run annotate_preference.py for fine-grained preference annotations, or annotate_critique.py for critiques and overall scores.