tianyi-lab / Cherry_LLM

[NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models

May I ask if this project is suitable for other large models, such as the Baichuan model, to filter high-quality datasets from other fields? #6

Closed wuQi-666 closed 1 year ago

wuQi-666 commented 1 year ago

May I ask if this project is suitable for other large models, such as the Baichuan model, to filter high-quality datasets from other fields?

MingLiiii commented 1 year ago

Hi, thanks for your interest. We have tried our method on llama2-7b and 13b, and it works fine.

Our method is built on the widely accepted hypothesis that an LLM has already learned all the knowledge it needs during pre-training and only needs to learn alignment from instruction tuning. Thus, we believe that as long as another LLM satisfies this hypothesis, our method should work fine on it.
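Because the score is computed entirely from the model's own losses (no external judge model), any Hugging Face causal LM can in principle be plugged in. Below is a minimal sketch of the idea, roughly the ratio of the answer's perplexity conditioned on the instruction to its perplexity alone; it is not the repository's exact script, and the Baichuan checkpoint name is only an example.

```python
# Minimal sketch (not the repo's exact script) of a perplexity-based difficulty
# score computed with an arbitrary Hugging Face causal LM.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "baichuan-inc/Baichuan2-13B-Chat"  # example checkpoint; swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16, trust_remote_code=True
)
model.eval()

@torch.no_grad()
def answer_perplexity(prompt: str, answer: str) -> float:
    """Perplexity of the answer tokens, optionally conditioned on a prompt."""
    answer_ids = tokenizer(answer, return_tensors="pt", add_special_tokens=False).input_ids
    if prompt:
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # do not score the prompt tokens
    else:
        input_ids = answer_ids
        labels = input_ids.clone()
    input_ids, labels = input_ids.to(model.device), labels.to(model.device)
    loss = model(input_ids, labels=labels).loss  # mean cross-entropy over scored tokens
    return math.exp(loss.item())

def difficulty_score(instruction: str, response: str) -> float:
    """Conditioned perplexity over direct perplexity; higher means harder to follow."""
    return answer_perplexity(instruction, response) / answer_perplexity("", response)
```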

As for Baichuan, I don't know much about this model, and I haven't found many instruction-tuned models based on it; the only one I found is Baichuan-13B-Chat, which has a win rate of 21.80%. Are there any other models based on it?

wuQi-666 commented 1 year ago

Thanks for your reply. I am now using the Baichuan2-13B-Chat model for validation on the "alpaca_data.json" dataset. Also, could you tell me about other general methods that can automatically filter the high-quality fine-tuning data that large models need during instruction fine-tuning?

MingLiiii commented 1 year ago

I am also curious about how Baichuan2-13B-Chat will perform on the commonly used instruction tuning datasets. If possible, please keep me updated~

I have to say I haven't seen many other methods that can automatically filter high-quality instruction tuning data, due to its special properties. There are methods that use ChatGPT as the filter, but I don't think that is what you need. Sorry about that.

tuqingwen commented 10 months ago

> Thanks for your reply. I am now using the Baichuan2-13B-Chat model for validation on the "alpaca_data.json" dataset. Also, could you tell me about other general methods that can automatically filter the high-quality fine-tuning data that large models need during instruction fine-tuning?

Hello! May I ask whether you have successfully implemented high-quality data filtering for the Cherry project using Baichuan2-13B-Chat? When I called Baichuan2-7B-Chat, an error occurred as shown in the attached screenshot (屏幕截图 2024-01-17 111507). The loading code is:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# args.model_name_or_path comes from the script's argparse arguments
model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    device_map="auto",
    trust_remote_code=True,  # Baichuan ships custom modeling code
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(
    args.model_name_or_path,
    trust_remote_code=True,
)
model.eval()
```
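Since the screenshot isn't reproduced in this thread, one way to narrow the problem down is to check whether the Baichuan2 checkpoint loads and runs a forward pass on its own, outside the Cherry scripts. A minimal check, reusing the `model` and `tokenizer` from the snippet above (the test prompt is arbitrary), might look like:

```python
# Standalone sanity check: run one forward pass with the already-loaded model.
import torch

input_ids = tokenizer("Hello, please introduce yourself.", return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print("loss on test prompt:", loss.item())
```

If this runs cleanly, the problem is more likely in how the scoring script builds its prompts or batches; if it fails here too, the traceback should point at the Baichuan modeling code pulled in via `trust_remote_code`.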