zorazrw / filco

[Preprint] Learning to Filter Context for Retrieval-Augmented Generation
https://arxiv.org/pdf/2311.08377.pdf
Creative Commons Attribution Share Alike 4.0 International

About cxmi code #9

Open 2282588541a opened 8 months ago

2282588541a commented 8 months ago

I tried to run this project on my device with transformers version 4.25.1. When I use model = transformers.AutoModelForSeq2SeqLM.from_pretrained("/datas/huggingface/Llama-2-7b-hf"), it returns KeyError: 'llama'. I also tried the newest transformers version and changed to model = transformers.AutoModelForCausalLM.from_pretrained("/datas/huggingface/Llama-2-7b-hf"), but it returns ValueError: Expected input batch_size (9) to match target batch_size (1). Can you help me solve this problem? Thank you!
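
For context: transformers only added the llama architecture around v4.28, so 4.25.1 fails with KeyError: 'llama' before the weights are even touched. A minimal loading sketch, assuming the local directory is simply a copy of meta-llama/Llama-2-7b-hf:

```python
# Requires transformers >= 4.28, which registers the "llama" architecture;
# on 4.25.1 the config lookup raises KeyError: 'llama'.
import transformers

path = "/datas/huggingface/Llama-2-7b-hf"  # local copy of meta-llama/Llama-2-7b-hf

tokenizer = transformers.AutoTokenizer.from_pretrained(path)
# Llama is decoder-only, so AutoModelForCausalLM is the matching auto class;
# AutoModelForSeq2SeqLM only covers encoder-decoder models such as T5.
model = transformers.AutoModelForCausalLM.from_pretrained(path)
```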

zorazrw commented 7 months ago

What is the "/datas/huggingface/Llama-2-7b-hf" that you specified here? Is it the same as meta-llama/Llama-2-7b-hf provided by huggingface?

2282588541a commented 6 months ago

Yes, just a local path to that model.

exceedzhang commented 5 months ago

The same question @zorazrw

DLiquor commented 4 months ago

Hi, actually I found this mistake too. I think it is because the input and labels do not match what the decoder-only model expects:

input_ids += source_ids + target_ids
labels += source_mask + target_ids
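
A minimal sketch of this construction, under the assumption that source_mask is a list of -100 values (the ignore index of the cross-entropy loss) and that source_text / target_text stand in for the real dataset fields. For a decoder-only model, input_ids and labels must be the same length, which is exactly what the batch-size mismatch (9 vs. 1) complains about:

```python
# Hypothetical example of the fix above; source_text and target_text are
# placeholders, and tokenizer is the one loaded earlier in the thread.
source_text = "question plus retrieved context ..."
target_text = "gold answer"

source_ids = tokenizer(source_text, add_special_tokens=False)["input_ids"]
target_ids = tokenizer(target_text, add_special_tokens=False)["input_ids"]

source_mask = [-100] * len(source_ids)  # -100 positions are ignored by the loss
input_ids = source_ids + target_ids     # the model conditions on the source
labels = source_mask + target_ids       # loss is computed on target tokens only
```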

DLiquor commented 4 months ago

Sadly, after I tackled the above issue, the result of sent_wise_diff came out as NaN.
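
One plausible cause of the NaN, offered only as a guess: if every label position in a sentence is masked to -100, a plain mean over the token losses divides by zero, and if sent_wise_diff is the difference of two such scores (with and without the context), a NaN in either term propagates. A guarded per-sentence loss sketch:

```python
import torch
import torch.nn.functional as F

def per_sentence_nll(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean negative log-likelihood over unmasked (non -100) label positions."""
    nll = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
        reduction="none",
    )
    n_valid = (labels.view(-1) != -100).sum()
    # With zero unmasked labels, a plain mean would be 0 / 0 = NaN;
    # clamping the denominator keeps the score finite.
    return nll.sum() / n_valid.clamp(min=1)
```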