weizhepei / InstructRAG

InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales
https://weizhepei.com/instruct-rag-page
MIT License

A question about cache_dir #2

Closed NoW0NDER closed 1 month ago

NoW0NDER commented 5 months ago

Dear author, could you please tell me what this cache is? Thanks a lot.

```
~/workspace/InstructRAG-main$ sh generate_rationale.sh
usage: inference.py [-h] [--dataset_name DATASET_NAME]
                    [--rag_model {InstructRAG-FT,InstructRAG-ICL}]
                    [--model_name_or_path MODEL_NAME_OR_PATH]
                    [--load_local_model] [--do_rationale_generation]
                    [--n_docs N_DOCS] [--output_dir OUTPUT_DIR]
                    [--cache_dir CACHE_DIR]
                    [--prompt_dict_path PROMPT_DICT_PATH]
                    [--temperature TEMPERATURE] [--max_tokens MAX_TOKENS]
                    [--seed SEED] [--max_instances MAX_INSTANCES]
inference.py: error: argument --cache_dir: expected one argument
```

NoW0NDER commented 5 months ago

I think I have fixed the problem. So does vLLM need the models to be downloaded manually instead of automatically?

weizhepei commented 5 months ago

Hi @NoW0NDER, many thanks for bringing this up! The argument --cache_dir is actually optional, and I've updated the script accordingly. Let me know if you have any further questions.
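For context, the "expected one argument" error above is what `argparse` raises when `--cache_dir` is passed without a value (for example, via an unset shell variable in `generate_rationale.sh`). A minimal sketch of how an optional `--cache_dir` behaves (hypothetical parser, not the repo's actual `inference.py`):

```python
import argparse

def build_parser():
    # Hypothetical subset of the inference.py CLI shown in the usage message.
    parser = argparse.ArgumentParser(prog="inference.py")
    # With a default, the flag is optional: omitting it entirely is fine.
    # Passing "--cache_dir" followed by nothing (e.g. an empty shell
    # variable) still triggers "error: argument --cache_dir: expected one
    # argument", because the flag itself demands exactly one value.
    parser.add_argument("--cache_dir", type=str, default=None)
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.cache_dir)
```

So the fix on the script side is to drop the `--cache_dir` flag entirely when no cache directory is set, rather than passing it with an empty value.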