Closed: NoW0NDER closed this issue 1 month ago
I think I have fixed this problem. So do the models need to be downloaded manually for vLLM instead of being downloaded automatically?
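For reference, vLLM can fetch a model from the Hugging Face Hub automatically, or load weights you have already downloaded from a local path. A minimal sketch (not the InstructRAG code; the model name and paths below are placeholders, and gated models also need a valid Hugging Face token in the environment):

```python
from vllm import LLM, SamplingParams

# Option 1: let vLLM download the model from the Hugging Face Hub automatically.
# download_dir controls where the downloaded weights are cached.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct",
          download_dir="/path/to/hf_cache")

# Option 2: point vLLM at weights that were downloaded manually.
# llm = LLM(model="/path/to/local/Meta-Llama-3-8B-Instruct")

outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16, temperature=0.0))
print(outputs[0].outputs[0].text)
```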
Hi @NoW0NDER, many thanks for bringing this up! The argument --cache_dir is actually optional, and I've updated the script accordingly. Let me know if you have any further questions.
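For readers hitting the same error, here is a minimal sketch of how an optional --cache_dir can be declared with argparse; the actual declaration and defaults in inference.py may differ:

```python
import argparse

parser = argparse.ArgumentParser()
# Optional cache directory for downloaded model weights; when it is omitted,
# downstream libraries typically fall back to the default Hugging Face cache
# (~/.cache/huggingface).
parser.add_argument("--cache_dir", type=str, default=None,
                    help="Where to store/download model weights (optional).")
args = parser.parse_args()
print(args.cache_dir)  # None when the flag is not passed
```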
Dear author, could you please tell me what this cache is? Thanks a lot.
~/workspace/InstructRAG-main$ sh generate_rationale.sh
usage: inference.py [-h] [--dataset_name DATASET_NAME] [--rag_model {InstructRAG-FT,InstructRAG-ICL}] [--model_name_or_path MODEL_NAME_OR_PATH]
                    [--load_local_model] [--do_rationale_generation] [--n_docs N_DOCS] [--output_dir OUTPUT_DIR] [--cache_dir CACHE_DIR]
                    [--prompt_dict_path PROMPT_DICT_PATH] [--temperature TEMPERATURE] [--max_tokens MAX_TOKENS] [--seed SEED] [--max_instances MAX_INSTANCES]
inference.py: error: argument --cache_dir: expected one argument
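For context, "expected one argument" is what argparse prints when --cache_dir is passed without a value, for example when the shell script expands an unset variable or leaves the flag as the last token. A minimal reproduction, assuming a standard argparse setup rather than the actual inference.py code:

```python
import argparse

parser = argparse.ArgumentParser(prog="inference.py")
parser.add_argument("--cache_dir", type=str)

# Passing the flag with no value reproduces the message from the log:
# "inference.py: error: argument --cache_dir: expected one argument".
try:
    parser.parse_args(["--cache_dir"])
except SystemExit:
    pass

# Passing a value (or omitting the flag entirely) parses fine:
args = parser.parse_args(["--cache_dir", "/path/to/hf_cache"])
print(args.cache_dir)
```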