AkariAsai / self-rag

This repository contains the original implementation of Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
https://selfrag.github.io/
MIT License

Problems running run_long_form_static.py #76

Closed pzwstudy closed 4 months ago

pzwstudy commented 4 months ago
1. When I finished executing ASQA.sh, there was no result in my ASQA.out file:

   ```
   WARNING 05-09 14:07:49 config.py:467] Casting torch.bfloat16 to torch.float16.
   INFO 05-09 14:07:49 llm_engine.py:73] Initializing an LLM engine with config: model='/mnt/data/home/usera6k04/project/self-rag/llama2-7b', tokenizer='/mnt/data/home/usera6k04/project/self-rag/llama2-7b', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir='.cache', load_format=auto, tensor_parallel_size=1, quantization=None, enforce_eager=False, seed=0)
   INFO 05-09 14:08:20 llm_engine.py:223] # GPU blocks: 302, # CPU blocks: 512
   INFO 05-09 14:08:22 model_runner.py:394] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
   INFO 05-09 14:08:26 model_runner.py:437] Graph capturing finished in 4 secs.
   ```

2. When I execute FactScore.sh, an error appears in my FactScore.error.out:

   ```
   Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
   (the line above is repeated four times)

   Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 1.80it/s]
   Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]
   Traceback (most recent call last):
     File "run_long_form_static.py", line 441, in <module>
       main()
     File "run_long_form_static.py", line 380, in main
       "cat": item["cat"], "intermediate": intermediate["original_splitted_sentences"][0]})
   KeyError: 'original_splitted_sentences'
   ```
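The traceback points at an unguarded dictionary lookup: the result-collection code at line 380 assumes the generation step always populated `intermediate["original_splitted_sentences"]`, and raises when it did not. A minimal defensive sketch of a workaround is below; the key name and the `[0]` access come from the traceback, while the helper name and the empty-list fallback are assumptions, not the project's actual code:

```python
def extract_intermediate(intermediate):
    """Safely pull the first splitted-sentence list out of an intermediate
    result dict. Returns an empty list when the generation step did not
    produce 'original_splitted_sentences' -- the missing key is exactly
    what triggers the KeyError reported in this issue."""
    splitted = intermediate.get("original_splitted_sentences")
    if not splitted:
        return []
    return splitted[0]
```

A patch like this only masks the symptom (items would be written with an empty `"intermediate"` field); the underlying question is why the long-form pipeline skipped the sentence-splitting step for that prompt in the first place.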