maitrix-org / llm-reasoners

A library for advanced large language model reasoning
https://www.llm-reasoners.net/
Apache License 2.0
1.44k stars 114 forks

React HotpotQA #110

Closed AU-CPU closed 2 days ago

AU-CPU commented 3 days ago

```
CUDA_VISIBLE_DEVICES=0 python3 -m torch.distributed.run --nproc_per_node 1 \
    examples/ReAct/hotpotqa/inference.py --base_lm llama3 --model_dir /xx/llama3-8b/ \
    --llama_size "8B" --batch_size 1 --temperature 0
```

I am unable to run inference.py correctly.

```python
reasoner = Reasoner(
    world_model=HotpotqaWorldModel(world_base_model, toolset),
    search_config=HotpotqaSearchConfig(base_model),
    search_algo=GreedySearch(7),
)
```

```
TypeError: Hotpotqaevaluator.__init__() missing 1 required positional argument: 'toolset'
```

The cause seems to be that `toolset` is undefined: I could not find a definition of `toolset` in inference.py. I hope to get a reply.

AU-CPU commented 3 days ago

@Yanbin-Yin

Yanbin-Yin commented 3 days ago

Line 6: `from reasoners.tools import wikisearch, wikilookup, wikifinish`

In this example we use just these three tools, and you can use them to define a toolset.
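Based on the constructor call above (`HotpotqaWorldModel(world_base_model, toolset)`), a plain list of the three tools appears to be what is expected. A minimal sketch, with hypothetical stub functions standing in for the real `wikisearch`/`wikilookup`/`wikifinish` so it is self-contained:

```python
# Stubs standing in for the real tools imported via
# `from reasoners.tools import wikisearch, wikilookup, wikifinish`.
def wikisearch(**kwargs):
    return "search-result"

def wikilookup(**kwargs):
    return "lookup-result"

def wikifinish(**kwargs):
    return "finish-result"

# Define the toolset as the three tools in order; pass this as the
# second argument to HotpotqaWorldModel.
toolset = [wikisearch, wikilookup, wikifinish]
```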

AU-CPU commented 3 days ago

I use `from reasoners.tools import wikisearch, wikilookup, wikifinish` and changed the code to:

```python
if "Search" in action:
    new_state = current_state + self.toolset[0](env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    # new_state = current_state + HotpotQATools.wikisearch(env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    print("Search tool is used")
elif "Lookup" in action:
    new_state = current_state + self.toolset[0](env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    # new_state = current_state + HotpotQATools.wikilookup(env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    print("Lookup tool is used")
elif "Finish" in action:
    new_state = current_state + self.toolset[0](env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    # new_state = current_state + HotpotQATools.wikifinish(env=self.base_model, step_idx=step_idx, action=action, thought=thought)
    print("Finish tool is used")
```

It can be run, but acc = 0.

AU-CPU commented 3 days ago

For example:

```
correct=False, output=None, answer='Chief of Protocol'; accuracy=0.000 (0/2)
```

Yanbin-Yin commented 3 days ago

This means the tools are not actually being used. We have three tools, but you only ever call `toolset[0]`.
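The fix Yanbin-Yin describes is to dispatch each action keyword to its own tool instead of always calling `toolset[0]`. A hedged sketch, with hypothetical stubs standing in for the real tools:

```python
# Stubs standing in for wikisearch, wikilookup, wikifinish.
def wikisearch(**kwargs):
    return "searched"

def wikilookup(**kwargs):
    return "looked up"

def wikifinish(**kwargs):
    return "finished"

toolset = [wikisearch, wikilookup, wikifinish]

def apply_tool(action, **kwargs):
    """Pick the tool that matches the action keyword (order matches toolset)."""
    if "Search" in action:
        return toolset[0](action=action, **kwargs)   # wikisearch
    elif "Lookup" in action:
        return toolset[1](action=action, **kwargs)   # wikilookup, not toolset[0]
    elif "Finish" in action:
        return toolset[2](action=action, **kwargs)   # wikifinish, not toolset[0]
    raise ValueError(f"unrecognized action: {action}")
```

In the issue's code this corresponds to changing the index in the `Lookup` branch to `self.toolset[1]` and in the `Finish` branch to `self.toolset[2]`.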

Yanbin-Yin commented 3 days ago

You can print `new_state` and `current_state` to check whether these tools are actually used.

AU-CPU commented 3 days ago

It looks like the error is here. I added a print inside the try block:

```python
print("action", action)
thought, action = action.strip().split(f"\nAction {step_idx}: ")
```

and found that the action exceeds the max length.
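When the generation is truncated, the `\nAction {step_idx}: ` delimiter may be missing, so the tuple unpacking of `split()` raises. A sketch of a guarded version of this parsing step (the `parse_step` helper is hypothetical, not part of the library):

```python
def parse_step(text, step_idx):
    """Split a model generation into (thought, action) on the ReAct delimiter."""
    parts = text.strip().split(f"\nAction {step_idx}: ")
    if len(parts) != 2:
        # Truncated or malformed generation: surface it instead of crashing
        # on tuple unpacking.
        raise ValueError(f"could not find 'Action {step_idx}:' in output: {text!r}")
    thought, action = parts
    return thought, action
```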

AU-CPU commented 3 days ago


I guess this is because I used Llama-2-70B-GPTQ through ExLlamaModel. The meta-llama/Meta-Llama-3-8B checkpoint I found does not have a params.json file, so I cannot use llama3 correctly. Can you provide the address of the Llama weights you are using? @Yanbin-Yin

AU-CPU commented 2 days ago

This has been resolved.