1989Ryan / llm-mcts

[NeurIPS 2023] We use large language models as a commonsense world model and a heuristic policy within Monte-Carlo Tree Search, enabling better-reasoned decision-making for daily task planning problems.
https://llm-mcts.github.io/
Apache License 2.0

What is the code for LLM-MCTS result corresponding to? #1

Closed: weizhenFrank closed this issue 6 months ago

weizhenFrank commented 6 months ago

Hi,

Thanks for sharing the code for your paper "Large Language Models as Commonsense Knowledge for Large-Scale Task Planning". The L-model and L-policy framework is a very interesting and effective idea!

I'm trying to run your code, and I have two questions:

  1. The original code can't find data/object_info.json, so I copied vh/data_gene/dataset/object_info.json to ./data/object_info.json. But I also found another object_info.json at vh/data_gene/gen_data/data/object_info.json. Which one should I use?

  2. I used vh/data_gene/dataset/object_info.json and copied the "objects_switchonoff" entry from vh/data_gene/gen_data/data/object_info.json into ./data/object_info.json, since vh/data_gene/dataset/object_info.json doesn't contain that part. I then ran the LLM-MCTS code and got a success rate of 0.3125. Is this correct?
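For reference, the manual copy described in (2) can be sketched as a small script. The paths are the ones from this thread; the merge_object_info helper is just illustrative, not part of the repo:

```python
import json
from pathlib import Path


def merge_object_info(base: dict, extra: dict, key: str = "objects_switchonoff") -> dict:
    """Return a copy of `base` with `key` copied over from `extra`."""
    merged = dict(base)
    merged[key] = extra[key]
    return merged


if __name__ == "__main__":
    # Paths taken from the question above
    base_path = Path("vh/data_gene/dataset/object_info.json")
    extra_path = Path("vh/data_gene/gen_data/data/object_info.json")
    out_path = Path("data/object_info.json")

    if base_path.exists() and extra_path.exists():
        base = json.loads(base_path.read_text())
        extra = json.loads(extra_path.read_text())
        out_path.parent.mkdir(parents=True, exist_ok=True)
        # Write the merged file that the LLM-MCTS code expects at data/object_info.json
        out_path.write_text(json.dumps(merge_object_info(base, extra), indent=2))
```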

Can you explain which result in the paper the following command corresponds to?

    --exploration_constant 24 \
    --max_episode_len 50 \
    --max_depth 20 \
    --round 0 \
    --simulation_per_act 2 \
    --simulation_num 100 \
    --discount_factor 0.95  \
    --uct_type PUCT \
    --mode simple \
    --seen_item \
    --seen_apartment \
    --seen_comp

Is this the Seen Home and NovelComp. (3) setting in your Table 1?
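For context on the flags above: --uct_type PUCT and --exploration_constant refer to the standard PUCT selection rule from MCTS. A minimal sketch of that rule (a schematic of the textbook formula, not the repo's exact implementation; puct_score is an illustrative name):

```python
import math


def puct_score(q: float, prior: float, parent_visits: int, child_visits: int,
               c: float = 24.0) -> float:
    """Standard PUCT action score: exploitation term plus a prior-weighted
    exploration bonus. `c` plays the role of --exploration_constant."""
    exploration = c * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration
```

During selection, the child with the highest score is expanded; with c = 24 the search leans heavily on the LLM-provided priors early on, since unvisited children (child_visits = 0) get the full bonus.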

1989Ryan commented 6 months ago

Thank you for your feedback. The file has been uploaded, and the bug has been fixed. Please check out the latest code.