liruiw / GenSim

Generating Robotic Simulation Tasks via Large Language Models
https://liruiw.github.io/gensim
MIT License
294 stars 24 forks

prompts/data/base_tasks.json Not Found #2

Closed Kami-code closed 1 year ago

Kami-code commented 1 year ago

Hi @liruiw. Thanks for providing such a great project. When I was running the command in your examples, I got the error below. It seems that some files are missing. Can you help me figure it out? Thanks again!

python autosim/run_simulation.py disp=True prompt_folder=bottomup_task_generation_prompt save_memory=True load_memory=True task_description_candidate_num=10 use_template=True

pybullet build time: Mar 26 2022 03:00:52
/home/baochen/anaconda3/envs/hand_teleop-master/lib/python3.8/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'data': Defaults list is missing `_self_`. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
  warnings.warn(msg, UserWarning)
use gpt model: gpt-4-0613
Error executing job with overrides: ['disp=True', 'prompt_folder=bottomup_task_generation_prompt', 'save_memory=True', 'load_memory=True', 'task_description_candidate_num=10', 'use_template=True']
Traceback (most recent call last):
  File "autosim/run_simulation.py", line 36, in main
    memory = Memory(cfg)
  File "/home/baochen/Desktop/projects/GenSim/autosim/memory.py", line 26, in __init__
    base_tasks, base_assets, base_task_codes = self.load_offline_memory()
  File "/home/baochen/Desktop/projects/GenSim/autosim/memory.py", line 104, in load_offline_memory
    base_tasks = json.load(open(base_task_path))
FileNotFoundError: [Errno 2] No such file or directory: 'prompts/data/base_tasks.json'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
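For anyone hitting the same error before pulling the fix: the failure comes from an unguarded `json.load(open(...))` on a path that does not exist in the checkout. A minimal sketch of a more defensive load (the function name `load_base_tasks` and the error message are illustrative, not GenSim's actual API):

```python
import json
import os

def load_base_tasks(base_task_path="prompts/data/base_tasks.json"):
    """Load the base-task memory, failing with an actionable message
    if the data file is missing from the repository checkout."""
    if not os.path.exists(base_task_path):
        raise FileNotFoundError(
            f"{base_task_path} not found; make sure your checkout "
            "includes the prompts/data files (git pull the latest commit)."
        )
    with open(base_task_path) as f:
        return json.load(f)
```

This turns the bare `FileNotFoundError` into a hint about the actual cause (an incomplete checkout) instead of a raw stack trace.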

liruiw commented 1 year ago

Thanks for pointing out the missing files. I just made an update. You can pull and try again.

Kami-code commented 1 year ago

Thanks for responding so quickly!

Kami-code commented 1 year ago

Hi @liruiw. I have two more questions. How much did it cost to produce your work, in terms of tokens requested or money spent per new environment? Also, generated tasks can fail for many reasons, for example being parsed incorrectly or being unsolvable by the oracle agent, so I would like to know the average success rate when creating a new environment. This information is important to me. Thank you very much!

liruiw commented 1 year ago

Each prompt chain used to generate an environment has 5 steps and uses roughly 20,000 tokens in total (I am not sure about this figure, and it would be great if someone measured it). When you run the demo code, it prints the average success rates, which depend on the complexity of the prompts and of the environments you try to generate. I used a $1000 budget for the entire project (including hundreds of trials and the actual experiments).
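For a rough sense of what ~20,000 tokens per chain implies in dollars, here is a back-of-envelope sketch. The prompt/completion split is a guess, and the prices assume gpt-4-0613's list rates at the time ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens); none of these numbers come from the GenSim authors:

```python
# Back-of-envelope cost estimate for one 5-step generation chain.
TOKENS_PER_CHAIN = 20_000       # total tokens per chain (from the comment above)
PROMPT_FRACTION = 0.75          # assumed: prompts dominate the token count
PROMPT_PRICE = 0.03 / 1000      # $/token, gpt-4-0613 prompt side (assumed rate)
COMPLETION_PRICE = 0.06 / 1000  # $/token, gpt-4-0613 completion side (assumed rate)

def chain_cost(tokens=TOKENS_PER_CHAIN, prompt_frac=PROMPT_FRACTION):
    """Estimate the dollar cost of one generation chain."""
    prompt_tokens = tokens * prompt_frac
    completion_tokens = tokens * (1 - prompt_frac)
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

cost = chain_cost()
print(f"~${cost:.2f} per chain, ~{1000 / cost:.0f} chains on a $1000 budget")
# → ~$0.75 per chain, ~1333 chains on a $1000 budget
```

Under these assumptions a $1000 budget covers on the order of a thousand chains, which is consistent with "hundreds of trials and actual experiments".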

Kami-code commented 1 year ago

Thanks for your valuable information!