OSU-NLP-Group / LLM-Planner

[ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
https://osu-nlp-group.github.io/LLM-Planner/
MIT License
129 stars 11 forks

Some issues currently #17

Open BatmanofZuhandArrgh opened 2 months ago

BatmanofZuhandArrgh commented 2 months ago

Hi, just FYI:

BatmanofZuhandArrgh commented 2 months ago

[screenshot: code from run_eval.py]

y'all are driving me crazy. This is in run_eval.py.

lxsy-xcy commented 1 month ago

Did you reproduce this codebase without any bugs? I'm trying to reproduce it, but I get a lot of "Nothing Happens" responses.

BatmanofZuhandArrgh commented 1 month ago

@lxsy-xcy No, I did a lot of modification before it worked, and it still has lots of bugs.

lxsy-xcy commented 1 month ago

> @lxsy-xcy No, I did a lot of modification before it worked, and it still has lots of bugs.

Sorry to hear that

charlotteannchen commented 1 month ago

I've successfully reproduced it, but I also received a lot of "Nothing Happens" (for almost all of the observations) while evaluating the 'eval_in_distribution' split. Is this normal? There are 0 plans in completed_plans for each task.

BatmanofZuhandArrgh commented 1 month ago

@charlotteannchen I believe "Nothing Happens" occurs because the natural language instruction output by the model is in the wrong format. In my installation of alfworld, it should look like "go to fridge 1", "take spoon 1", or similar, iirc. You'll have to check what the correct grammar is in your installation, though.
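A minimal sketch of what I mean, assuming ALFWorld-style command templates: the pattern list and the `is_valid_command` helper below are hypothetical and not part of this repo. Verify the exact templates against the admissible commands in your own ALFWorld installation before relying on them.

```python
# Sketch (not from the repo): check whether a generated low-level instruction
# matches ALFWorld-style command patterns before sending it to the simulator.
# The templates are assumptions modeled on commands like "go to fridge 1" or
# "take spoon 1 from countertop 2"; adjust to your installation's grammar.
import re

COMMAND_PATTERNS = [
    re.compile(r"^go to [a-z]+ \d+$"),
    re.compile(r"^take [a-z]+ \d+ from [a-z]+ \d+$"),
    re.compile(r"^put [a-z]+ \d+ (?:in|on) [a-z]+ \d+$"),
    re.compile(r"^open [a-z]+ \d+$"),
    re.compile(r"^close [a-z]+ \d+$"),
    re.compile(r"^toggle [a-z]+ \d+$"),
]

def is_valid_command(instruction: str) -> bool:
    """Return True if the instruction matches one of the assumed templates."""
    instruction = instruction.strip().lower()
    return any(p.match(instruction) for p in COMMAND_PATTERNS)

if __name__ == "__main__":
    # Commands the simulator understands typically include the object index
    # ("fridge 1"), so "go to the fridge" would be rejected and the simulator
    # would respond with "Nothing Happens".
    for cmd in ["go to fridge 1", "take spoon 1 from countertop 2", "go to the fridge"]:
        print(f"{cmd!r:40} -> {'ok' if is_valid_command(cmd) else 'rejected'}")
```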

chanhee-luke commented 4 weeks ago

Hi, there is an error with the underlying simulator (i.e., the simulator can't locate the object), so we reverted the code to only generate high-level plans for now. We are working on a fix. In the meantime, I recommend a recently released codebase (https://github.com/lbaa2022/LLMTaskPlanning) that covers similar functionality. Thanks for the interest!