You can find all the details of our approach in our paper here: https://arxiv.org/abs/2403.12761. Regarding the output in the image in point 4: yes, something is not working properly, because we have never encountered completely gibberish output. Sometimes the LLM gets stuck in endless generation, but it usually keeps the behavior tree syntax. I suggest you look at our example prompts here: https://github.com/AIRLab-POLIMI/BTGenBot/tree/master/prompt
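For readers unfamiliar with the expected format: a well-formed generation should be valid BehaviorTree.CPP-style XML. The sketch below only illustrates that shape; the node IDs and ports (`MoveTo`, `Pick`, `Place`, `goal`, `object`) are hypothetical placeholders, not the actual node library used in the BTGenBot prompts, so refer to the linked examples for the exact format.

```xml
<!-- Minimal sketch of a well-formed behavior tree in BehaviorTree.CPP-style XML.
     Node names and ports below are hypothetical examples, not the real BTGenBot nodes. -->
<root main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <Sequence>
      <Action ID="MoveTo" goal="kitchen"/>
      <Action ID="Pick" object="bottle"/>
      <Action ID="MoveTo" goal="table"/>
      <Action ID="Place" object="bottle"/>
    </Sequence>
  </BehaviorTree>
</root>
```

If a generation degenerates into text that breaks this XML structure (unclosed tags, non-XML tokens), that matches the "gibberish" failure mode discussed above rather than normal behavior.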
"Your assignment seems really interesting and impressive. So, I have a few questions.
method 1) For the example_task, when a simple text command is input, 2) it produces results according to a predefined output format, 3) and for the assignment, if an explanation for the example_task, such as place, object, coordinates, etc., is provided, 4) is it correct that in fine-tuning, some of the output keywords in the output are changed?"
I might have missed it, but is it correct that it is still impossible to generate a suitable behavior tree when the high-level task is given as an abstract command without precise information (e.g., no place), and supplemental details (place, object, coordinates) are then provided separately for the task?
Or, is there a specific XML syntax that must be used whenever an abstract command is input?
Sometimes meaningless words follow, as shown in the attached photos. Does this mean it is not working properly?
Your generosity has truly made a significant impact, and I am immensely thankful for everything.