resistzzz opened this issue 6 days ago
We construct the graph based on the dependencies between tasks rather than from ground-truth trajectories. Specifically, if the output of task A can serve as an input for task B, we add a directed edge from task A to task B.
Example: In HuggingFace, the output of Pose Detection is a textual description of a pose, which can be used as input for Pose-to-Image. Similarly, the output of Summarization can serve as input for Translation. The code for constructing this dependency-guided graph is available in TaskBench's repository (https://github.com/microsoft/JARVIS/blob/main/taskbench/generate_graph.py, Lines 13-17).
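For illustration, here is a minimal sketch of this kind of dependency-guided construction, assuming each tool declares its input and output types. The tool metadata and the type-matching rule below are simplified placeholders for illustration, not the exact logic in generate_graph.py:

```python
# Minimal sketch (not the exact TaskBench code): link task A -> task B
# whenever one of A's output types matches one of B's input types.
# The tool metadata here is hypothetical and only for illustration.
tools = {
    "Pose Detection": {"inputs": ["image"], "outputs": ["text"]},
    "Pose-to-Image": {"inputs": ["text"], "outputs": ["image"]},
    "Summarization": {"inputs": ["text"], "outputs": ["text"]},
    "Translation": {"inputs": ["text"], "outputs": ["text"]},
}

edges = []
for src, src_meta in tools.items():
    for dst, dst_meta in tools.items():
        if src == dst:
            continue
        # Add a dependency edge if any output of src can feed an input of dst.
        if set(src_meta["outputs"]) & set(dst_meta["inputs"]):
            edges.append((src, dst))

print(edges)  # e.g. ("Pose Detection", "Pose-to-Image"), ("Summarization", "Translation"), ...
```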
Here are the specific details regarding the task graph construction for each dataset:
- `format_tool_graph_files` (starting from Line 103)
- `construct_task_graph` (starting from Line 124)

Sorry for the late reply. Should you have any further questions, please feel free to reach out!
After reading your paper, I have a question about the graph.
How do you construct the task graph, or where does the task graph come from? I guess the graph is perhaps constructed from the ground-truth task steps? That is, if "TaskA after TaskB" appears in the ground-truth planning, there will be an edge "from TaskA to TaskB" in the graph.
I don't know whether my understanding is correct. Could you provide some explanation?
Thank you for your valuable contribution!
Best wishes