Hi! Thank you for the great work.

I'm wondering how to train ToolLLaMA. In your paper's appendix, you mention the training details: "We train the model in a multi-round conversation mode." However, in this code, `tool-llama-multi-rounds` is not supported. How did you train ToolLLaMA? Is `tool-llama-single-round` enough to reproduce the results?