isekulic opened this issue 1 year ago
Thanks! I will look into this problem this week.
@zqwerty, great thanks! I'm looking forward to your answer :)
I am sorry for the missing LLM documentation; I will add a README.
`LLM_US` is a little different from the previous user simulators: it is end-to-end and takes the user goal in natural language, so it is not a `PipelineAgent` (that is why I made `LLM_US` inherit from the `Agent` class). You can refer to the unit test function `test_LLM_US_RG` in `llm/user_similator.py` for example usage. You can replace the LLaMA model with ChatGPT for much better performance. For interaction between `LLM_US` and other pipeline agents, I will try to write an example script like #152.
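To make the description above concrete, here is a minimal, self-contained sketch of what an end-to-end, goal-conditioned user simulator with an Agent-style interface looks like. Only the `init_session(goal, example_dialog=None)` signature comes from this thread; the class name, the `response` method, and the `backend` hook are illustrative placeholders, not the actual ConvLab-3 API.

```python
# Illustrative sketch only: a toy stand-in for an end-to-end user simulator
# that takes the user goal in natural language (as LLM_US is described to do).
# Everything except the init_session signature is a hypothetical placeholder.

class ToyLLMUserSimulator:
    """Agent-style simulator: replies end-to-end, conditioned on a goal."""

    def __init__(self, backend=None):
        # `backend` would be an LLM callable (e.g. LLaMA, or ChatGPT for
        # better performance); here a canned fallback keeps the sketch runnable.
        self.backend = backend

    def init_session(self, goal, example_dialog: str = None):
        # Signature mirrors the one mentioned in this thread.
        self.goal = goal
        self.history = []
        if example_dialog:
            self.history.append(("example", example_dialog))

    def response(self, system_utterance):
        # One Agent-style turn: condition on the goal and the dialog so far.
        self.history.append(("system", system_utterance))
        if self.backend is not None:
            user_utterance = self.backend(self.goal, self.history)
        else:
            user_utterance = f"(user, pursuing goal: {self.goal})"
        self.history.append(("user", user_utterance))
        return user_utterance


# One simulated exchange:
us = ToyLLMUserSimulator()
us.init_session("Book a cheap hotel in the north for 2 nights.")
print(us.response("Hello, how can I help you?"))
```

The key design point is that the goal is supplied at `init_session` time and the simulator then works turn-by-turn through `response`, which is why it fits the `Agent` interface rather than the `PipelineAgent` one.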
@zqwerty thank you for your comment! Indeed, I've seen the example usage in `llm/user_similator.py`. I'm looking forward to the example script enabling full interaction (like #152), which would make it possible to evaluate the LLM simulators and compare them to, e.g., Table 8 in your paper.
Same issue here. Could you please give an example of training any agent with the LLM user simulator?
Describe the feature Thank you for your work and for adding LLM-based models to the platform. I would be very grateful to see a working example of an LLM-based user simulator in the examples.
Expected behavior A working script that shows how to evaluate an LLM-based user simulator within the framework, for example like issue #152, where a script evaluates TUS within the framework, but using the newly added LLM-based models.
Additional context It seems that the examples and documentation (READMEs) were not updated to reflect the most recent changes (i.e., adding the LLM-based models). It is not clear how to use, e.g., `PipelineAgent` with the LLM-based models. One example is that `LLM_US.init_session(self, goal, example_dialog: str = None)` requires `goal` to be set, but that is not possible from the `PipelineAgent` class. Thank you for your aid :)
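One hypothetical way around the mismatch described above, while an official example script is pending, is to drive the dialog loop manually: sample or choose the goal outside the simulator, call `init_session(goal)` yourself, and then alternate turns with the system agent instead of going through `PipelineAgent`. The sketch below uses stub classes so it runs standalone; only the `init_session(goal, example_dialog=None)` signature is taken from this thread, and every other name is illustrative, not ConvLab-3 API.

```python
# Hypothetical driver: supply the goal explicitly at session start, then run
# the turn loop by hand. The stubs below stand in for an LLM user simulator
# and a system agent; they are not ConvLab-3 classes.

class StubUserSim:
    def init_session(self, goal, example_dialog: str = None):
        self.goal = goal
        self.turns = 0

    def response(self, system_utterance):
        self.turns += 1
        return f"user turn {self.turns} toward goal: {self.goal}"

    def is_terminated(self):
        # Toy termination rule so the sketch halts after two user turns.
        return self.turns >= 2


class StubSystem:
    def response(self, user_utterance):
        return f"ack: {user_utterance}"


def run_dialog(user_sim, system_agent, goal, max_turns=10):
    """Drive one dialog, supplying the goal explicitly at session start."""
    user_sim.init_session(goal)  # the step PipelineAgent cannot do today
    system_utt = ""
    transcript = []
    for _ in range(max_turns):
        user_utt = user_sim.response(system_utt)
        transcript.append(("user", user_utt))
        if user_sim.is_terminated():
            break
        system_utt = system_agent.response(user_utt)
        transcript.append(("system", system_utt))
    return transcript


dialog = run_dialog(StubUserSim(), StubSystem(), "book a cheap hotel")
for speaker, utt in dialog:
    print(speaker, ":", utt)
```

Wrapping this loop around the real `LLM_US` and a pipeline system agent is roughly what an evaluation script in the spirit of #152 would need to do.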