Closed Reason-Wang closed 7 months ago
Hi, @Reason-Wang. Yes, actually that's how we conducted the evaluation. You may host your model as a server with fastchat and write a configuration file for it. Please see configs/agents/fs_agent.yaml for reference.
Great! Thanks for your answer.
My problem is that my GPU server cannot run Docker, so I rented a server without a GPU to run the task. My question is whether there is a way to run the model and the task separately. For example, load the model on the server with a GPU, and during testing have the task server send a request to the model server, which returns an output.
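For what it's worth, the split described above can be done over plain HTTP: fastchat can expose an OpenAI-compatible endpoint on the GPU machine, and the task machine only needs to POST JSON to it. Below is a minimal sketch of the client side, assuming the GPU server runs fastchat's OpenAI-compatible API server at a hypothetical address (`gpu-server:8000`) and serves a hypothetical model name (`vicuna-7b-v1.5`); adjust both to your setup.

```python
import json
import urllib.request

# Hypothetical address of the fastchat OpenAI-compatible API server
# running on the GPU machine -- replace host/port with your own.
MODEL_SERVER = "http://gpu-server:8000/v1/chat/completions"


def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }


def query_model(url: str, model: str, prompt: str) -> str:
    """POST the prompt to the remote model server and return its reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Run this on the CPU-only task server; the heavy lifting
    # happens on the GPU machine behind MODEL_SERVER.
    print(query_model(MODEL_SERVER, "vicuna-7b-v1.5", "Hello!"))
```

The task-side configuration would then just point at `MODEL_SERVER` instead of a locally loaded model, so the Docker-based task environment never needs GPU access.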