THUDM / AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
https://llmbench.ai
Apache License 2.0
2.03k stars · 138 forks

Separate server for task and model #81

Closed · Reason-Wang closed this 7 months ago

Reason-Wang commented 7 months ago

My problem is that my server cannot run Docker, so I rented a server without a GPU to run the tasks. My question is whether there is a way to run the model and the tasks separately: for example, load the model on the server with a GPU, and during testing have the task server send it a request and return the output.

zhc7 commented 7 months ago

Hi @Reason-Wang. Yes, that's actually how we conducted the evaluation. You can host your model as a server with FastChat and write a configuration file for it. Please see configs/agents/fs_agent.yaml for reference.
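For anyone finding this later, a minimal sketch of such a setup: on the GPU machine you would launch FastChat's controller and a model worker (e.g. `python3 -m fastchat.serve.controller` and `python3 -m fastchat.serve.model_worker --model-path <your model>`), then point the agent config at that controller. The field names below follow the general pattern of the configs under configs/agents/, but are illustrative only; check fs_agent.yaml in your checkout for the exact schema, and note that the module path, model name, and address here are placeholders:

```yaml
# Hypothetical agent config: evaluation host talks to a FastChat
# controller running on a separate GPU server. All values below are
# placeholders -- consult configs/agents/fs_agent.yaml for the real schema.
module: "src.client.agents.FastChatAgent"   # assumed agent class name
parameters:
  model_name: "my-model"                    # name the model worker registered under
  controller_address: "http://gpu-server:21001"  # FastChat controller on the GPU machine
  temperature: 0
  max_new_tokens: 512
```

With this split, the machine running the Docker-based tasks needs no GPU at all; it only issues HTTP requests to the FastChat controller.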

Reason-Wang commented 7 months ago

Great! Thanks for your answer.