UMass-Foundation-Model / Co-LLM-Agents

[ICLR 2024] Source codes for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models"
https://vis-www.cs.umass.edu/Co-LLM-Agents/

Inference time #22

Closed jliu4ai closed 6 months ago

jliu4ai commented 7 months ago

Hi, could you please tell me the expected inference time for TDW? On my desktop it takes around 50 hours. Is that normal?

Icefoxzhx commented 7 months ago

Could you provide more details on the question of "inference time for TDW"? Taking around 50 hours for one episode does not look normal. Could you check whether the GPU is actually in use? (e.g. run nvidia-smi and check that the TDW process is listed and using around 1 GB of GPU memory)
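One way to automate the check suggested above is to query nvidia-smi for the compute processes and their memory usage. A minimal sketch (the `TDW.x86_64` process name and the helper function are illustrative assumptions, not part of the project's code):

```python
import csv
import io
import subprocess

def gpu_compute_apps(raw=None):
    """Return a list of (process_name, used_memory) pairs from nvidia-smi.

    `raw` lets you pass captured output for testing; by default the function
    calls nvidia-smi, which is assumed to be on PATH on a machine with an
    NVIDIA GPU and driver installed.
    """
    if raw is None:
        raw = subprocess.check_output(
            ["nvidia-smi",
             "--query-compute-apps=process_name,used_memory",
             "--format=csv,noheader"],
            text=True,
        )
    # Each CSV row is "process_name, used_memory"; skip blank lines.
    rows = [r for r in csv.reader(io.StringIO(raw)) if r]
    return [(name.strip(), mem.strip()) for name, mem in rows]

# Hypothetical captured output: a TDW build using ~1 GB suggests the
# simulator is running on the GPU as expected.
sample = "TDW.x86_64, 1024 MiB\n"
print(gpu_compute_apps(sample))
```

If the TDW process does not appear in the list at all, the simulator is likely falling back to CPU rendering, which would explain the slowdown.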

jliu4ai commented 7 months ago

Thanks for the response. There are 24 episodes in TDW, and each of them takes around 2 hours to infer. Is this a normal inference speed?

StigLidu commented 6 months ago

Running TDW on a GPU is much faster than on a CPU. Our experiments ran on an RTX 2080, which takes about 20-30 minutes to finish an episode.

nchuly commented 2 months ago

Same problem. It looks like it takes more than 10 hours to load one episode! Could anyone help me?