From the associated paper, it seems that hardware latencies can play a big part in the overall performance of the policy. I noticed that in eval_real.py there is an action_exec_latency variable that is used for filtering actions inferred from the policy. At the same time, there is a separate latency value, robot_action_latency, that is used in bimanual_umi_env.py. What is the difference between these two latency values? Should they be the same value?
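For context on what I mean by "filtering", here is a minimal sketch of the pattern as I understand it, not the actual code from eval_real.py. The function name filter_stale_actions, the 16-step horizon, dt, and the 7-DoF action shape are all illustrative assumptions on my part:

```python
import time
import numpy as np

def filter_stale_actions(actions, action_timestamps, action_exec_latency):
    """Keep only actions that can still be executed on time (illustrative sketch).

    Any action whose target timestamp is earlier than
    now + action_exec_latency is assumed to reach the robot too late
    and is dropped before the remaining actions are scheduled.
    """
    actions = np.asarray(actions)
    action_timestamps = np.asarray(action_timestamps)
    cutoff = time.time() + action_exec_latency
    keep = action_timestamps > cutoff
    return actions[keep], action_timestamps[keep]

# Example: a 16-step action horizon at 10 Hz starting from the last observation time.
dt = 0.1
obs_time = time.time()
timestamps = obs_time + dt * np.arange(16)
actions = np.random.rand(16, 7)  # placeholder 7-DoF actions
live_actions, live_ts = filter_stale_actions(actions, timestamps, action_exec_latency=0.01)
```

My question is how robot_action_latency, which is consumed inside the environment rather than at this filtering step, relates to the value used here.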
I hope a UMI2 can be developed for easier use and assembly, and that it can use "Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition" for language control.