Closed · coderlemon17 closed this issue 7 months ago
Hi there! Thanks for your question. Yes, you should use the other branch to run the dexterous hand environment. The dexterous hand environment is based on the VRL3 algorithm, and it uses demonstrations to learn these tasks. For simplicity and reproducibility, I created a new branch for this environment. :)
Hi, thanks for your reply. I have a few more questions.

Regarding the FPS metric: is it computed as `total_frame_collected` / total training time, or as `frame_in_a_batch` / time to calculate a batch? Also, shouldn't this metric depend heavily on the hardware the algorithms are evaluated on? (i.e. different CPUs or GPUs might give significantly different results.)
Thanks for your question! You can check `logger.py` for the details of how the FPS is calculated, and the FPS indeed depends on your hardware. I tested on a Tesla A40.
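Roughly, the calculation looks like the following sketch (a simplified, hypothetical version for illustration; the actual `logger.py` in the repo may differ in details): frames collected since the previous log divided by the wall-clock time elapsed since that log.

```python
import time

class FPSMeter:
    """Hypothetical sketch of an FPS calculation, not the repo's logger.py.

    FPS here is the number of frames collected since the previous log
    divided by the wall-clock time elapsed since that log, so it reflects
    the whole training loop (env stepping + gradient updates) on the
    hardware being used.
    """

    def __init__(self):
        self._last_time = time.time()
        self._last_frame = 0

    def fps(self, total_frames: int) -> float:
        now = time.time()
        elapsed = now - self._last_time
        frames = total_frames - self._last_frame
        # Reset the window so the next call measures the next interval.
        self._last_time, self._last_frame = now, total_frames
        return frames / max(elapsed, 1e-8)


# Usage: call meter.fps(total_frames) each time a log line is written.
meter = FPSMeter()
```

Under that reading, the number is closer to your first definition (frames per wall-clock second over a logging window) than to a per-batch timing, which is also why it varies with the hardware.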
Thank you for your reply. Sorry, I missed Fig. 19; it's indeed what I want. However, if possible, could you kindly provide the results for each separate task, and maybe the data for plotting? We would like to cite it in our paper.
Also, about the evaluation pipeline for robosuite in Fig. 19: I haven't found a script for it. Based on my understanding, one trains with mode `train` and evaluates with `mode.eval-$DIFFICULTY`. I think there's a small typo here, where `eval_easy` should be `eval-easy`.
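To make my reading concrete, here is a hypothetical sketch (my own names, not the repo's code) of how I assume a mode string like `eval-$DIFFICULTY` gets split into a phase and a difficulty, and why the underscore form `eval_easy` would not be recognized:

```python
# Hypothetical sketch, not the repository's code: parsing a mode string
# such as "train" or "eval-easy" (i.e. "eval-$DIFFICULTY" after substitution).
def parse_mode(mode: str):
    """Return (phase, difficulty); difficulty is None for plain training."""
    if mode == "train":
        return "train", None
    phase, sep, difficulty = mode.partition("-")
    if phase != "eval" or not sep or not difficulty:
        # "eval_easy" (underscore) falls through to this error, which is
        # why the hyphenated form "eval-easy" matters.
        raise ValueError(f"unrecognized mode: {mode!r}")
    return phase, difficulty


assert parse_mode("train") == ("train", None)
assert parse_mode("eval-easy") == ("eval", "easy")
```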
Thanks for checking! Could you provide me with your email (or you can send me an email)? I can give you the results, model, data, etc.
Hi, thanks for your great work. However, I have trouble finding the Dexterous Hand Manipulation experiments mentioned in Sec. 4.1.3 of the paper. I searched the whole repo for Adroit / color_hard, but I could not find any code related to those experiment results. I noticed there's a separate branch called ViGen-adroit; do I have to check out that branch to use the generalization environment?