fengxiuyaun closed this issue 8 months ago
Hi, the figures you showed are from different tasks. Would you like to double-check? Our packaged training data is re-rendered with the same initial object states as those provided by Peract. The only difference is in the target gripper poses, since the motion planner used to generate the expert demonstrations is still stochastic, even when the same random seed is set.
Hi, I suppose you mean the texture difference. I suspect the images you loaded from our packaged data have values in the [-1, 1] range. If that is the case, your visualization code needs a small adjustment so that the images are mapped to the expected range (try [0, 1] float or [0, 255] uint8).
Other than that, what @twke18 wrote is correct: you should be able to see the same scenes for the same tasks.
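For reference, a minimal sketch of that rescaling (assuming the loaded image is a float32 array in [-1, 1]; the array and variable names below are placeholders, not part of our data-loading code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: stand-in for an RGB image loaded from the packaged data,
# assumed to be a float array with values in [-1, 1].
rgb = np.random.uniform(-1.0, 1.0, size=(256, 256, 3)).astype(np.float32)

# Map [-1, 1] floats to [0, 255] uint8 before visualizing.
rgb_uint8 = ((rgb + 1.0) / 2.0 * 255.0).clip(0, 255).astype(np.uint8)

plt.imshow(rgb_uint8)
plt.axis("off")
plt.show()
```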
Oh, sorry. I understand now.
Hi, thanks for your great work! I visualized the RLBench training set (https://huggingface.co/katefgroup/3d_diffuser_actor/blob/main/Peract_packaged.zip) and found a significant difference from the raw Peract data (https://drive.google.com/drive/folders/0B2LlLwoO3nfZfkFqMEhXWkxBdjJNNndGYl9uUDQwS1pfNkNHSzFDNGwzd1NnTmlpZXR1bVE?resourcekey=0-jRw5RaXEYRLe2W6aNrNFEQ), as shown in the figure.
Which one is correct?