nickgkan / 3d_diffuser_actor

Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations"
https://3d-diffuser-actor.github.io/
MIT License

RLBench dataset #12

Closed fengxiuyaun closed 8 months ago

fengxiuyaun commented 8 months ago

Hi, thanks for your great work! I visualized the RLBench training set (https://huggingface.co/katefgroup/3d_diffuser_actor/blob/main/Peract_packaged.zip) and found that it differs significantly from the raw PerAct data (https://drive.google.com/drive/folders/0B2LlLwoO3nfZfkFqMEhXWkxBdjJNNndGYl9uUDQwS1pfNkNHSzFDNGwzd1NnTmlpZXR1bVE?resourcekey=0-jRw5RaXEYRLe2W6aNrNFEQ), as shown in the figures below.

[two attached images]

Which one is correct?

twke18 commented 8 months ago

Hi, the figures you showed are from different tasks. Could you double-check? Our packaged training data is re-rendered with the same initial object states as those provided by PerAct. The only difference is in the target gripper poses, since the motion planner used to generate the expert demonstrations is stochastic, even when the same random seed is set.

nickgkan commented 8 months ago

Hi, I suppose you mean the texture difference. I suspect the images you loaded from our packaged data have values in the [-1, 1] range. If that is the case, your visualization code needs a small adjustment so that it maps our images to the correct range (try [0, 1] float or [0, 255] uint8).

Other than that, what @twke18 wrote is correct, you should be able to see the same scenes for the same tasks.
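For reference, a minimal rescaling sketch (this is not the repo's own loading code; it assumes the loaded RGB array is a float array in [-1, 1], and `rgb` is just a placeholder):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder standing in for an RGB image loaded from the packaged data,
# assumed to be a float array with values in [-1, 1].
rgb = np.random.uniform(-1.0, 1.0, size=(256, 256, 3))

# Map [-1, 1] -> [0, 1] float (e.g. for matplotlib) ...
rgb_01 = (rgb + 1.0) / 2.0
# ... or to [0, 255] uint8 (e.g. for OpenCV/PIL).
rgb_u8 = (rgb_01 * 255.0).clip(0, 255).astype(np.uint8)

plt.imshow(rgb_01)
plt.axis("off")
plt.show()
```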

fengxiuyaun commented 8 months ago

Oh, sorry. I understand now.
