BerkeleyAutomation / sd-maskrcnn

Code for SD Mask R-CNN Project
https://sites.google.com/view/wisdom-dataset/home

Need help with normalizing Depth Images and generating Segmask Images. #46

Closed: Ray000-0 closed this issue 4 years ago

Ray000-0 commented 4 years ago

Hi, I am trying to use pre-trained models with custom Depth Images.

I need help with two things:

1) How do I normalize the depth images so that they fit the official pre-trained models provided by BerkeleyAutomation?
2) How do I generate the segmask images from the depth images?

Thank you!

mjd3 commented 4 years ago

Hi @Ray000-0,

For your first question, check out #44. This explains the depth image class that we use for saving images, which includes the normalization step. For your second question, do you mean how can you generate the ground truth segmentation images for your own dataset? Or how do you use the pretrained model to get predictions from your dataset?
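(For later readers: a rough sketch of the kind of save path described in #44 might look like the following. This is not the code from #44 itself; it assumes the BerkeleyAutomation `perception` package is installed (`DepthImage` moved to `autolab_core` in newer releases), that depth values are float32 in meters, and the filenames and frame name are just placeholders.)

```python
# Rough sketch only; assumes the `perception` package (or `autolab_core` in
# newer releases) and depth values stored as float32 meters.
import numpy as np
from perception import DepthImage  # newer releases: from autolab_core import DepthImage

# hypothetical raw depth array, shape (H, W), float32, in meters
depth_data = np.load("raw_depth.npy").astype(np.float32)

# wrap it in the DepthImage class used by the Berkeley tooling;
# the frame name is an illustrative placeholder
depth_im = DepthImage(depth_data, frame="camera")

# DepthImage.save chooses the output format from the file extension,
# so this writes the underlying array to an .npy file
depth_im.save("depth_0.npy")
```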

Ray000-0 commented 4 years ago

> Hi @Ray000-0,
>
> For your first question, check out #44. This explains the depth image class that we use for saving images, which includes the normalization step. For your second question, do you mean how can you generate the ground truth segmentation images for your own dataset? Or how do you use the pretrained model to get predictions from your dataset?

Hi,

For the first question, I want to know how to normalize a TIFF depth image (floating-point values) and turn it into the NumPy array (.npy) file that the pre-trained model expects. I have figured out the second question. Thank you so much!

mjd3 commented 4 years ago

I'm not sure I understand your question. What exactly is the input that you have and what do you wish to normalize?

Ray000-0 commented 4 years ago

> I'm not sure I understand your question. What exactly is the input that you have and what do you wish to normalize?

Hi,

For example, in `gqcnn/scripts/policies/run_all_dex-net_3.0_examples.sh` there is the command line:

`python examples/policy.py GQCNN-3.0 --depth_image data/examples/single_object/primesense/depth_0.npy --segmask data/examples/single_object/primesense/segmask_0.png --config_filename cfg/examples/replication/dex-net_3.0.yaml`

For such a line, I was wondering how to generate `depth_0.npy` from TIFF depth images (floating-point values). If I know how to do that, I can generate .npy files from my own depth images and feed them directly (without tuning the weights) into the pre-trained models, GQCNN-3.0 in this case. Thanks!

mjd3 commented 4 years ago

Sorry for the late response, but if your question is just how to convert .tiff depth images to .npy, you can use any Python image-loading library (e.g., https://stackoverflow.com/questions/7569553/working-with-tiffs-import-export-in-python-using-numpy) and then use np.save to write the resulting array out as an .npy file.
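(A minimal sketch of that conversion, assuming Pillow is installed and the TIFF stores floating-point depth values, e.g. in meters; the filenames are placeholders.)

```python
# Minimal sketch: convert a floating-point .tiff depth image to .npy.
# Assumes Pillow is installed; filenames are placeholders.
import numpy as np
from PIL import Image

# load the TIFF depth image and view it as a float32 numpy array
depth = np.asarray(Image.open("my_depth.tiff"), dtype=np.float32)

# write it out as .npy so it can be passed to --depth_image
np.save("my_depth.npy", depth)
```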