jongchyisu / mvcnn_pytorch

MVCNN on PyTorch

Using Blender Script to Make Datasets #5

Open weixmath opened 5 years ago

weixmath commented 5 years ago

Hello! I used the dataset you provided and got the expected results. But when I try to use the Blender script from your project webpage to run other experiments (like other numbers of views), the accuracy is much lower. Do you preprocess the data in any other way besides the Blender script? Thank you!

jongchyisu commented 5 years ago

How many views and what accuracy did you get? For a different number of views you can just use the rendered images I provided directly; there's no need to render again, right? I didn't do any other preprocessing.

weixmath commented 5 years ago

I rendered the images for 12 views, kept the same settings in training stage 1, and got just 91% accuracy. But when I use your dataset, it reaches 94% in stage 1. I compared the two datasets and found only a small difference at the edges of the objects. Maybe a difference in the Blender settings?

jongchyisu commented 5 years ago

About the script for rendering shaded images: I actually lost the original file. When I was trying to reproduce the rendering, I didn't re-run the training. Is it very different?

weixmath commented 5 years ago

Yes, your dataset achieves 94% in stage 1 and 96.1% in stage 2 after fine-tuning, but the dataset I render with the script only achieves 91% and 93% in stage 2. Can you re-render one object? They look very similar, but there are small differences in the pixel values at the edges of the objects. Maybe you can try ModelNet10: yours gets 98% and mine gets 93% in stage 1. I don't know why they have such a difference in simple image classification accuracy. Thank you!

jongchyisu commented 5 years ago

I rendered again and the difference is very, very tiny, almost not noticeable. I'm curious what your renderings look like. Maybe the Blender version is different? This zip file includes the original and the new renderings of airplane_0001; could you check again? https://drive.google.com/open?id=1hMt-8juV9I4Ah2szbBrGo8Gw4s2ft0fP Also, the new renderings have alpha channels, but I guess the alpha channel is dropped when loading the data.
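For reference, a minimal sketch of how the alpha channel gets dropped at load time with PIL, assuming a standard `convert("RGB")` pipeline (the file name is hypothetical):

```python
from PIL import Image
import torchvision.transforms as transforms

# Hypothetical path to one of the new RGBA renderings.
img = Image.open("airplane_0001_view_001.png").convert("RGB")  # discards alpha

# Standard resize + tensor conversion before feeding the CNN.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
x = preprocess(img)  # shape: (3, 224, 224), alpha channel gone
```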

weixmath commented 5 years ago

I checked your two sets of rendered images. My rendering is exactly the same as your new rendering, down to every pixel value. Your new and original renderings look very similar, but if you load and compare them in Python, they differ at the edges. Can you use the newly rendered ModelNet10 with 12 views to train stage 1? The original can achieve 98% in 30 epochs.
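For reference, a minimal sketch of this load-and-compare check (file paths are hypothetical):

```python
import numpy as np
from PIL import Image

# Hypothetical paths to the original and re-rendered views of the same model.
ori = np.asarray(Image.open("airplane_0001_ori.png").convert("L"), dtype=np.int16)
new = np.asarray(Image.open("airplane_0001_new.png").convert("L"), dtype=np.int16)

diff = ori - new                          # signed difference, nonzero mostly at edges
print("max |diff|:", np.abs(diff).max())
print("pixels that differ:", int((diff != 0).sum()))

# Save the difference as an image for inspection (shift [-255, 255] into [0, 255]).
Image.fromarray(((diff + 255) // 2).astype(np.uint8)).save("diff.png")
```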

weixmath commented 5 years ago

[image: difference map] This is the picture equal to airplane_0001_ori − airplane_0001_new. It suggests the original image has higher pixel values than the new one. So could you recheck the render settings in the original render script? They actually cause a great difference in the results. According to your paper, the object boundary is vital to the task.

jongchyisu commented 5 years ago

I tried rendering more models, but the difference is really small. On airplane_0001 the object orientation is slightly different, but on the other 5 models we checked there is almost no difference. I don't think this slight difference would cause such a large performance drop, so maybe some images of some models are very different. Could you find an example that is very different? If you can find one, maybe we can figure out the issue with the Blender script. So far the only difference is that we used an older version of Blender when we rendered the images a year ago.
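A quick way to hunt for such an example is to scan both dataset trees and rank matching images by worst-case pixel difference. A minimal sketch (directory names and layout are hypothetical; it assumes identical relative paths in both trees):

```python
import os
import numpy as np
from PIL import Image

ORI_ROOT = "modelnet40_ori"   # hypothetical root of the original renderings
NEW_ROOT = "modelnet40_new"   # hypothetical root of the re-renderings

def load_gray(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.int16)

scores = []
for dirpath, _, files in os.walk(ORI_ROOT):
    for name in files:
        if not name.endswith(".png"):
            continue
        rel = os.path.relpath(os.path.join(dirpath, name), ORI_ROOT)
        twin = os.path.join(NEW_ROOT, rel)
        if not os.path.exists(twin):
            continue
        d = np.abs(load_gray(os.path.join(ORI_ROOT, rel)) - load_gray(twin))
        scores.append((d.max(), d.mean(), rel))

# The renderings with the largest worst-case difference are the suspects.
for max_d, mean_d, rel in sorted(scores, reverse=True)[:10]:
    print(f"max={max_d:3d} mean={mean_d:6.2f} {rel}")
```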

weixmath commented 5 years ago

Thanks for your reply. I checked other models and also found that the differences are really slight. However, the experimental results are totally different. Could you re-run stage 1 using the newly rendered dataset?

weixmath commented 5 years ago

Sorry to bother you, but this is really strange to me. I am very curious why these two similar datasets have such a gap in classification accuracy. Could you try it just on ModelNet10, which may take fewer than 3 hours?

jongchyisu commented 5 years ago

Sorry, I couldn't figure it out at the moment either. Are you still getting the same result?

weixmath commented 5 years ago

Yes, it still shows the large drop. You can check it when you are free.

jongchyisu commented 5 years ago

While I suspect the discrepancy comes from the Blender version (the scene.select function, which fits the object to the camera view, may have changed), other people reported similar accuracy (94.47%) after they modified my Blender script and rendered again: https://arxiv.org/pdf/1904.00993.pdf Specifically, they told me that they "remove the scene.select function, and added a bounding box and change the property and position of 'sun' to make our rendering satisfying the equivariance" for their paper.
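For anyone attempting the same modification, here is a rough sketch of bounding-box normalization in bpy, replacing the version-dependent fit-to-view step. It targets the Blender 2.8+ Python API (`@` matrix multiply) and is only a guess at what those authors did; their 'sun' lamp changes are not reproduced:

```python
import bpy
from mathutils import Vector

def normalize_object(obj):
    """Scale the model so its longest bounding-box side is 1 Blender unit,
    then move the bounding-box center to the world origin. This removes
    the dependence on Blender's version-specific fit-to-view behavior."""
    def world_bbox(o):
        corners = [o.matrix_world @ Vector(c) for c in o.bound_box]
        lo = [min(c[i] for c in corners) for i in range(3)]
        hi = [max(c[i] for c in corners) for i in range(3)]
        return lo, hi

    lo, hi = world_bbox(obj)
    size = max(h - l for h, l in zip(hi, lo))
    obj.scale = obj.scale / size
    bpy.context.view_layer.update()          # refresh matrix_world after scaling

    lo, hi = world_bbox(obj)
    center = Vector([(l + h) / 2.0 for l, h in zip(lo, hi)])
    obj.location = obj.location - center

normalize_object(bpy.context.selected_objects[0])
```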

weixmath commented 5 years ago

Ok, thank you. I will try. Also, there is a small mistake in your code:

`val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=args.batchSize, shuffle=False, num_workers=0)`

Did you forget `test_mode=True`?
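If the fix is what it looks like, it is just a matter of passing `test_mode=True` when building the validation dataset. A minimal sketch, assuming the `SingleImgDataset` class from this repo's tools/ImgDataset.py (argument names are from memory and may differ slightly):

```python
import torch
from tools.ImgDataset import SingleImgDataset

# Build the validation set with test_mode=True so that training-time
# augmentation is disabled during evaluation (the suspected omission).
val_dataset = SingleImgDataset(args.val_path, scale_aug=False, rot_aug=False,
                               test_mode=True)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=args.batchSize,
                                         shuffle=False, num_workers=0)
```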

gfsliumin commented 5 years ago

Hi, I just get a "render_depth.blend" file when I click the "Blender script for rendering depth images" link. Do you have the Python or other scripts for rendering depth images?

fengshi-cherish commented 4 years ago

Can someone tell me how to use these Blender files?

siyuan2018 commented 3 years ago

> Can someone tell me how to use these Blender files?

Hi, did you succeed in using the Blender files? Thanks

richardlzx commented 3 years ago

@weixmath Do you mean that the test overall accuracy can be 94% in stage 1 and 96.1% in stage 2? But in the MVCNN paper the evaluation accuracy isn't that high. I am confused about it.

piseabhijeet commented 3 years ago

> Can someone tell me how to use these Blender files?

Download Blender 2.92 and open the .blend file. Change the default ModelNet paths and modify the code if you are using a custom dataset.

If you are using .stl files, install and enable the STL import plugin, replace the default object-loading function, and it should work (see the sketch below)!
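A minimal sketch of that STL swap, assuming Blender's bundled io_mesh_stl add-on (the file path is hypothetical):

```python
import bpy

# Make sure the bundled STL importer add-on is enabled (Blender 2.8/2.9).
bpy.ops.preferences.addon_enable(module="io_mesh_stl")

# Swap this in for the default object-loading call in the .blend's script.
bpy.ops.import_mesh.stl(filepath="/path/to/model.stl")
mesh_obj = bpy.context.selected_objects[0]   # the freshly imported mesh
```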

zhangdongwei1998 commented 2 years ago

> Can someone tell me how to use these Blender files?
>
> Download Blender 2.92 and open the .blend file. Change the default ModelNet paths and modify the code if you are using a custom dataset.
>
> If you are using .stl files, install and enable the STL import plugin, replace the default object-loading function, and it should work!

I opened the .blend file, but it does not generate PNG pictures. Would you please give me some advice? Thanks.
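One possible cause (an assumption, without seeing the file): opening a .blend only loads the embedded script into the Text Editor; it does not execute it. Run it there (Run Script / Alt+P), or headlessly, e.g. via a sketch like this (the .blend name and the text-block name are placeholders):

```python
import subprocess

# Run Blender headlessly and execute the text block embedded in the .blend.
# "render_script" stands in for whatever the text block is actually named
# (check the Text Editor inside the .blend).
subprocess.run([
    "blender", "--background", "render_shaded.blend",
    "--python-text", "render_script",
], check=True)
```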

zhangdongwei1998 commented 2 years ago

> Ok, thank you. I will try. Also, there is a small mistake in your code: `val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=args.batchSize, shuffle=False, num_workers=0)` Did you forget `test_mode=True`?

I was also confused about it. But I think you are right.

Mona-Alzahrani commented 4 months ago

Does anyone have the shading Blender script working with the new version of Blender (v4.1)?