I wonder if the blank space inside could be a 3D model?
It may be due to the cropping in line 222 of modelSVR.py
self.data_pixels = np.reshape(data_dict['pixels'][:,:,offset_y:offset_y+self.crop_size, offset_x:offset_x+self.crop_size], [-1,self.view_num,1,self.crop_size,self.crop_size])
Basically, this part center-crops the 137^2 input image into a 128^2 image. It was originally designed for random-cropping as a data augmentation process, but was later removed. Now it is just center-cropping.
Please do the same thing for your input image, if you are using rendered views from 3D-R2N2.
Also, please make sure your input image has white background. You need to handle the alpha channel carefully.
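For reference, a minimal sketch of that center-crop on a single view, assuming a 137x137 3D-R2N2 rendering with a white background and a crop size of 128 (the file name is a placeholder, this is not repo code):

import cv2
import numpy as np

crop_size = 128
# placeholder path; assumes the background is already white (see the alpha note above)
img = cv2.imread("view_00.png", cv2.IMREAD_GRAYSCALE)
offset = (img.shape[0] - crop_size) // 2   # (137 - 128) // 2 = 4
img = img[offset:offset + crop_size, offset:offset + crop_size]
batch_view = np.reshape(img.astype(np.float32) / 255.0, [1, 1, crop_size, crop_size])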
Thank you. I have read the test_image code; it contains the suggestions you mentioned. However, it does not work.
Aha, I want to use your code as a refiner for small objects in an indoor reconstruction project. I tried to modify the test_image code. Can you give me some advice?
test_image does not center-crop the image. You can view a few example input images of the training data from the provided hdf5 file. If your images are similar to those training images, then the model should work. Anyway, you could always re-train the model with your own data to make it work, and perform data augmentation to make it robust.
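For the augmentation part, a minimal sketch of the random-crop idea described above, with placeholder data (not the actual training code):

import numpy as np

crop_size = 128
img = np.zeros((137, 137), dtype=np.uint8)   # placeholder for a loaded 137x137 view
max_off_y = img.shape[0] - crop_size
max_off_x = img.shape[1] - crop_size
offset_y = np.random.randint(0, max_off_y + 1)   # random window instead of the centered one
offset_x = np.random.randint(0, max_off_x + 1)
aug = img[offset_y:offset_y + crop_size, offset_x:offset_x + crop_size]   # 128x128 crop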
I'm sorry for taking so long to reply, but I still can't understand what you mean. I found a center-crop in the test_image code, and I have rendered views (137*137) from 3D-R2N2. May I ask what I need to do to reconstruct them?
Please read the code carefully. There is no center-crop in test_image. You need to add center-crop to the code if your views are from 3D-R2N2.
imgo_ = cv2.imread(img_add, cv2.IMREAD_GRAYSCALE)
imgo_ = imgo_[4:133, 4:133]
batch_view_ = cv2.resize(imgo_, (self.crop_size,self.crop_size)).astype(np.float32)/255.0
batch_view_ = np.reshape(batch_view_, [1,1,self.crop_size,self.crop_size])
I've modified the code, but it's still not good, and I'm using rendered views (137*137) from 3D-R2N2.
It seems you forgot to handle the alpha channel. Please use the code below.
img = cv2.imread(img_add, cv2.IMREAD_UNCHANGED)   # keep the alpha channel
imgo = img[:,:,:3]                                # color channels (BGR)
imgo = cv2.cvtColor(imgo, cv2.COLOR_BGR2GRAY)
imga = (img[:,:,3])/255.0                         # alpha in [0,1]
img = imgo*imga + 255*(1-imga)                    # composite onto a white background
img = np.round(img).astype(np.uint8)
offset_x = int(self.crop_edge/2)
offset_y = int(self.crop_edge/2)
img = img[offset_y:offset_y+self.crop_size, offset_x:offset_x+self.crop_size]   # center-crop 137 -> 128
batch_view_ = cv2.resize(img, (self.crop_size,self.crop_size)).astype(np.float32)/255.0
batch_view_ = np.reshape(batch_view_, [1,1,self.crop_size,self.crop_size])
It works, thank you very much!!!!!!!!
Hi, I have followed your advice; the rendered views I used are from 3D-R2N2.
I was wondering if you used the ShapeNet rendered images (http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz) directly; my results are not so good.