Eddylib closed this issue 5 years ago
Actually, can't the server in cerebro/scripts convert the input dimensions? Or does that not work?
It doesn't seem to work. I see the assertion `assert (cv_image.shape[0] == self.im_rows and cv_image.shape[1] == self.im_cols and cv_image.shape[2] == self.im_chnls)` before images are fed into the network, and got this output:
```
Connection to ros-service /whole_image_descriptor_compute established
[ERROR] [1565346270.317093]: Error processing request:
[whole_image_descriptor_compute_server] Input shape of the image does not match with the allocated GPU memory. Expecting an input image of size 480x752x3, but received : (480, 752, 1)
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 629, in _handle_request
    response = convert_return_to_response(self.handler(request), self.response_class)
  File "/home/libaoyu/Desktop/vins_kidnap_ws/src/cerebro/scripts/whole_image_desc_compute_server.py", line 452, in handle_req
    size %dx%dx%d, but received : %s" %(self.im_rows, self.im_cols, self.im_chnls, str(cv_image.shape) )
AssertionError:
[whole_image_descriptor_compute_server] Input shape of the image does not match with the allocated GPU memory. Expecting an input image of size 480x752x3, but received : (480, 752, 1)
[ERROR] [1565346270.317931070]: Service call failed: service [/whole_image_descriptor_compute] responded with an error: error processing request:
[whole_image_descriptor_compute_server] Input shape of the image does not match with the allocated GPU memory. Expecting an input image of size 480x752x3, but received : (480, 752, 1)
```
I recommend you use a model with 1 input channel. In the link above I have open-sourced several models; try picking one with 1 channel.
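For reference, if you do want to keep a 3-channel model and feed it grayscale images, a simple workaround (my own sketch, not something cerebro does for you) is to replicate the single channel three times before calling the service:

```python
import numpy as np

def gray_to_3ch(img):
    """Replicate a single-channel image into 3 identical channels.

    Accepts (H, W) or (H, W, 1) arrays and returns (H, W, 3).
    Leaves images that already have 3 channels untouched.
    """
    if img.ndim == 2:
        # Add an explicit channel axis first: (H, W) -> (H, W, 1)
        img = img[:, :, np.newaxis]
    if img.shape[2] == 1:
        # (H, W, 1) -> (H, W, 3) by repeating along the channel axis
        img = np.repeat(img, 3, axis=2)
    return img
```

Note this only fixes the shape mismatch; the descriptor quality on replicated-gray input depends on what the model was trained on.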
OK, I'll try. Is it correct to resize the image when its size doesn't match the model's input?
Better to resize the model. Since it is a fully convolutional network, we only need to change the first (input) layer to resize it. This is precisely what I do. If you want to compute the descriptor in a standalone way, look at this function: https://github.com/mpkuse/cerebro/blob/1f02343865a94d04a8e4b3625f17c6d45b5275de/scripts/predict_utils.py#L165 I use it to resize the model; feel free to copy it into your script.
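For anyone reading later, the general idea of resizing a fully convolutional Keras model can be sketched roughly like this (illustrative code, not the exact implementation in predict_utils.py; the function name and the config-key handling are my assumptions):

```python
from tensorflow import keras

def change_input_shape(model, new_shape):
    """Rebuild a fully convolutional Keras model with a new input shape.

    Works because convolutional kernel weights do not depend on the
    spatial dimensions of the input, only on the channel counts.
    """
    cfg = model.get_config()
    layer_cfg = cfg['layers'][0]['config']
    # The input-layer config key differs between Keras versions.
    key = 'batch_input_shape' if 'batch_input_shape' in layer_cfg else 'batch_shape'
    layer_cfg[key] = (None,) + tuple(new_shape)
    new_model = keras.Model.from_config(cfg)
    # Copy the trained weights over unchanged.
    new_model.set_weights(model.get_weights())
    return new_model
```

Note that the channel count in `new_shape` must still match the original model, since the first convolution's kernel shape depends on it; only the spatial dimensions can change freely.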
OK, got it, thanks a lot!
Hi, I found that the provided models take input images of fixed size 480x752x3, 240x320x3, or 240x320x1, but the datasets (EuRoC, mynt_bags) associated with this project contain images of size 480x752x1, so I failed to run the whole_image_descriptor_compute node. Could you please help me fix this?