Closed: AliButtar closed this issue 4 years ago
I have tried this in Colab with TF version 2.3 and was able to reproduce the issue. Please find the gist here. Thanks!
I get the same error. Did you solve it?
I solved it. It's because the 'image_shape' and 'num_proposals' nodes are parameters that need to be fed, so don't use them; convert the image to a tensor instead.
Can you go into a little more detail and perhaps share the solution? It would help a lot. Thanks.
Change "run_inference_for_single_image" function `def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis, ...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batches tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
need_detection_key = ['detection_classes','detection_boxes','detection_masks','detection_scores']
output_dict = {key: output_dict[key][0, :num_detections].numpy()
for key in need_detection_key}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
tf.convert_to_tensor(output_dict['detection_masks']), output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
`
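For reference, a minimal usage sketch (not from the original reply): it assumes the tutorial's imports, that `model` is the Mask R-CNN SavedModel already loaded with `tf.saved_model.load`, and a placeholder image path.

```python
# Hypothetical usage of run_inference_for_single_image (image path is a placeholder).
import numpy as np
from PIL import Image

image_np = np.array(Image.open('test_images/image1.jpg'))  # any RGB test image
output_dict = run_inference_for_single_image(model, image_np)

print('detections:', output_dict['num_detections'])
if 'detection_masks_reframed' in output_dict:
    print('mask tensor shape:', output_dict['detection_masks_reframed'].shape)
```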
Thank you so much, it worked!
@AliButtar
Please close this thread if your issue was resolved. Thanks!
Hey, I used the code you shared with us, but I ran into new errors:
input has only 1 dims for '{{node strided_slice_8}}
Have a look at the issue I mentioned.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
1. The entire URL of the file you are using
The notebook: Object Detection Tutorial Notebook
The model used: http://download.tensorflow.org/models/object_detection/tf2/20200711/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.tar.gz
The Colab notebook that uses the same tutorial notebook with this model: Colab Object Detection Tutorial Copy
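For context, a minimal sketch (not part of the original report) of downloading the model linked above and loading it as a SavedModel; the extracted directory name is derived from the tarball name and should be treated as an assumption.

```python
# Sketch: download and load the Mask R-CNN SavedModel (paths assumed from the tarball name).
import tarfile
import urllib.request
import tensorflow as tf

MODEL_URL = ('http://download.tensorflow.org/models/object_detection/tf2/20200711/'
             'mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.tar.gz')
archive_path, _ = urllib.request.urlretrieve(MODEL_URL)
with tarfile.open(archive_path) as tar:
    tar.extractall('.')

model = tf.saved_model.load(
    'mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/saved_model')
```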
2. Describe the bug
Error name: InvalidArgumentError
Error description: Index out of range using input dim 1; input has only 1 dims [Op:StridedSlice] name: strided_slice/
The code in the Object Detection Tutorial Notebook runs inference successfully with object detection models but fails with the Mask R-CNN Inception model and raises the above error.
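As a hypothetical minimal repro of the failure mode (not from the original report, and the exact cause is an assumption): the error message matches what happens when a 1-D tensor is indexed with two dimensions, which is what per-key batch slicing of the model outputs ends up doing on this model.

```python
import tensorflow as tf

# Slicing a 1-D tensor with two indices reproduces the reported error message.
one_dim = tf.constant([1024.0, 1024.0, 3.0])  # stand-in for a 1-D model output
try:
    one_dim[0, :10]
except tf.errors.InvalidArgumentError as e:
    print(e)  # Index out of range using input dim 1; input has only 1 dims [Op:StridedSlice]
```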
3. Steps to reproduce
Run the following Colab notebook. It is the same as the Object Detection Tutorial but uses the Mask R-CNN Inception ResNet v2 1024x1024 model instead.
Colab Object Detection Tutorial Copy
4. Expected behavior
Segmentation on the test images, in the same way object detection is shown on the test images in the notebook.
5. Additional context
The notebook crashes on the default segmentation model used in the Object Detection Tutorial Notebook, as well as on models that have been further trained to segment or detect other objects.
6. System information
All the work was done on Colab with GPU enabled.