Closed: alien35 closed this issue 4 years ago
@alien35 Can you please post how the model was converted, etc? On the issue just referenced #2254, I see NonMaxSuppression come across, but not NonMaxSuppressionV5 so some extra details would help (along with model or at least model.json) - otherwise it's hard to reproduce.
@alien35 Can you please add enough details to reproduce the issue?
@alien35 can you also specify all library versions you are using, including TF and TFJS. thanks.
Hi, I also used MobileNetV3-small, trained it with quantization, and converted it to TensorFlow.js with the TensorFlow.js converter. When calling the model in tfjs, I get this error:
```
Uncaught (in promise) TypeError: Unknown op 'NonMaxSuppressionV4'. File an issue at https://github.com/tensorflow/tfjs/issues so we can add it, or register a custom execution with tf.registerOp()
    at operation_executor.ts:95
    at mb (operation_executor.ts:52)
    at p (graph_executor.ts:362)
    at t.processStack (graph_executor.ts:348)
    at t.
```
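As the message suggests, the graph executor throws this when no executor is registered under the op's name, and `tf.registerOp()` is the hook for supplying one. The snippet below sketches that registry pattern in plain JavaScript; it is a simplified illustration of the mechanism, not the actual tfjs internals, and the `registerOp`/`executeOp` names here are hypothetical:

```javascript
// Minimal sketch of the op-registry pattern behind the "Unknown op" error.
// Illustration only, not the real tfjs implementation.
const opRegistry = new Map();

function registerOp(name, executor) {
  opRegistry.set(name, executor);
}

function executeOp(name, inputs) {
  const executor = opRegistry.get(name);
  if (!executor) {
    // This is the situation the graph executor hits with NonMaxSuppressionV4/V5.
    throw new TypeError(`Unknown op '${name}'. Register a custom execution.`);
  }
  return executor(inputs);
}

// Registering a stand-in executor makes the lookup succeed. In real tfjs code,
// the executor would delegate to something like tf.image.nonMaxSuppression.
registerOp('NonMaxSuppressionV4', (inputs) => inputs);

console.log(executeOp('NonMaxSuppressionV4', ['boxes', 'scores']));
```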
The object detection model is trained with quantization, and I have converted it from the checkpoints with the commands:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path
tensorflowjs_converter saved_model/
I am using tensorflow 1.15.0
Same issue for me, with the same conversion steps. Let me know.
You can skip the op checks and replace the post-processing NMS by converting like this:
```
tensorflowjs_converter \
  --input_format=tf_frozen_model \
  --output_format=tfjs_graph_model \
  --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' \
  --skip_op_check \
  ./frozen_inference_graph.pb \
  ./web_model
```
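With those two output nodes, the converted graph returns raw boxes ('Postprocessor/ExpandDims_1') and per-class scores ('Postprocessor/Slice'), so before running NMS you need each box's best score and class. A plain-JavaScript sketch, assuming the scores have been downloaded into a flat [numBoxes * numClasses] array (function and variable names are illustrative):

```javascript
// Sketch: deriving per-box max scores and class indices from the raw
// [numBoxes, numClasses] score output, prior to running NMS. Illustrative only.
function calculateMaxScores(scores, numBoxes, numClasses) {
  const maxScores = [];
  const classes = [];
  for (let i = 0; i < numBoxes; i++) {
    let max = Number.MIN_VALUE;
    let index = -1;
    for (let j = 0; j < numClasses; j++) {
      const score = scores[i * numClasses + j];
      if (score > max) {
        max = score;
        index = j;
      }
    }
    maxScores.push(max);
    classes.push(index);
  }
  return [maxScores, classes];
}

// Example: 2 boxes, 3 classes.
const [maxScores, classes] = calculateMaxScores([0.1, 0.9, 0.2, 0.3, 0.1, 0.8], 2, 3);
console.log(maxScores, classes); // [0.9, 0.8] [1, 2]
```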
Then I can use the CPU version of NMS in the tfjs code, something like:

```javascript
tf.image.nonMaxSuppression(boxes2, maxScores, maxNumBoxes, 0.5, 0.5);
```
Tested against tensorflow 1.15.0; works fine so far.
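For reference, the greedy algorithm that `tf.image.nonMaxSuppression` performs can be sketched in plain JavaScript, with boxes as `[y1, x1, y2, x2]` arrays (a simplified illustration of the algorithm, not the tfjs kernel itself):

```javascript
// Intersection-over-union of two boxes given as [y1, x1, y2, x2].
function iou(a, b) {
  const y1 = Math.max(a[0], b[0]);
  const x1 = Math.max(a[1], b[1]);
  const y2 = Math.min(a[2], b[2]);
  const x2 = Math.min(a[3], b[3]);
  const inter = Math.max(0, y2 - y1) * Math.max(0, x2 - x1);
  const areaA = (a[2] - a[0]) * (a[3] - a[1]);
  const areaB = (b[2] - b[0]) * (b[3] - b[1]);
  return inter / (areaA + areaB - inter);
}

// Greedy non-max suppression: returns indices of the kept boxes.
function nonMaxSuppression(boxes, scores, maxOutputSize, iouThreshold, scoreThreshold) {
  // Candidate indices sorted by descending score, low-score boxes dropped.
  const order = scores
    .map((s, i) => i)
    .filter(i => scores[i] > scoreThreshold)
    .sort((a, b) => scores[b] - scores[a]);
  const selected = [];
  for (const i of order) {
    if (selected.length >= maxOutputSize) break;
    // Keep box i only if it does not overlap any already-selected box too much.
    if (selected.every(j => iou(boxes[i], boxes[j]) <= iouThreshold)) {
      selected.push(i);
    }
  }
  return selected;
}

const boxes = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, 2, 1, 3]];
const scores = [0.9, 0.8, 0.7];
// Box 1 heavily overlaps box 0, so it is suppressed.
console.log(nonMaxSuppression(boxes, scores, 10, 0.5, 0.5)); // [0, 2]
```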
I tried:
TensorFlow 1.15, TensorFlow.js 1.4
I created a custom object detection model from "ssdlite_mobilenet_v2_coco_2018_05_09". I exported it with:
```
python object_detection/export_inference_graph.py \
  --add_postprocessing_op true \
  --input_type image_tensor \
  --pipeline_config_path output/ai/annotations/ssdlite_mobilenet_v2_coco_2018_05_09.config \
  --trained_checkpoint_prefix output/ai/training/model.ckpt-$1 \
  --output_directory output/ai/exported-model
```
and tested it in Python; it works. After that I converted the model to tfjs with:
```
tensorflowjs_converter \
  --input_format=tf_saved_model \
  --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' \
  --saved_model_tags=serve \
  --skip_op_check \
  --output_format=tfjs_graph_model \
  output/ai/exported-model/saved_model \
  output/ai/exported-model-web
```
and I get the same issue:
UnhandledPromiseRejectionWarning: TypeError: Unknown op 'NonMaxSuppressionV5'.
The code I use in Node.js is:
```javascript
const model = await cocossd.load({modelUrl: 'file://output/ai/exported-model-web/model.json'})
const predictions = await model.classify(input)
```
*input is a tensor4d created from an image.
@federicolucca try converting from tf_frozen_model: https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd#technical-details-for-advanced-users
Hi @federicolucca and @alien35, sorry for the issue you are experiencing. This happens because the op 'NonMaxSuppressionV5' was not supported in previous versions. We just added support in 1.5.1, see details here. Please update your tensorflowjs version to 1.5.1 and try again; this should solve the problem.
@lina128 Don't worry, I found it and tried to resolve it myself with a custom version of 1.4.0. @McDo I will try the frozen model with 1.5.1 ASAP. Thanks!
@lina128 I'm hitting @shnamin's exact issue / error with NonMaxSuppressionV4.
The model was converted with the following on tfjs 1.7.4 (I have also tried converting the saved model):
```
tensorflowjs_converter \
  --input_format=tf_frozen_model \
  --output_node_names='detection_boxes,detection_classes,detection_multiclass_scores,detection_scores,num_detections,raw_detection_boxes,raw_detection_scores' \
  --saved_model_tags=serve \
  --skip_op_check \
  --output_format=tfjs_graph_model \
  gs://path/to/input/frozen_inference_graph.pb \
  gs://path/to/output/tfjs_model
```
On the off chance this isn't a conversion issue but an inference-related one: I'm using the React Native API and the camera-with-tensors module to run inference.
Any ideas?
Hi @Agiledom, we can add V4 support. You can track the work here: https://github.com/tensorflow/tfjs/issues/2450. We can target next week's release.
Hey @lina128, that would be awesome! Thank you so much! I will track #2450.
To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version
I'm using
Browser version
Chrome 78
Describe the problem or feature request
I'm trying to run an object detection model created in tensorflow, and I am getting this error:
Code to reproduce the bug / link to feature request