Open anshkumar opened 5 years ago
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.

- What is the top-level directory of the model you are using
- Have I written custom code
- OS Platform and Distribution
- TensorFlow installed from
- TensorFlow version
- Bazel version
- CUDA/cuDNN version
- GPU model and memory
- Exact command to reproduce
- OS: Debian GNU/Linux 9 (stretch)
- TensorFlow version: 1.13.1
- TensorFlow installed from: pip3
- CUDA version: 10.0
- GPU model: Tesla K80
- GPU memory: 11441 MiB
- Exact command to reproduce:

```shell
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/ssd_inception_v2.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model_directory
```
Hello,
I had this problem. Training seems to work with RGB, while OpenCV opens images in BGR. Adding this before inference solved the problem for me:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
Hope it helps!
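For reference (not from the original thread): the conversion above just swaps the first and third channels, so if OpenCV is not at hand the same effect can be had with a plain NumPy slice. A minimal sketch with a made-up two-pixel image:

```python
import numpy as np

# Dummy 2x2 "BGR" image; cv2.imread() returns arrays shaped (H, W, 3) in BGR order.
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# Reversing the channel axis converts BGR -> RGB, equivalent to
# cv2.cvtColor(image, cv2.COLOR_BGR2RGB) for 3-channel images.
rgb = bgr[..., ::-1]

print(rgb[0, 0])  # the blue-heavy BGR pixel [255, 0, 0] becomes [0, 0, 255] in RGB
```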
Did you find any solution? I'm facing a similar issue.
It turned out the model was expecting RGB and OpenCV grabs the frames from my camera in BGR. Hope this helps.
Any news on this? Having the same issue: eval with model_main_tf2.py works well, but when using exporter_main_tf2.py the resulting saved_model does not perform.
I've trained ssd_mobilenet_v2_coco on my custom dataset, and during training TensorBoard shows detections in the image (screenshot omitted). But when I export the graph and run inference, I'm not detecting anything, even in the same images I used during training. For exporting the graph I'm using this script; the exporter generates the usual set of output files.

When I use frozen_inference_graph.pb for inference I get no detections in the images (even the same images I used during training); the inference code and the scores it produces for a sample image are omitted here. But if I instead load saved_model/saved_model.pb with a different piece of code, I get all the detections, and the scores for the same image look correct.

I don't know why this is happening. Using saved_model/saved_model.pb gives good accuracy, but it has reduced the inference speed. Also, I want to export the model to a TensorRT model, but with saved_model/saved_model.pb and tf.contrib.predictor.from_saved_model() I'm simply calling the predictor function, and I don't know how to export that to an RT graph.
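For context (not from the original post): both the frozen graph and the SavedModel export of the Object Detection API return the same output dictionary (`detection_boxes`, `detection_scores`, `detection_classes`, `num_detections`), so comparing the two paths comes down to looking at the scores. A minimal NumPy sketch of the usual score-threshold post-processing, with made-up values:

```python
import numpy as np

# Hypothetical output dict in the TF Object Detection API layout: one batch,
# scores sorted in descending order, one row per candidate box.
outputs = {
    "detection_scores": np.array([[0.92, 0.87, 0.41, 0.03]]),
    "detection_classes": np.array([[1, 2, 1, 3]]),
    "detection_boxes": np.zeros((1, 4, 4)),  # [ymin, xmin, ymax, xmax], normalized
}

def keep_confident(outputs, threshold=0.5):
    """Return the classes/boxes/scores whose score exceeds `threshold` (batch of 1)."""
    mask = outputs["detection_scores"][0] > threshold
    return (outputs["detection_classes"][0][mask],
            outputs["detection_boxes"][0][mask],
            outputs["detection_scores"][0][mask])

classes, boxes, scores = keep_confident(outputs)
print(classes)  # -> [1 2]: only the two detections above the 0.5 threshold survive
```

If the frozen graph's scores are all below the threshold while the SavedModel's are not, the bug is in the export path, not in this post-processing.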