microsoft / VoTT

Visual Object Tagging Tool: an Electron app for building end-to-end object detection models from images and videos.
MIT License
4.28k stars 834 forks

Active learning is not working for custom (File path) #846

Open Jenny0932 opened 5 years ago

Jenny0932 commented 5 years ago

When I tried to load a custom model converted with tensorflowjs, I got the error 'Error loading active learning model'.

Steps to reproduce the behavior:

  1. Train a TensorFlow SSD model
  2. Convert the saved_model to TensorFlow.js format using tensorflowjs_converter
  3. Create the classes.json file in the same format as the COCO SSD model's
  4. Change the model path setting to my custom path
  5. Click the active learning button
  6. See the error


Jenny0932 commented 5 years ago

I managed to do it by:

  1. Extract the model.tar.gz file to a model_extracted folder, then cd into that folder
  2. pip install tensorflowjs==0.8.6
  3. tensorflowjs_converter --input_format=tf_saved_model --output_json=true --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' --saved_model_tags=serve graph/saved_model web_model
  4. Create the classes.json file to map each class ID to a class name
  5. zip -r web_model web_model; you can then use the web_model folder in the VoTT labelling tool or another web application
  6. Note: I found class ID 0 is reserved for the 'unknown' class
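The classes.json from step 4 can be generated with a short script. A minimal sketch (the label names here are hypothetical placeholders; the id/displayName fields follow the example classes.json shown later in this thread, with IDs starting at 1 since ID 0 appears to be reserved):

```python
import json

# Hypothetical label list for illustration -- replace with your own classes.
# IDs start at 1 because, as noted above, class ID 0 appears to be reserved
# for the 'unknown' class.
labels = ["car", "person", "bicycle"]

# Build one {"id": ..., "displayName": ...} entry per label.
classes = [{"id": i + 1, "displayName": name} for i, name in enumerate(labels)]

# Write classes.json next to the converted model files.
with open("classes.json", "w") as f:
    json.dump(classes, f)

print(classes[0])  # {'id': 1, 'displayName': 'car'}
```

Drop the resulting classes.json into the web_model folder alongside the converter output.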
MichaelCVelez commented 5 years ago

@Jenny0932 does this mean you were able to successfully run a custom model in VoTT with the steps above?

Jenny0932 commented 5 years ago

Yes, it's working. One issue is that our class IDs start from 0, so the first object class is identified as 'unknown'.

lyuweiwang commented 4 years ago

I managed to do it by

  1. pip install tensorflowjs==0.8.6
  2. tensorflowjs_converter --input_format=tf_saved_model --output_json=true --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' --saved_model_tags=serve graph/saved_model web_model
  3. Class ID 0 is reserved for the 'unknown' class

Hello, I wonder how to find the output_node_names of my model? Thanks.

naveenchary121 commented 4 years ago

Can anyone help me in creating a custom model for VoTT, please?

Jenny0932 commented 4 years ago

'Postprocessor/ExpandDims_1,Postprocessor/Slice' just works for my SSD TensorFlow model.
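For other architectures the output node names have to be read off the graph itself (for example with TensorFlow's graph inspection tooling). Once you have the full list of node names, picking out the post-processing outputs is straightforward; a sketch with a hypothetical, made-up node list modeled on an SSD graph:

```python
# Hypothetical node names for illustration -- in practice, obtain this list
# from your own graph using TensorFlow's inspection tools.
node_names = [
    "Preprocessor/sub",
    "FeatureExtractor/MobilenetV2/Conv/Relu6",
    "Postprocessor/ExpandDims_1",
    "Postprocessor/Slice",
]

# SSD export graphs keep their outputs under the Postprocessor/ scope.
candidates = [n for n in node_names if n.startswith("Postprocessor/")]

# Joined form matches what tensorflowjs_converter expects for
# --output_node_names.
print(",".join(candidates))  # Postprocessor/ExpandDims_1,Postprocessor/Slice
```

The filter prefix is specific to SSD-style export graphs; other model families name their outputs differently.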

HRGiri commented 4 years ago

Just wanted to know how classes.json is created. What is the 'name' attribute in it?

andreyhristov commented 4 years ago

Same problem here: how do you generate classes.json? I am using a Faster R-CNN/Inception V2 model. I tried copying classes.json from the COCO SSD model and keeping the 13 classes for my model, but I get the error "Error loading active learning model". What is the meaning of the cryptic "name" attribute? To get TF.js I created a small container:

FROM ubuntu:18.04

RUN apt-get update && apt-get install -y --fix-missing --no-install-recommends \
        python3-dev python3-pip \
        wget && apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN pip3 install --upgrade pip
RUN pip3 install virtualenv setuptools
RUN pip3 install tensorflowjs
# Set working directory
WORKDIR "/root/project"
CMD ["/bin/bash"]

Then built it with

docker build . -t tfjs:test

And then started the container with:

docker run -it --rm -v `pwd`/frozen_graph.pb:/frozen_graph.pb -v `pwd`/web_model:/root/project/web_model tfjs:test

The real conversion is done with:

tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='detection_boxes,detection_scores,detection_classes,num_detections' /frozen_graph.pb web_model

tomp11 commented 4 years ago

@HRGiri @andreyhristov My model worked fine without "name" (the "name" attribute seems to be tied to COCO metadata). Below is my classes.json.

[{"id":1,"displayName":"backlight"},{"id":2,"displayName":"wheel"},{"id":3,"displayName":"numberplate"},{"id":4,"displayName":"mirror"},{"id":5,"displayName":"door"},{"id":6,"displayName":"frontpanel"},{"id":7,"displayName":"rearpanel"},{"id":8,"displayName":"frontbumper"},{"id":9,"displayName":"rearbumper"}]

However, be careful that index 1 will be 'unknown'. This may be a hint.

Sorry for the bad English.
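The index shift mentioned above can be handled when mapping detector output back to labels. A minimal sketch using the first two entries of the classes.json above (the lookup falls back to 'unknown' for any ID not present, such as the reserved ID 0):

```python
import json

# First two entries from the classes.json shown above.
classes_json = '[{"id":1,"displayName":"backlight"},{"id":2,"displayName":"wheel"}]'
id_to_name = {c["id"]: c["displayName"] for c in json.loads(classes_json)}

def label_for(class_id):
    # IDs missing from classes.json (e.g. the reserved 0) map to 'unknown'.
    return id_to_name.get(class_id, "unknown")

print(label_for(1))  # backlight
print(label_for(0))  # unknown
```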

JoergPeisker commented 4 years ago

Hi all, I am using YOLOv3 from https://github.com/AntonMu/TrainYourOwnYOLO. Unfortunately I have no idea how to convert it into a format suitable for active learning in VoTT. Can somebody help me? Converting the final .h5 file with tfjs 1.5.2 works, but VoTT isn't loading it.

tensorflowjs_converter --input_format=keras trained_weights_final.h5 tfjs_model

What am I doing wrong or what do I have to change to make it work?

Thanks in advance for any help

kadijahassanali commented 3 years ago

@Jenny0932, why do you zip the converted model? VoTT does not recognize that format.

jjmattathil commented 3 years ago

Hi all, I also converted a custom model with tfjs 3.1.0 and created the classes.json file manually, but VoTT shows an error while loading the active learning model.

Thanks in advance for any help

worldstar commented 3 years ago

Judging by the output model in cocoSSDModel, I suppose we should use pip install tensorflowjs==0.8.6 (with tensorflow < 2) instead of tensorflowjs==3.1, because the output files of tensorflowjs==0.8.6 don't carry the ".bin" extension.
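One quick way to tell which converter produced a model directory is the file layout: the 0.8.x converter emits tensorflowjs_model.pb plus weights_manifest.json with plain shard files, while 1.x+ emits model.json plus .bin shards. A heuristic sketch (the sample file lists are illustrative, and "VoTT-compatible" reflects the supposition above, not a guarantee):

```python
def converter_flavor(filenames):
    # Heuristic based on the output layouts described above; illustrative only.
    if "tensorflowjs_model.pb" in filenames and "weights_manifest.json" in filenames:
        return "tensorflowjs 0.8.x (likely VoTT-compatible)"
    if "model.json" in filenames and any(f.endswith(".bin") for f in filenames):
        return "tensorflowjs 1.x+ (may not load in VoTT)"
    return "unknown layout"

print(converter_flavor(["tensorflowjs_model.pb", "weights_manifest.json", "group1-shard1of1"]))
print(converter_flavor(["model.json", "group1-shard1of1.bin"]))
```

In practice you would pass `os.listdir(model_dir)` as the argument.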

leigh-johnson commented 3 years ago

I had to apply a patch to use a uint8 quantized model with the AddV2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs

The output layer of this model is the non-max-suppression op described here: https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph_lib.py#L66

worldstar commented 3 years ago

I had to apply a patch to use a uint8 quantized model with the AddV2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs

Do you mind putting the complete objectDetection.ts on your website or in a Gist instead of an image? Thank you.

leigh-johnson commented 3 years ago

I had to apply a patch to use a uint8 quantized model with the AddV2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs Do you mind putting the complete objectDetection.ts on your website or in a Gist instead of an image? Thank you.

Ah, sorry about that! @worldstar I probably broke something in my Ghost theme's code highlighter.

Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7

worldstar commented 3 years ago

Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7

The Medium version of this post is better. Thank you.

worldstar commented 3 years ago

I had to apply a patch to use a uint8 quantized model with the AddV2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs Do you mind putting the complete objectDetection.ts on your website or in a Gist instead of an image? Thank you.

Ah, sorry about that! @worldstar I probably broke something in my Ghost theme's code highlighter.

Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7

Hi @leigh-johnson, I uploaded a revised objectDetection.ts in the repository below; however, there are some errors when I manually start VoTT with npm. Please consider sharing your own repository. Thank you.

https://github.com/worldstar/VoTT