Open Jenny0932 opened 5 years ago
@Jenny0932 does this mean you were able to successfully run a custom model in VoTT with the steps above?
Yes, it's working. One issue: our class IDs start from 0, so the first object class is identified as unknown.
I managed to do it by
- pip install tensorflowjs==0.8.6
- tensorflowjs_converter --input_format=tf_saved_model --output_json=true --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' --saved_model_tags=serve graph/saved_model web_model
- 0 is reserved for the 'unknown' class
Hello, I wonder how to find out the output_node_names of my model? Thanks
Can anyone help me with creating a custom model for VoTT, please?
'Postprocessor/ExpandDims_1,Postprocessor/Slice' just works for my TensorFlow SSD model.
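For anyone wondering how to find these names for their own model: one approach is to load the GraphDef and list the nodes that no other node consumes, since those are the usual output candidates. This is a sketch; the tiny in-memory graph below stands in for a real frozen_graph.pb, and the node names are placeholders.

```python
# Sketch: list candidate output node names in a TF 1.x-style graph.
# For a real model, load the GraphDef from disk instead (see comment below).
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 3], name="input")
    w = tf.constant([[1.0], [2.0], [3.0]], name="weights")
    y = tf.matmul(x, w, name="detection_scores")

# For a real frozen graph you would do:
#   graph_def = tf.GraphDef()
#   with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
graph_def = graph.as_graph_def()

# Nodes that never appear as an input to another node are the usual outputs.
consumed = {inp.split(":")[0].lstrip("^")
            for node in graph_def.node
            for inp in node.input}
outputs = [node.name for node in graph_def.node if node.name not in consumed]
print(outputs)  # candidate values for --output_node_names
```

Pass the resulting names, comma-separated, to `--output_node_names` in tensorflowjs_converter.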
Just wanted to know how the classes.json is created. What is the 'name' attribute in it?
Same problem here... how do you generate classes.json? I am using a FasterRCNN/InceptionV2 model. I tried to copy classes.json from the cocoSSD model and keep the 13 classes for my model, but I get the error "Error loading active learning model". What is the meaning of the cryptic "name" attribute? To get TF.js I created a small container:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --fix-missing --no-install-recommends \
python3-dev python3-pip \
wget && apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
RUN pip3 install virtualenv setuptools
RUN pip3 install tensorflowjs
# Set working directory
WORKDIR "/root/project"
CMD ["/bin/bash"]
Then built it with
docker build . -t tfjs:test
And then started the container with:
docker run -it --rm -v `pwd`/frozen_graph.pb:/frozen_graph.pb -v `pwd`/web_model:/root/project/web_model tfjs:test
The real conversion is done with:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='detection_boxes,detection_scores,detection_classes,num_detections' /frozen_graph.pb web_model
@HRGiri @andreyhristov My model worked fine without "name" ("name" seems to be tied to the COCO metadata). Below is my classes.json:
[{"id":1,"displayName":"backlight"},{"id":2,"displayName":"wheel"},{"id":3,"displayName":"numberplate"},{"id":4,"displayName":"mirror"},{"id":5,"displayName":"door"},{"id":6,"displayName":"frontpanel"},{"id":7,"displayName":"rearpanel"},{"id":8,"displayName":"frontbumper"},{"id":9,"displayName":"rearbumper"}]
However, be careful that index 1 will be 'unknown'. This may be a hint.
Sorry for the bad English.
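Putting the two observations together (IDs start at 1; the 0/first slot is treated as "unknown"), a classes.json like the one above can be generated with a few lines of Python. This is a sketch; the label names here are placeholders for your own.

```python
# Sketch: generate a VoTT-style classes.json from a list of label names.
# IDs start at 1 because the 0/"first" slot maps to the "unknown" class.
import json

labels = ["backlight", "wheel", "numberplate"]  # replace with your own labels

classes = [{"id": i, "displayName": name}
           for i, name in enumerate(labels, start=1)]

with open("classes.json", "w") as f:
    json.dump(classes, f)

print(classes)
```

Drop the resulting classes.json next to the converted web model files.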
Hi all, I am using YOLOv3 from https://github.com/AntonMu/TrainYourOwnYOLO. Unfortunately, I have no idea how to bring it into a format suitable for active learning in VoTT. Converting the final.h5 file with tfjs 1.5.2 works, but VoTT isn't loading it:
tensorflowjs_converter --input_format=keras trained_weights_final.h5 tfjs_model
What am I doing wrong or what do I have to change to make it work?
Thanks in advance for any help
@Jenny0932, why do you zip the converted model? VoTT does not recognize that format.
Hi all, I also converted my custom model with tfjs 3.1.0 and created the classes.json file manually, but VoTT shows an error while loading the active learning model.
Thanks in advance for any help
Judging by the output model in cocoSSDModel, I suppose we should use pip install tensorflowjs==0.8.6 (and a TensorFlow version < 2) instead of tensorflowjs==3.1, because the output files of tensorflowjs==0.8.6 won't have the ".bin" extension.
I had to apply a patch to use a uint8 quantized model with the Addv2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs
The output layer of this model is the non-max-supression op described here: https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph_lib.py#L66
I had to apply a patch to use a uint8 quantized model with the Addv2 operation. Here's an example of the patch I applied: https://www.bitsy.ai/automate-bounding-box-annotation-with-tensorflow-and-automl/#patch-vott-to-fix-tensorflow-1-x-2-x-bugs Would you mind putting a complete listing of objectDetection.ts on your website or in a Gist instead of an image? Thank you.
Ah, sorry about that! @worldstar I probably broke something in my Ghost theme's code highlighter.
Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7
Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7 The Medium version of this post is better. Thank you.
Here's the medium version: https://towardsdatascience.com/budget-automation-for-bounding-box-annotation-500a76b4deb7 Hi @leigh-johnson, I uploaded a revised objectDetection.ts in the following repository; however, there are some errors when I manually start VoTT with npm. Hence, please consider sharing your own repository. Thank you.
When I tried to load the customized model generated by tensorflowjs, I got the error 'Error loading active learning model'.