brianantonelli opened this issue 7 years ago
@brianantonelli hmmm, that seems weird. My only question is which YOLO version you're using. The smaller one is tiny-yolo v1, I believe, and it seems you are using the 2nd version.
@brianantonelli did you resolve this problem? I'm facing the same issue.
@KleinYuan could you give me a step-by-step guide to making a .pb file for use in this repo? I'm using darkflow as you suggested. I've spent a couple of days on it but still can't get it to work. Thanks in advance!
@macro-dadt Sure, I'm happy to do that. Before that, could you elaborate on which specific step you're stuck at? Darkflow should be very straightforward; the only trick is to make sure you use the correct version of tiny-yolo (v1).
Thank you for your quick reply. This is what I did:
```bash
flow --model cfg/v1/yolo-tiny2c.cfg --load bin/tiny-yolo-voc.weights --train --annotation train/v1/Annotations --dataset train/v1/Images
```
```bash
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/Users/macro/yolo-tiny2c.pb \
  --input_checkpoint=/Users/macro/yolo-tiny2c.pb-1000 \
  --output_node_names=output \
  --input_binary \
  --output_graph=graph.pb
```
and I got this error:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 51: invalid start byte
```
I'm using TensorFlow 1.3.0 and Python 3.6.
Did I do something wrong? I'm very new to this area, so please help me! Thanks a lot in advance!
@macro-dadt This error usually occurs when you freeze a graph with a different version of TensorFlow than the one that produced it. Have you tried other versions? Also, the TensorFlow you built under bazel-bin may not be the same as your default Python's TensorFlow, which ran the previous steps. It seems like a darkflow issue. However, if I were you, I would just hack the function in darkflow that saves the model so that it retrieves the tensors you need. It's just TensorFlow, right? You can do it easily, buddy. Example: https://github.com/KleinYuan/cnn/blob/master/tools/graph_freezer.py#L20
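Something along these lines would do it (a minimal TF 1.x sketch; the checkpoint path is taken from your command above, and the output node name "output" is an assumption, so substitute whatever your graph actually uses):

```python
# Minimal TF 1.x graph freezer, assuming a standard checkpoint with a .meta
# file next to it and an output node actually named "output".
import tensorflow as tf
from tensorflow.python.framework import graph_util

def freeze(checkpoint_path, output_graph_path, output_node_names):
    saver = tf.train.import_meta_graph(checkpoint_path + ".meta")
    with tf.Session() as sess:
        saver.restore(sess, checkpoint_path)
        # Bake the variables into constants so the graph is self-contained.
        frozen = graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile(output_graph_path, "wb") as f:
        f.write(frozen.SerializeToString())

freeze("/Users/macro/yolo-tiny2c.pb-1000", "graph.pb", ["output"])
```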
I used the commands listed by @brianantonelli to convert my YOLO v1 model (.cfg and .weights) into a frozen graph and its meta file:
```bash
flow --model tiny-yolo-voc.cfg --load tiny-yolo-voc.weights --savepb --gpu 0.9
```
I have 2 questions:
```
cv2.error: OpenCV(3.4.2) /io/opencv/modules/dnn/src/dnn.cpp:401: error: (-2:Unspecified error) Can't create layer "truediv" of type "RealDiv" in function 'getLayerInstance'
```
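For context, that error comes up when loading the frozen graph with OpenCV's dnn module, roughly like this (the .pb path is illustrative):

```python
import cv2

# OpenCV's TensorFlow importer parses the frozen GraphDef at load time,
# which is where the unsupported RealDiv ("truediv") layer trips it up.
net = cv2.dnn.readNetFromTensorflow("built_graph/tiny-yolo-voc.pb")
```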
If anybody has faced the above 2 problems, or knows a way to solve them, I would really appreciate the help.
First off, thank you for putting this repo together!
In preparation for training my own model on YOLO and converting it to TF, I wanted to prove out the pattern by taking the TinyYOLO VOC weights/config and recreating the frozen, memmapped graph that you provide in this repo. After going through the Darkflow docs as well as TensorFlow for Mobile Poets, I came up with the following process for converting the graph. The graph loads and runs on my iPhone; however, it seems to just randomly identify non-objects (typically 20-30 per second), which inevitably runs out of memory and crashes the application. I'll provide the steps I took for converting the graph below. Could you please share how you created your graph, or provide some feedback?
BTW, the first big red flag I see: when I freeze the TinyYOLO VOC weights using Darkflow, my graph is 61MB, but yours is 180MB?!
Here's my process:
Grab TinyYOLO VOC weights and config (I've used PJ's config as well as yours; there are a few differences):
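A sketch of this step, assuming pjreddie's hosted files (the exact URLs may have moved since this issue was filed):

```bash
# Assumed download locations for the TinyYOLO VOC weights and config.
wget https://pjreddie.com/media/files/tiny-yolo-voc.weights -P bin/
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/tiny-yolo-voc.cfg -P cfg/
```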
Next I validate that the weights/configs work before freezing:
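That validation step was presumably darkflow's image-directory mode, something like:

```bash
# --imgdir runs detection on every image in the folder and writes
# annotated copies to sample_img/out/ for eyeballing.
flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --imgdir sample_img/
```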
Next I freeze the graph and validate it still identifies the objects (everything works great):
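The freeze step in darkflow, plus a re-run from the frozen graph to validate it (the flag names here are darkflow's own):

```bash
# --savepb writes built_graph/tiny-yolo-voc.pb and tiny-yolo-voc.meta.
flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --savepb

# Re-run detection from the .pb/.meta pair instead of cfg/weights.
flow --pbLoad built_graph/tiny-yolo-voc.pb --metaLoad built_graph/tiny-yolo-voc.meta --imgdir sample_img/
```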
Then I optimize the graph for inference and test again (still works):
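Presumably the optimize_for_inference tool from the Mobile Poets article; "input" and "output" are darkflow's default node names, but they're worth double-checking against your own graph:

```bash
bazel-bin/tensorflow/python/tools/optimize_for_inference \
  --input=built_graph/tiny-yolo-voc.pb \
  --output=optimized.pb \
  --input_names=input \
  --output_names=output
```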
Then I quantize and round the graph and validate (still works, but some accuracy drops as expected):
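Likely the graph_transforms tool from the same tutorial; the exact transform list is an assumption on my part:

```bash
# round_weights keeps float math but makes the weights compress well;
# quantize_weights (not shown) is the more aggressive 8-bit alternative.
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=optimized.pb \
  --out_graph=rounded.pb \
  --inputs=input \
  --outputs=output \
  --transforms='round_weights(num_steps=256)'
```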
Finally I enable memory mapping in the graph:
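The memmapping tool from tensorflow/contrib (note its output is no longer a plain GraphDef, so it has to be loaded on device through the memmapped-file-system APIs):

```bash
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
  --in_graph=rounded.pb \
  --out_graph=memmapped.pb
```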
I've tried cutting out steps such as quantizing and memory mapping, but I still run into the same issue where it randomly identifies non-existent objects. Here's an example from the iOS logs showing it detecting tons of them:
Any help you can provide is greatly appreciated! Thanks! 👍