YKritet opened 5 years ago
How did you get the detections? I am trying to get detections after using DW2TF to convert a YOLOv2 model. Please help me.
Hey @abhishek1edwin
If you have used DW2TF to convert your model, then you definitely have either a .ckpt or a .pb file that represents your model. At this point, you have to choose between using the checkpoints (if you don't have a .pb) or converting your model to .pb, which is arguably the best format for production purposes. What's more, TensorFlow provides copious documentation on how to do this.
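If you only have checkpoints, freezing them into a .pb could look roughly like this. This is a sketch assuming TensorFlow 1.x-style APIs (available as tf.compat.v1 in TF 2.x); the paths and output node names below are placeholders you'd replace with whatever DW2TF produced for you:

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.x-style API (also available under TF 2.x)
tf1.disable_eager_execution()

def freeze_checkpoint(meta_path, ckpt_prefix, output_node_names, pb_path):
    """Load a .meta/.ckpt pair and write a frozen .pb with weights baked in."""
    graph = tf.Graph()
    with graph.as_default(), tf1.Session(graph=graph) as sess:
        # rebuild the graph from the .meta file, then restore the weights
        saver = tf1.train.import_meta_graph(meta_path)
        saver.restore(sess, ckpt_prefix)
        # replace variables with constants so the .pb is self-contained
        frozen = tf1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), output_node_names)
    with tf1.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())

# hypothetical names -- substitute the ones from your DW2TF conversion:
# freeze_checkpoint("yolov2.meta", "yolov2", ["yolov2/output"], "yolov2_frozen.pb")
```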
It goes something like this for a quick test:
import tensorflow as tf

tf.reset_default_graph()
graph = tf.Graph()
sess = tf.Session(graph=graph)
# load the SavedModel directory into the session
tf.saved_model.loader.load(sess, [tf.saved_model.SERVING], "SavedModel")
Then use this to determine the name of the input tensor through the operations that correspond to it:
sess.graph.get_operations()
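Since that prints every operation, a heuristic can help narrow the list down. This is a sketch (not specific to DW2TF's naming, so always double-check against the printed list): inputs are usually Placeholder ops, and outputs are ops whose tensors feed no other op.

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.x-style API
tf1.disable_eager_execution()

def guess_io_names(graph):
    """Heuristically find input/output op names in a TF1 graph.

    Inputs are typically Placeholder ops; outputs are ops whose output
    tensors are consumed by no other op in the graph.
    """
    ops = graph.get_operations()
    inputs = [op.name for op in ops if op.type == "Placeholder"]
    # set of tensor names that some op consumes
    consumed = {inp.name for op in ops for inp in op.inputs}
    outputs = [op.name for op in ops
               if op.outputs
               and all(t.name not in consumed for t in op.outputs)
               and op.type not in ("Const", "Placeholder", "NoOp", "Assert")]
    return inputs, outputs

# usage after loading the SavedModel:
# ins, outs = guess_io_names(sess.graph)
```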
Afterwards:
in_t = sess.graph.get_tensor_by_name('name of the input operation:0')
out = sess.graph.get_tensor_by_name('name of the output operation:0')
# image = ... preprocess the input image here
pred = sess.run(out, feed_dict={in_t: image})
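For the preprocessing step, here is a dependency-free sketch. The 416x416 size and [0, 1] scaling are assumptions: DW2TF only converts the graph, so the preprocessing must match whatever the original Darknet cfg expects (input size, channel order, normalization):

```python
import numpy as np

def preprocess(img, size=416):
    """Resize an HxWx3 uint8 image to the network input size and add a
    batch dimension.

    Uses nearest-neighbor resizing via NumPy indexing to avoid extra
    dependencies; in practice you would use cv2.resize or PIL. Assumes
    the model expects values in [0, 1] -- check your Darknet cfg.
    """
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img[rows][:, cols]
    # scale to [0, 1] and add the batch dimension -> (1, size, size, 3)
    return (resized.astype(np.float32) / 255.0)[None, ...]
```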
@YKritet
How do you proceed next? I don't know how the output is encoded. Could you share the code you implemented to get the boxes?
Check @YunYang1994.
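For reference, the standard Darknet-style decoding goes roughly like this (a NumPy sketch, not the exact code from either repo; the anchors are placeholders you'd take from your cfg, and YOLOv2 uses softmax over classes where YOLOv3 uses sigmoids):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo(raw, anchors, img_size=416):
    """Decode one raw YOLO output grid into boxes.

    raw: array of shape (grid, grid, num_anchors, 5 + num_classes) in the
    Darknet ordering (tx, ty, tw, th, objectness, class scores).
    anchors: list of (w, h) pairs in pixels (placeholder values here).
    Returns (boxes, conf): center-format (x, y, w, h) boxes in pixels and
    per-box confidence (objectness * best class score).
    """
    grid = raw.shape[0]
    cy, cx = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    # centers: sigmoid offsets within each cell, scaled to image pixels
    bx = (sigmoid(raw[..., 0]) + cx[..., None]) / grid * img_size
    by = (sigmoid(raw[..., 1]) + cy[..., None]) / grid * img_size
    # sizes: anchor dimensions scaled by exp of the raw width/height
    a = np.asarray(anchors, dtype=np.float32)
    bw = a[:, 0] * np.exp(raw[..., 2])
    bh = a[:, 1] * np.exp(raw[..., 3])
    conf = sigmoid(raw[..., 4]) * sigmoid(raw[..., 5:]).max(axis=-1)
    boxes = np.stack([bx, by, bw, bh], axis=-1)
    return boxes.reshape(-1, 4), conf.reshape(-1)
```

After decoding, you would threshold on conf and run non-max suppression to get the final detections.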
yolov3-tiny/net1
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/min
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/max
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/RandomUniform
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/sub
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/mul
yolov3-tiny/convolutional1/kernel/Initializer/random_uniform
yolov3-tiny/convolutional1/kernel

I printed all the nodes in the yolov3-tiny net. How do I get the names of the input and output nodes? Thanks.
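One way to answer this from a frozen .pb directly (a sketch using TF 1.x-style APIs; the heuristic is that Placeholders are input candidates and unconsumed nodes are output candidates, so verify against your printed node list):

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.x-style API
tf1.disable_eager_execution()

def list_io_candidates(pb_path):
    """Read a frozen .pb and return likely input/output node names.

    For a yolov3-tiny conversion, the input is typically the 'net1'
    Placeholder and the outputs are the last convolution nodes.
    """
    graph_def = tf1.GraphDef()
    with tf1.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    # tensor inputs look like "name" or "name:1"; control deps start with "^"
    consumed = {i.split(":")[0].lstrip("^")
                for n in graph_def.node for i in n.input}
    inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]
    outputs = [n.name for n in graph_def.node
               if n.name not in consumed and n.op not in ("Const", "NoOp")]
    return inputs, outputs
```

Remember to append ":0" to an op name when fetching the corresponding tensor with get_tensor_by_name.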
Hi,
I used DW2TF to convert a YOLOv3 model to a .pb model, but after the conversion the model no longer detects well. The detected boxes' widths are all equal to the width of the input image, while the heights range from a third to the full height of the input image.
I successfully converted a model almost two months ago; it was exactly the same as the current one except for the number of classes (1 before, 7 now).
Have you ever encountered this problem? Do you have any idea how to solve it?
Thank you very much in advance.