thtrieu / darkflow

Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant graph def to mobile devices
GNU General Public License v3.0

Tensorflow Serving #403

Open · ldrabeck opened this issue 7 years ago

ldrabeck commented 7 years ago

Has anyone figured out how to serve the darkflow model with tensorflow serving?

vilen commented 7 years ago

I've tried to use TensorFlow Serving with the .pb and .meta files from Darkflow, laid out like this:

my-model/
my-model/1/
my-model/1/my-model.pb
my-model/1/my-model.meta

I get the following output when running TensorFlow Serving:

2017-09-22 09:03:33.111215: I tensorflow_serving/core/basic_manager.cc:705] Successfully reserved resources to load servable {name: my-model version: 1}
2017-09-22 09:03:33.111271: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: my-model version: 1}
2017-09-22 09:03:33.111323: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: my-model version: 1}
2017-09-22 09:03:33.111460: E tensorflow_serving/util/retrier.cc:38] Loading servable: {name: my-model version: 1} failed: Not found: Session bundle or SavedModel bundle not found at specified export location

I'm running Serving with the following command:

tensorflow_model_server --port=11000 --model_name="my-model" --model_base_path=/abs_path/my-model/

I'm not sure if it is a format issue or something else. I'm investigating it, but since I'm new to Tensorflow my progress is currently slow.

tlindener commented 7 years ago

I think it is a problem with how the model is saved: https://www.tensorflow.org/programmers_guide/saved_model. Darkflow doesn't use the SavedModelBuilder (https://www.tensorflow.org/api_docs/python/tf/saved_model/builder), so there is no SavedModel bundle for Serving to find. I'm having the same problem, but I'm not sure how best to solve it.
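For reference, a minimal sketch of what such a SavedModelBuilder export could look like (untested against this repo; it assumes tfnet.inp, tfnet.out and tfnet.sess are the input placeholder, output tensor and session that darkflow's net/build.py creates, the cfg/weights paths are placeholders, and the names 'input', 'output' and 'predict' simply match what the client code later in this thread expects):

import tensorflow as tf
from darkflow.net.build import TFNet

options = {"model": "cfg/yolo.cfg", "load": "bin/yolo.weights", "threshold": 0.5}
tfnet = TFNet(options)

# TF Serving expects <model_base_path>/<numeric version>/saved_model.pb plus a
# variables/ directory, not the .pb/.meta pair that darkflow saves.
export_dir = "my-model/1"
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={"input": tf.saved_model.utils.build_tensor_info(tfnet.inp)},
    outputs={"output": tf.saved_model.utils.build_tensor_info(tfnet.out)},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

builder.add_meta_graph_and_variables(
    tfnet.sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={"predict": signature})
builder.save()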

vilen commented 7 years ago

I was able to save the Darkflow model as a SavedModel (the format required by TensorFlow Serving) by adding the following function to net/build.py and calling it when saving or loading the original model: https://gist.github.com/vilen/0f3dc20f4de078e063fd4db4116b194f

Here is a client for TensorFlow Serving (based on the inception_client.py example) that can do request-response with the server: https://gist.github.com/vilen/ad59c8bc769db06e53b877ec763d71d1

I'm not sure the Tensor object sent in the client request is in the correct format. Also, I haven't figured out what to do with the response, which is a Tensor of shape (32, 32, 30). It looks like the final output of the Darkflow model still needs further post-processing to get the bounding boxes, based on the code here. Perhaps the Darkflow model could be extended with Tensors that do that?

lintangsutawika commented 6 years ago

I'm able to get the box output by doing the post-processing manually on the images, inspired by predict.py. Also, the output of the final layer should be (13, 13, 30); that is fixed by changing the resize from (1024, 1024) to (416, 416), since the network downsamples by a factor of 32 and 416 / 32 = 13.

Unfortunately, I'm still stuck on how to do the post-processing without having to build the entire network on the client side (which obviously defeats the purpose of TF Serving), because I need to call boxes = tfnet.framework.findboxes(image). I'm thinking of transferring the FLAGS and meta through the server outputs, e.g. by adding flag = tf.convert_to_tensor(str(tfnet.FLAGS)) and meta = tf.convert_to_tensor(str(tfnet.meta)) to the exported graph; see the sketch below.

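A hedged sketch of that idea, building on the SavedModelBuilder export sketched earlier in the thread: serialize meta and FLAGS into string constants inside the served graph and expose them as extra outputs (the output names 'flag' and 'meta' and the constant names are assumptions, chosen to match the client script below):

import tensorflow as tf

# Create the string constants in the same graph the darkflow session uses.
with tfnet.sess.graph.as_default():
    meta_t = tf.constant(str(tfnet.meta), name="serving_meta")
    flag_t = tf.constant(str(tfnet.FLAGS), name="serving_flag")

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={"input": tf.saved_model.utils.build_tensor_info(tfnet.inp)},
    outputs={
        "output": tf.saved_model.utils.build_tensor_info(tfnet.out),
        "meta": tf.saved_model.utils.build_tensor_info(meta_t),
        "flag": tf.saved_model.utils.build_tensor_info(flag_t),
    },
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
# ...then add_meta_graph_and_variables() and save() exactly as in the earlier sketch.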

from grpc.beta import implementations
import tensorflow as tf

from darkflow.net.build import TFNet
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

import numpy as np
import sys
import cv2

tf.app.flags.DEFINE_string('server', 'localhost:9000',
                           'PredictionService host:port')
tf.app.flags.DEFINE_string('image', sys.argv[1], 'path to the image to send to the server')
FLAGS = tf.app.flags.FLAGS

options = {"model": 'yolo-bosnet.cfg', "load": 'yolo-bosnet_2000.weights', "threshold": 0.5}
tfnet = TFNet(options)

def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    # Read and preprocess the image the same way darkflow does:
    # resize to the network input size, scale to [0, 1], convert BGR -> RGB.
    data = cv2.imread(sys.argv[1])
    data = cv2.resize(data, (416, 416))
    data = data / 255.
    data = data[:, :, ::-1]
    data = data.astype(np.float32)
    data = np.expand_dims(data, 0)  # add a batch dimension -> (1, 416, 416, 3)
    dim = np.shape(data)
    data = tf.contrib.util.make_tensor_proto(data, shape=[dim[0], dim[1], dim[2], dim[3]])

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'my-model'  # must match --model_name passed to tensorflow_model_server
    request.model_spec.signature_name = "predict"

    request.inputs['input'].CopyFrom(data)
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    image = tf.contrib.util.make_ndarray(result.outputs['output']) 
    flag = tf.contrib.util.make_ndarray(result.outputs['flag']) 
    meta = tf.contrib.util.make_ndarray(result.outputs['meta']) 

    image = np.squeeze(image)
    boxes = tfnet.framework.findboxes(image)
    # print(boxes)
    h = dim[1]  # height/width of the resized image (not the batch dimension)
    w = dim[2]
    threshold = tfnet.FLAGS.threshold
    boxesInfo = list()
    for box in boxes:
        tmpBox = tfnet.framework.process_box(box, h, w, threshold)
        if tmpBox is None:
            continue
        boxesInfo.append({
            "label": tmpBox[4],
            "confidence": tmpBox[6],
            "topleft": {
                "x": tmpBox[0],
                "y": tmpBox[2]},
            "bottomright": {
                "x": tmpBox[1],
                "y": tmpBox[3]}
        })

    for prediction in boxesInfo:
        print(prediction)

if __name__ == '__main__':
  tf.app.run()
gauravgola96 commented 5 years ago

(Quotes @lintangsutawika's comment and client script above in full.)

@lintangsutawika Any success?

gauravgola96 commented 5 years ago

@lintangsutawika this might be helpful: I have converted the Cython post-processing code into Python.

import numpy as np

from darkflow.utils.box import BoundBox
from scipy.special import expit
from math import exp

def overlap_c(x1, w1, x2, w2):
    l1 = x1 - w1 / 2.
    l2 = x2 - w2 / 2.
    left = max(l1, l2)
    r1 = x1 + w1 / 2.
    r2 = x2 + w2 / 2.
    right = min(r1, r2)
    return right - left

def box_intersection_c(ax, ay, aw, ah, bx, by, bw, bh):
    w = overlap_c(ax, aw, bx, bw)
    h = overlap_c(ay, ah, by, bh)
    if w < 0 or h < 0: return 0
    area = w * h
    return area

def box_union_c(ax, ay, aw, ah, bx, by, bw, bh):
    i = box_intersection_c(ax, ay, aw, ah, bx, by, bw, bh)
    u = aw * ah + bw * bh - i
    return u

def box_iou_c(ax, ay, aw, ah, bx, by, bw, bh):
    return box_intersection_c(ax, ay, aw, ah, bx, by, bw, bh) / box_union_c(ax, ay, aw, ah, bx, by, bw, bh)

def NMS(final_probs, final_bbox):
    boxes = list()
    indices = set()
    pred_length = final_bbox.shape[0]
    class_length = final_probs.shape[1]
    for class_loop in range(class_length):
        for index in range(pred_length):
            if final_probs[index, class_loop] == 0: continue
            for index2 in range(index + 1, pred_length):
                if final_probs[index2, class_loop] == 0: continue
                if index == index2: continue
                if box_iou_c(final_bbox[index, 0], final_bbox[index, 1], final_bbox[index, 2], final_bbox[index, 3],
                             final_bbox[index2, 0], final_bbox[index2, 1], final_bbox[index2, 2],
                             final_bbox[index2, 3]) >= 0.4:
                    if final_probs[index2, class_loop] > final_probs[index, class_loop]:
                        final_probs[index, class_loop] = 0
                        break
                    final_probs[index2, class_loop] = 0

            if index not in indices:
                bb = BoundBox(class_length)
                bb.x = final_bbox[index, 0]
                bb.y = final_bbox[index, 1]
                bb.w = final_bbox[index, 2]
                bb.h = final_bbox[index, 3]
                bb.c = final_bbox[index, 4]
                bb.probs = np.asarray(final_probs[index, :])
                boxes.append(bb)
                indices.add(index)
    return boxes

def box_constructor(meta, out, threshold):
    H, W, _ = meta['out_size']
    C = meta['classes']
    B = meta['num']
    net_out = out.reshape([H, W, B, int(out.shape[2] / B)])

    Classes = net_out[:, :, :, 5:]

    Bbox_pred = net_out[:, :, :, :5]
    probs = np.zeros((H, W, B, C), dtype=np.float32)

    anchors = np.asarray(meta['anchors'])

    for row in range(H):
        for col in range(W):
            for box_loop in range(B):
                arr_max = 0
                sum = 0
                Bbox_pred[row, col, box_loop, 4] = expit(Bbox_pred[row, col, box_loop, 4])
                Bbox_pred[row, col, box_loop, 0] = (col + expit(Bbox_pred[row, col, box_loop, 0])) / W
                Bbox_pred[row, col, box_loop, 1] = (row + expit(Bbox_pred[row, col, box_loop, 1])) / H
                Bbox_pred[row, col, box_loop, 2] = exp(Bbox_pred[row, col, box_loop, 2]) * anchors[2 * box_loop + 0] / W
                Bbox_pred[row, col, box_loop, 3] = exp(Bbox_pred[row, col, box_loop, 3]) * anchors[2 * box_loop + 1] / H
                # SOFTMAX BLOCK, no more pointer juggling

                for class_loop in range(C):
                    arr_max = max(arr_max, Classes[row, col, box_loop, class_loop])

                for class_loop in range(C):
                    Classes[row, col, box_loop, class_loop] = exp(Classes[row, col, box_loop, class_loop] - arr_max)
                    sum += Classes[row, col, box_loop, class_loop]

                for class_loop in range(C):
                    tempc = Classes[row, col, box_loop, class_loop] * Bbox_pred[row, col, box_loop, 4] / sum

                    if (tempc > threshold):
                        probs[row, col, box_loop, class_loop] = tempc

    return NMS(np.ascontiguousarray(probs).reshape(H * W * B, C),np.ascontiguousarray(Bbox_pred).reshape(H * B * W, 5))

def round_int(x):
    if x == float("inf") or x == float("-inf"):
        return int(0) # or x or return whatever makes sense
    return int(round(x))

def process_box(b, h, w, threshold, meta):
    max_indx = np.argmax(b.probs)
    max_prob = b.probs[max_indx]
    label = meta['labels'][max_indx]
    if max_prob > threshold:
        left  = round_int((b.x - b.w/2.) * w)
        right = round_int((b.x + b.w/2.) * w)
        top   = round_int((b.y - b.h/2.) * h)
        bot   = round_int((b.y + b.h/2.) * h)
        if left  < 0    :  left = 0
        if right > w - 1: right = w - 1
        if top   < 0    :   top = 0
        if bot   > h - 1:   bot = h - 1
        mess = '{}'.format(label)
        # Return the same tuple as darkflow's process_box so the
        # tmpBox[0]..tmpBox[6] indices used below line up.
        return (left, right, top, bot, mess, max_indx, max_prob)
    return None
_____________________________________________________________________
So I can call box_constructor and process_box directly. The code below is a modified version of the post-processing in my client script above (mirroring darkflow's return_predict()):

image = np.squeeze(image)
threshold = 0.5  # same value as in the options dict above
boxes = box_constructor(meta, image, threshold)
boxesInfo = list()
for box in boxes:
    # h, w are the image height/width computed in the client script above
    tmpBox = process_box(box, h, w, threshold, meta)
    if tmpBox is None:
        continue
    boxesInfo.append({
        "label": tmpBox[4],
        "confidence": tmpBox[6],
        "topleft": {
            "x": tmpBox[0],
            "y": tmpBox[2]},
        "bottomright": {
            "x": tmpBox[1],
            "y": tmpBox[3]}
    })
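One piece is still missing if meta is shipped as a string tensor as proposed above: the client has to turn that string back into a dict before calling box_constructor. A minimal sketch, assuming the server exports str(tfnet.meta) under the output name 'meta' (that name is an assumption) and that result is the PredictResponse from the client script above:

import ast
import tensorflow as tf

# str(dict) is the dict's repr, so ast.literal_eval can usually rebuild it as
# long as every value is a plain literal (numbers, strings, lists), which holds
# for keys like labels, anchors and out_size.
meta_raw = tf.contrib.util.make_ndarray(result.outputs['meta']).item()
if isinstance(meta_raw, bytes):
    meta_raw = meta_raw.decode('utf-8')
meta = ast.literal_eval(meta_raw)

threshold = meta.get('thresh', 0.5)  # 'thresh' may not be in meta; fall back to the export-time value
boxes = box_constructor(meta, image, threshold)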
gauravgola96 commented 5 years ago

(Quotes @vilen's earlier comment about the SavedModel export function and the serving client gists.)

Thanks, it worked for me.