ShannonAI / service-streamer

Boosting your Web Services of Deep Learning Applications.
Apache License 2.0

Could you please give a TensorFlow model example? #68

Closed yjiangling closed 4 years ago

yjiangling commented 4 years ago

Hi @Meteorix, could you please give a TensorFlow example? Say there is a frozen graph named `model.pb` whose input tensor is `input_x:0` and output tensor is `logits:0` — how can I use service-streamer to wrap it? Here is the prediction code for the model:

```python
from typing import List

import tensorflow as tf

from service_streamer import ManagedModel

asr_path = './model/model.pb'


def load_model(model_path):
    sess = tf.InteractiveSession()
    with tf.gfile.GFile(model_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    tf_input = sess.graph.get_tensor_by_name('input_x:0')
    tf_output = sess.graph.get_tensor_by_name('logits:0')
    return tf_input, tf_output, sess


class AsrModel(object):

    def __init__(self, model_path=asr_path):
        self.model_path = model_path
        self.inputs, self.outputs, self.sess = load_model(self.model_path)

    def predict(self, batch: List[str]) -> List[str]:
        """Decode a batch of wave files."""
        # get_spectrogram is my own feature-extraction helper
        batch_inputs = [get_spectrogram(file) for file in batch]
        batch_outputs = self.sess.run([self.outputs],
                                      feed_dict={self.inputs: batch_inputs})[0]
        return batch_outputs


class ManagedAsrModel(ManagedModel):

    def init_model(self):
        self.model = AsrModel()

    def predict(self, batch):
        return self.model.predict(batch)
```

But I can never get batch inference to work. As you can see, the TensorFlow model occupies GPU memory as soon as it is loaded, so how can I assign the CUDA devices? I've been stuck on this problem for many days; could you please give me some help? Thanks a lot.
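For reference, a minimal sketch of the usual workaround for per-worker GPU assignment: restrict each worker process to a single GPU by setting `CUDA_VISIBLE_DEVICES` before TensorFlow creates its session, since TF 1.x claims memory on every visible GPU at session creation. `pin_device` is a hypothetical helper name, not part of service-streamer:

```python
import os


def pin_device(device_id):
    """Hypothetical helper: restrict this process to one GPU.

    Must run before TensorFlow is imported or the session is created,
    because TF 1.x maps and claims memory on every GPU it can see.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)


# e.g. at the top of ManagedAsrModel.init_model(), before load_model():
pin_device(1)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints "1"
```

Note that, per the project README, service-streamer's `Streamer` also accepts a `cuda_devices` argument that assigns one device per worker in a similar way.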