jeng1220 / KerasToTensorRT

This is a simple demonstration of running a Keras model on TensorFlow with TensorRT integration (TF-TRT), or on TensorRT directly, without invoking "freeze_graph.py".

TensorRT doesn't accelerate #5

Open AirFishWang opened 5 years ago

AirFishWang commented 5 years ago

Thanks for your demo. When I run the raw code it appears to work well. To measure the effect of TensorRT more precisely, I added warm-up code before timing inference, e.g.:

import time  # needed for the timing calls below

# warm_up, frozen_graph, x_test, y_keras, and verify come from the demo script
tf_engine = TfEngine(frozen_graph)
for i in range(warm_up):
    y_tf = tf_engine.infer(x_test)  # warm-up: exclude one-time setup cost
t0 = time.time()
y_tf = tf_engine.infer(x_test)
t1 = time.time()
print('Tensorflow time', t1 - t0)
verify(y_tf, y_keras)

tftrt_engine = TftrtEngine(frozen_graph, batch_size, 'FP32')
for i in range(warm_up):
    y_tftrt = tftrt_engine.infer(x_test)  # warm-up for the TF-TRT engine
t0 = time.time()
y_tftrt = tftrt_engine.infer(x_test)
t1 = time.time()
print('TFTRT time', t1 - t0)
verify(y_tftrt, y_keras)

The test results show that TensorRT does not accelerate inference compared with the plain TensorFlow path, and I don't know why.
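For what it's worth, a single timed call can be dominated by noise (GPU clock ramp-up, host-side jitter), so differences between the two engines may be hidden. A sketch of a small timing helper that discards warm-up iterations and averages over many runs; the helper name and the stand-in callable are hypothetical, and in practice `infer_fn` would be `tf_engine.infer` or `tftrt_engine.infer`:

```python
import time

def benchmark(infer_fn, x, warm_up=10, runs=50):
    """Return the mean latency in seconds of infer_fn(x) over `runs` calls,
    after `warm_up` untimed calls to exclude one-time setup cost."""
    for _ in range(warm_up):
        infer_fn(x)  # warm-up: caches built, kernels compiled, clocks settled
    t0 = time.perf_counter()  # perf_counter has higher resolution than time.time
    for _ in range(runs):
        infer_fn(x)
    elapsed = time.perf_counter() - t0
    return elapsed / runs

# usage with a trivial stand-in for an engine's infer method (hypothetical)
mean_s = benchmark(lambda x: [v * 2 for v in x], list(range(100)))
print('mean latency: %.1f us' % (mean_s * 1e6))
```

Averaging over many runs makes a small speedup from TF-TRT measurable even when individual calls fluctuate.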