PaddlePaddle / Anakin

High-performance cross-platform inference engine. You can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
https://anakin.baidu.com/
Apache License 2.0
532 stars 135 forks

The running results of Anakin on GPU are not stable #474

Open Weyne168 opened 6 years ago

Weyne168 commented 6 years ago

We use Anakin to run a model converted from Caffe. We found that the output of Anakin is not stable compared with Caffe.

With the same input, we get a very stable output on an NVIDIA GPU with pycaffe, but when we use Anakin to run the same model on the same GPU, the output often changes.

Sometimes the output of Anakin is the same as the output of Caffe, but more often the Anakin output changes between runs and is not correct.

What factor could be causing this behavior?
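The instability described above can be quantified by running the same input through the model several times and comparing the outputs. A minimal sketch with numpy; `run_inference` is a placeholder for whatever call (pycaffe or Anakin) returns the output tensor as an array, not a real API of either library:

```python
import numpy as np

def max_pairwise_diff(run_inference, input_data, n_runs=10):
    """Run inference n_runs times on identical input and return the largest
    absolute element-wise difference between any later run and the first run.
    A deterministic engine should return exactly 0.0 here."""
    reference = run_inference(input_data)
    worst = 0.0
    for _ in range(n_runs - 1):
        out = run_inference(input_data)
        worst = max(worst, float(np.max(np.abs(out - reference))))
    return worst
```

For a stable engine this returns 0.0; any nonzero value confirms run-to-run nondeterminism on identical input.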

MyPandaShaoxiang commented 5 years ago

can you provide the model name?

Weyne168 commented 5 years ago

Our model is similar to ResNet-18: it only has conv and fc layers, and its activation is PReLU rather than ReLU. I found that graph.Optimize may have a bug: our model does not contain any bn or deconv layers, but the log of fusing operations prints DeconvReleu and ConvBatchnormScaleRelu.
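The PReLU detail matters: unlike ReLU, PReLU scales negative inputs by a learned slope instead of zeroing them, so a fused conv kernel that silently falls back to ReLU would diverge only on negative pre-activations. A minimal numpy illustration of the two activations (function names and the slope value are mine, not from Anakin):

```python
import numpy as np

def relu(x):
    # ReLU: zero out all negative inputs
    return np.maximum(x, 0.0)

def prelu(x, alpha=0.25):
    # PReLU: identity for x > 0, learned slope alpha for x <= 0
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
# relu(x) and prelu(x) agree exactly where x >= 0 and differ where x < 0
```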

MyPandaShaoxiang commented 5 years ago

Thanks for your issue. graph.optimize checks all the fusion patterns registered in the code. When the model lacks the layers a pattern requires, that pattern does nothing, but our log still prints the process info. It may be caused by PReLU; you can check it in saber/funcs/cuda/saber_conv, and we will check it too.
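The behavior described above, where every registered pattern is tried and logged even when nothing fuses, can be sketched roughly like this (not Anakin's actual code; pattern names and matching logic are simplified assumptions):

```python
# Hypothetical fusion-pattern registry: each pattern is a sequence of layer
# types that, when found consecutively in the graph, is replaced by one fused op.
FUSION_PATTERNS = {
    "ConvBatchnormScaleRelu": ["conv", "batchnorm", "scale", "relu"],
    "DeconvRelu": ["deconv", "relu"],
}

def optimize(graph_layers):
    for name, pattern in FUSION_PATTERNS.items():
        # this line runs for every pattern, so the log mentions patterns
        # even when the model contains none of their layers
        print(f"trying fusion pattern: {name}")
        n = len(pattern)
        i = 0
        fused = []
        while i < len(graph_layers):
            if graph_layers[i:i + n] == pattern:
                fused.append(name)  # replace the matched run with the fused op
                i += n
            else:
                fused.append(graph_layers[i])
                i += 1
        graph_layers = fused
    return graph_layers
```

Here `optimize(["conv", "prelu", "fc"])` leaves the graph unchanged because no pattern matches, yet both pattern names still appear in the log, which would explain the confusing log lines above.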