oneapi-src / oneDNN

oneAPI Deep Neural Network Library (oneDNN)
https://uxlfoundation.org
Apache License 2.0

Proposal: testing mkldnn with KLEE LLVM Execution Engine #439

Closed. ghost closed this issue 5 years ago.

ghost commented 5 years ago

Greetings,

What do you think about the idea of testing mkldnn using KLEE?

In theory, by compiling benchdnn as LLVM bitcode with klee_make_symbolic() enabled on certain inputs and executing the bitcode file with KLEE, we could generate batches of test cases automatically. Optimistically, KLEE could analyze the reference implementations (ref_conv.cpp, ref_rnn.cpp, etc.) and generate test cases covering many edge cases and code branches. We could collect those test cases and replay them in different environments to verify consistency between the reference implementation and the platform-dependent optimized implementations.
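For illustration, a minimal harness could look roughly like the sketch below. It assumes a hypothetical `ref_conv2d()` wrapper around the reference convolution (not an actual mkl-dnn entry point); the shape parameters are made symbolic and constrained with `klee_assume()` so KLEE only explores valid, small configurations:

```cpp
// Sketch of a KLEE harness. ref_conv2d() is a hypothetical wrapper around
// the reference convolution in ref_conv.cpp, not a real mkl-dnn API.
#include <klee/klee.h>

int ref_conv2d(int mb, int ic, int oc, int ih, int iw, int kh, int kw);

int main() {
    int mb, ic, oc, ih, iw, kh, kw;
    klee_make_symbolic(&mb, sizeof(mb), "mb");
    klee_make_symbolic(&ic, sizeof(ic), "ic");
    klee_make_symbolic(&oc, sizeof(oc), "oc");
    klee_make_symbolic(&ih, sizeof(ih), "ih");
    klee_make_symbolic(&iw, sizeof(iw), "iw");
    klee_make_symbolic(&kh, sizeof(kh), "kh");
    klee_make_symbolic(&kw, sizeof(kw), "kw");

    // Keep the search space small so path exploration stays tractable.
    // Bitwise & (instead of &&) avoids extra forking inside klee_assume.
    klee_assume((mb >= 1) & (mb <= 4));
    klee_assume((ic >= 1) & (ic <= 16));
    klee_assume((oc >= 1) & (oc <= 16));
    klee_assume((ih >= 1) & (ih <= 8));
    klee_assume((iw >= 1) & (iw <= 8));
    klee_assume((kh >= 1) & (kh <= ih));
    klee_assume((kw >= 1) & (kw <= iw));

    // Every path KLEE completes becomes a concrete test case (.ktest file).
    return ref_conv2d(mb, ic, oc, ih, iw, kh, kw);
}
```

Each completed path would then yield a concrete shape tuple that could be translated into a benchdnn batch entry and replayed against the optimized kernels.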

We could push this even further and analyze the different platform-dependent optimized implementations themselves by applying KLEE to the LLVM bitcode of libmkldnn.

I was hoping KLEE could help us discover tricky bugs like https://github.com/intel/mkl-dnn/issues/431, so I spent some time tweaking the mkldnn Makefiles, generated an LLVM bitcode file for benchdnn, and scanned benchdnn.bc with KLEE. Unfortunately it failed pretty early with "ERROR: Code generator does not support intrinsic function 'llvm.fabs.f32'!". Apparently KLEE does not support fabs yet: https://www.mail-archive.com/klee-dev@imperial.ac.uk/msg02812.html

There is a little bit of hope that KLEE-float could take us further. I haven't tried it yet, but I would guess there are other unsupported intrinsics as well, which would take some effort to support, though it is not impossible.

Has anyone tried or thought about KLEE? Any experience to share? Is it worth pushing further?

Alternatively, a QuickCheck-like approach such as rapidcheck might be interesting as well.
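For reference, a property-based test with rapidcheck might look roughly like this. The `ref_conv2d()` / `opt_conv2d()` functions are hypothetical stand-ins for the reference and optimized paths, not actual mkl-dnn APIs, and the shapes/tolerance are just placeholders:

```cpp
// Sketch of a rapidcheck property comparing a hypothetical optimized
// convolution against a hypothetical reference implementation.
#include <rapidcheck.h>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> ref_conv2d(const std::vector<float> &src, int ih, int iw, int kh, int kw);
std::vector<float> opt_conv2d(const std::vector<float> &src, int ih, int iw, int kh, int kw);

int main() {
    rc::check("optimized convolution matches the reference implementation", []() {
        // Generate small, valid shapes and a matching input tensor.
        const int ih = *rc::gen::inRange(1, 16);
        const int iw = *rc::gen::inRange(1, 16);
        const int kh = *rc::gen::inRange(1, ih + 1);
        const int kw = *rc::gen::inRange(1, iw + 1);
        const auto src = *rc::gen::container<std::vector<float>>(
            static_cast<std::size_t>(ih * iw), rc::gen::arbitrary<float>());

        const auto expected = ref_conv2d(src, ih, iw, kh, kw);
        const auto actual = opt_conv2d(src, ih, iw, kh, kw);

        // Compare element-wise within a (placeholder) tolerance.
        RC_ASSERT(expected.size() == actual.size());
        for (std::size_t i = 0; i < expected.size(); ++i)
            RC_ASSERT(std::fabs(expected[i] - actual[i]) <= 1e-5f);
    });
    return 0;
}
```

On a failing case rapidcheck would shrink the inputs automatically, which could make reproducing bugs like the one linked above much cheaper.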

In theory there is no difference between theory and practice. But, in practice, there is.

:)

rsdubtso commented 5 years ago

Hi @fracting. Sorry for the delay. This looks like an interesting idea, but no one on the MKL-DNN team has experience with KLEE. If there is a way to generate, e.g., benchdnn cases for convolutions using it, that would be really helpful.

vpirogov commented 5 years ago

Closing due to lack of activity.

ghost commented 5 years ago

@rsdubtso @vpirogov Thanks for the feedback. I tried to build klee-float but failed, as described in https://github.com/srg-imperial/klee-float/issues/2. I'll come back and update if there is any progress. In the meantime, let's keep this issue closed.

ghost commented 5 years ago

Update: I successfully compiled klee-float and was able to test a simple floating-point hello world program with it.

However, klee-float depends on llvm-3.4.2, and unfortunately such an old version of clang/llvm does not support OpenMP and cannot compile mkl-dnn. I think the right direction is to port klee-float to a recent LLVM version, which no one is working on currently. I will revisit this issue in a few months to see if there is any update on the klee-float side; if not, I might try to port klee-float myself.