Hi!
First of all, congratulations on the great work!
I have some questions about CLBlast kernel execution and tuning:
1) Can I tune the CLBlast library used here? If so, how? It seems I cannot do it in the classic way, as in the original standalone CLBlast library.
2) Let's say I have already tuned my xdot OpenCL kernel and found that the best configuration is WGS1=32 and WGS2=64. Is modifying tf-coriander/third_party/coriander/src/CLBlast/src/database/kernels/xdot.hpp with that configuration all I need to do? (See the sketch after this list for the kind of change I mean.)
3) How can I tell which CLBlast kernel runs when I execute a Python script (i.e. which function calls which kernel)?
4) Are the Python scripts in the directory tensorflow/stream_executor/cl/test the only scripts that run CLBlast kernels on the GPU? If not, which other functions call CLBlast kernels?
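For question 2, this is roughly the kind of edit I have in mind. It is only a sketch based on the general layout of the CLBlast per-kernel database headers; the exact structure in the CLBlast version bundled with coriander may differ, and "MyDevice" is just a placeholder for the device name my GPU reports:

```cpp
// Sketch only: hypothetical entry in src/database/kernels/xdot.hpp,
// assuming the usual CLBlast database layout for the Xdot kernel.
// "MyDevice" is a placeholder for my GPU's device name.
const Database::DatabaseEntry Database::XdotSingle = {
  "Xdot", Precision::kSingle, {
    { // AMD GPUs (as an example vendor section)
      kDeviceTypeGPU, "AMD", {
        { "MyDevice", { {"WGS1",32},  {"WGS2",64} } },  // tuned values I found
        { "default",  { {"WGS1",128}, {"WGS2",32} } },
      }
    },
    { // Default
      kDeviceTypeAll, "default", {
        { "default",  { {"WGS1",128}, {"WGS2",32} } },
      }
    },
  }
};
```

Is editing such an entry (and rebuilding) enough, or do the tuned parameters get picked up some other way?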
Thank you very much in advance