@hughperkins Do you think @karlrupp is OK with adding the source as-is? Wouldn't it be smarter to use a Git submodule or similar?
Well, I initially planned to use a git submodule (it's what I'm using for your own code), but there are quite a lot of extra files in the ViennaCL repo that aren't necessary for production usage.
Another solution could be to use ExternalProject_Add with a custom step to install only the headers and then remove the rest.
What is the plan? How do we want to support other ops/kernels? If we want to increase op/kernel coverage, OpenCL Caffe currently has three BLAS implementations: clBLAS, CLBlast and ViennaCL. What do we want to do here? /cc @CNugteren
@bhack well, short-term 'batteries-included' seems not a bad plan. Longer-term, maybe make the various implementations pluggable, discoverable at runtime?
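To make "pluggable, discoverable at runtime" concrete, here's a minimal sketch of what a runtime-selectable BLAS backend could look like. This is a hypothetical interface for illustration only, not an existing libdnn or Caffe API:

```cpp
// Hypothetical sketch only -- not an existing libdnn API.
#include <cstddef>
#include <map>
#include <memory>
#include <string>

// Minimal backend interface: each BLAS library implements this.
struct BlasBackend {
  virtual ~BlasBackend() {}
  // C = alpha * A (MxK) * B (KxN) + beta * C (MxN), row-major.
  virtual void sgemm(std::size_t M, std::size_t N, std::size_t K,
                     float alpha, const float* A, const float* B,
                     float beta, float* C) = 0;
};

// Registry so implementations (clBLAS, CLBlast, ViennaCL, ...) can be
// registered at startup and discovered by name at runtime.
inline std::map<std::string, std::unique_ptr<BlasBackend>>& blas_registry() {
  static std::map<std::string, std::unique_ptr<BlasBackend>> registry;
  return registry;
}
```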
Oh wait, are you saying ViennaCL is optional, e.g. one could choose CLBlast instead?
@hughperkins No. Caffe can use CLBlast, ViennaCL, clBLAS and ISAAC as BLAS for auxiliary functions and im2col/col2im convolutions.
But libdnn needs ViennaCL at the moment (context handling, kernel launching), not as a BLAS (libdnn does not need a BLAS).
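Roughly, the pattern looks like this. This is a minimal sketch assuming the standard `viennacl::ocl` API; the kernel source and sizes are made up for illustration:

```cpp
#include "viennacl/ocl/backend.hpp"   // viennacl::ocl::current_context()
#include "viennacl/ocl/kernel.hpp"
#include "viennacl/ocl/enqueue.hpp"

// Toy kernel source, for illustration only.
static const char* kSource =
    "__kernel void scale(__global float* x, float a, unsigned int n) {\n"
    "  unsigned int i = get_global_id(0);\n"
    "  if (i < n) x[i] *= a;\n"
    "}\n";

int main() {
  // Context handling: ViennaCL manages the OpenCL context.
  viennacl::ocl::context& ctx = viennacl::ocl::current_context();

  // Compile the program once and fetch the kernel by name.
  ctx.add_program(kSource, "toy_program");
  viennacl::ocl::kernel& k = ctx.get_kernel("toy_program", "scale");

  // Device memory allocated through the same context.
  viennacl::ocl::handle<cl_mem> buf =
      ctx.create_memory(CL_MEM_READ_WRITE, 1024 * sizeof(float));

  // Kernel launching: set the work size and enqueue with arguments.
  k.global_work_size(0, 1024);
  viennacl::ocl::enqueue(k(buf, 2.0f, static_cast<cl_uint>(1024)));
  return 0;
}
```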
> But libdnn needs ViennaCL at the moment (context handling, kernel launching)
Ah. That's a lot of code just for launching kernels. But it sounds like you're migrating to some other system soonish?
@hughperkins It is planned to make it possible to use libdnn with pure OpenCL/CUDA, without ViennaCL; but it's not the top priority right now.
But I don't think we can increase kernel/op coverage without a BLAS implementation (if we want to port kernels/ops upstream here from the Caffe OpenCL branch). Do we want to take the same approach as the OpenCL Caffe branch here?
@bhack Hmmm, that's true. I've been so focused on convolution, I forgot that basic Linear layers, I imagine, simply call into a standard BLAS3 matrix multiplication.
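For concreteness, the forward pass of a Linear (fully-connected) layer maps onto a single GEMM. A minimal sketch using ViennaCL's matrix product (function and variable names are illustrative; bias and activation omitted):

```cpp
#include "viennacl/matrix.hpp"
#include "viennacl/linalg/prod.hpp"

// Fully-connected forward pass as one BLAS3 call:
// Y (batch x out) = X (batch x in) * W (in x out).
void linear_forward(viennacl::matrix<float> const& X,
                    viennacl::matrix<float> const& W,
                    viennacl::matrix<float>&       Y) {
  Y = viennacl::linalg::prod(X, W);  // dispatches to a GEMM on the device
}
```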
I think the original choice of ViennaCL by @naibaf7 was made to get device, memory, and context handling. Many libraries or headers have reinvented this, but he added a dependency that could also provide BLAS ops. Then the OpenCL Caffe branch started to support multiple BLAS efforts, so device, memory, context and BLAS are not so tightly coupled anymore. What is the strategy here? Use ViennaCL only as one of the BLAS alternatives and use something more lightweight for bootstrap and management?
@naibaf7 Excuse me if the reconstruction is incorrect or imprecise.
@bhack This is a very precise description. Moving in another direction is just getting a bit delayed because more people are involved with libdnn & Caffe now, such as Intel. So I'm collecting more opinions before making drastic changes.
That's understandable. We need to plan something for GSoC. So while you are collecting feedback, could Edgar propose some API extension, mainly to use ViennaCL's memory management here? Next we want to cover the ops for a simple MNIST network here, and then some more complex net architectures.
@bhack Sure, go ahead.
Addresses https://github.com/naibaf7/libdnn/issues/5