fengggli closed this issue 5 years ago.
@qoofyk will use cuDNN convolution as a baseline and measure its performance (e.g., how many forward/backward passes can be done per second for different input and kernel sizes).
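The measurement methodology can be sketched as below. This is a hedged, illustrative harness (pure NumPy on the host, not cuDNN; the function names `conv2d_forward` and `ops_per_second` are hypothetical) that shows the shape of the benchmark: sweep input/kernel sizes and report forward passes per second.

```python
# Illustrative benchmark sketch -- a host-side stand-in for the cuDNN
# baseline measurement; all names here are hypothetical, not the repo's API.
import time
import numpy as np

def conv2d_forward(x, w):
    """Naive valid convolution: x is (H, W), w is (K, K)."""
    H, W = x.shape
    K, _ = w.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * w)
    return out

def ops_per_second(size, ksize, repeats=5):
    """Time `repeats` forward passes and return passes/sec."""
    x = np.random.rand(size, size)
    w = np.random.rand(ksize, ksize)
    start = time.perf_counter()
    for _ in range(repeats):
        conv2d_forward(x, w)
    elapsed = time.perf_counter() - start
    return repeats / elapsed

if __name__ == "__main__":
    for size, ksize in [(32, 3), (64, 3), (64, 5)]:
        print(f"input {size}x{size}, kernel {ksize}x{ksize}: "
              f"{ops_per_second(size, ksize):.1f} forward passes/sec")
```

A cuDNN version would replace `conv2d_forward` with a `cudnnConvolutionForward` call and time device execution instead, but the sweep-and-report structure stays the same.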
@zkSNARK will work on the CUDA convolution kernel once he finishes the host backpropagation.
@fengggli will work on residual blocks and the overall network architecture.
The host implementation of the convolution layer is now complete: https://github.com/fengggli/gpu-computing-materials/pull/25
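For reference, what a host convolution layer's forward and backward passes compute can be sketched as follows. This is a minimal single-channel, stride-1, no-padding sketch, not the PR's actual code; the function names are hypothetical.

```python
# Minimal sketch of a host convolution layer (single channel, stride 1,
# no padding) -- illustrative only, not the code from PR #25.
import numpy as np

def conv_forward(x, w):
    """Valid convolution of input x (H, W) with kernel w (K, K)."""
    H, W = x.shape
    K = w.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * w)
    return out

def conv_backward(x, w, dout):
    """Gradients w.r.t. input (dx) and kernel (dw), given upstream dout."""
    dx = np.zeros_like(x)
    dw = np.zeros_like(w)
    K = w.shape[0]
    for i in range(dout.shape[0]):
        for j in range(dout.shape[1]):
            # Each output element touched a KxK window of x with weights w,
            # so its gradient flows back to that window and to the kernel.
            dx[i:i + K, j:j + K] += w * dout[i, j]
            dw += x[i:i + K, j:j + K] * dout[i, j]
    return dx, dw
```

The backward pass mirrors the forward loop: every output position scatters its upstream gradient back into the input window and accumulates into the kernel gradient.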
I will first use the host implementation of convolution to construct ResNet (https://github.com/fengggli/gpu-computing-materials/issues/28); once we have the convolution GPU kernel, I can switch to the GPU implementation.
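The residual-block structure this plan builds on can be sketched as below: two convolutions with an identity shortcut, following the standard ResNet pattern. This is a hedged illustration on top of a naive same-size host convolution; the names and shapes are assumptions, not the repo's actual interfaces.

```python
# Hedged sketch of a ResNet residual block (identity shortcut) built on a
# naive host convolution -- illustrative names/shapes, not the repo's API.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_same(x, w):
    """Same-size convolution via zero padding (odd kernel, stride 1)."""
    K = w.shape[0]
    xp = np.pad(x, K // 2, mode="constant")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + K, j:j + K] * w)
    return out

def residual_block(x, w1, w2):
    """out = ReLU(x + conv(ReLU(conv(x, w1)), w2)), identity shortcut."""
    h = relu(conv_same(x, w1))
    return relu(x + conv_same(h, w2))
```

Because the shortcut is a plain addition, the GPU version only needs to swap `conv_same` for the CUDA/cuDNN kernel; the block wiring is unchanged, which is what makes the planned host-to-GPU switch straightforward.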
This issue was created to track the status of the convolution kernel implementation.