Status: Closed (SkibidiProduction closed this pull request 4 months ago)
I made some changes while testing:

- Instead of using cudaMalloc and cudaFree, use the methods available in gpu_alloc.h (vmalloc, vfree). This allows us to detect leaks when the NDARRAY_VCHECK environment variable is set.
- Instead of copying the sgemm result into the result buffer, free the result buffer with vfree and overwrite the buffer address with the address of the sgemm result.
Overview
First, by the time this code executes we already know, from the NDArray_Matmul method, that both arrays reside on the same device.
Since we know array "a" is on the GPU, both arrays must be on the GPU. The preprocessor directive that checks for the presence of CUBLAS is therefore redundant: an array cannot be placed in GPU memory in the first place unless CUBLAS is available. The directive has been removed.
Second, following from the first point, both arrays are already in GPU memory, so there is no need to allocate additional device memory and copy them over. The cudaMalloc, cudaMemcpy and cudaFree calls for the input arrays have therefore been removed.
The resulting array has been renamed from d_C to deviceResult to make the code clearer.
These changes led to an increase in the performance of the matmul operation.
Benchmark before changes:
NDArray
Benchmark after changes:
NDArray
PyTorch (for comparison)
Visualization of how the multiplication rate changes as the number of iterations increases, for NDArray and PyTorch.
Note: after the 50th iteration, the speed started to drop for both libraries. This slowdown correlates with the graphics card heating up.