Hi developers, I want to use the GPU to accelerate matrix operations, but I've run into a problem and hope to get your help.
I did a test to verify that GPU matrix speedup worked.
My code looks like this:
// CPU multiply
Mat mat1 = new Mat(1080, 720, CV_8UC1, new Scalar(191));
Mat mat2 = new Mat(1080, 720, CV_8UC1, new Scalar(191));
Mat result = new Mat();
long start = System.currentTimeMillis();
for (int i = 0; i < 1000; i++) {
    opencv_core.multiply(mat1, mat2, result);
}
long timeConsume = System.currentTimeMillis() - start;
log.info("cpu multiply time consume:{}", timeConsume);

// GPU multiply
UMat uMat1 = new UMat(1080, 720, CV_8UC1, new Scalar(191));
UMat uMat2 = new UMat(1080, 720, CV_8UC1, new Scalar(191));
UMat uResult = new UMat();
long uStart = System.currentTimeMillis();
for (int i = 0; i < 1000; i++) {
    opencv_core.multiply(uMat1, uMat2, uResult);
}
long uTimeConsume = System.currentTimeMillis() - uStart;
log.info("gpu multiply time consume:{}", uTimeConsume);
// close ...
I ran 1000 multiplications with both Mat and UMat to demonstrate the GPU's ability to accelerate matrix operations. On Windows, UMat is 2-3 times faster than Mat. But on Linux, the GPU doesn't seem to be working: the two are almost the same speed.
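One thing worth verifying before trusting the timings (a minimal sketch, assuming the Bytedeco JavaCPP presets, where OpenCV's `cv::ocl` functions are exposed on `org.bytedeco.opencv.global.opencv_core`): the UMat path only runs on the GPU when OpenCV's OpenCL backend finds a usable device; otherwise it silently falls back to the CPU, which would explain identical times on Linux. OpenCL kernels are also queued asynchronously, so a fair benchmark should synchronize before reading the clock.

```java
import org.bytedeco.opencv.global.opencv_core;

public class OpenCLCheck {
    public static void main(String[] args) {
        // If this prints false, OpenCV found no OpenCL runtime/device,
        // and every UMat operation falls back to the CPU path.
        System.out.println("haveOpenCL: " + opencv_core.haveOpenCL());
        System.out.println("useOpenCL:  " + opencv_core.useOpenCL());

        // Explicitly request the OpenCL backend (a no-op if unavailable).
        opencv_core.setUseOpenCL(true);

        // After a timed UMat loop, flush the OpenCL queue so the
        // measurement includes the actual GPU work:
        // opencv_core.finish();
    }
}
```

If `haveOpenCL()` returns false inside the container, the OpenCL driver or ICD loader is likely missing from the image rather than anything being wrong with the benchmark itself.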
The configuration of my GPU on Linux is like this:
My maven dependencies look like this:
My Docker container is CentOS 8. Did I use something incorrectly or misunderstand something? Looking forward to your reply!