@SilverPoker You could run it with more detailed logging enabled; that might give hints as to whether the GPUs are actually used or not. Just set the environment variable RUST_LOG=debug (or RUST_LOG=trace for even more logging).
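One common reason for RUST_LOG appearing to do nothing is that bellperson emits its messages through the log crate (that is what produces lines like "[2020-01-30T05:51:38Z INFO bellperson::gpu::utils] ..."), so the binary that links it has to install a logger such as env_logger before anything is printed. A minimal sketch of what that initialization looks like, assuming the log and env_logger crates as dependencies (illustrative only, not code taken from the mimc example):

fn main() {
    // env_logger reads the RUST_LOG environment variable (e.g. RUST_LOG=debug
    // or RUST_LOG=trace) and writes matching log records to stderr.
    env_logger::init();

    // From this point on, anything bellperson logs via the `log` macros is
    // visible at the level selected by RUST_LOG.
    log::info!("logger initialized");
}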
@vmx Thanks, I have tried that, but I don't see any extra output after setting RUST_LOG=debug.
My usage:
I changed ./mimc-ff046c5aaaac0b68 directly to RUST_LOG=debug ./mimc-ff046c5aaaac0b68.
Is that correct? Do I need any extra operations/commands to get the debug output?
Also, I am sure the GPU has already been used, since I ran nvidia-smi while the binary was running; the GPU utilization is pretty low, though.
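(For continuously watching utilization while the prover runs, something like watch -n 1 nvidia-smi works; any GPU monitoring tool will do, the exact command is just a suggestion.)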
@vmx Now I am pretty sure it has already used the GPU... but the performance is worse than with the multi-core CPU:
[2020-01-30T05:51:36Z INFO bellperson::multiexp] GPU Multiexp kernel instantiated!
[2020-01-30T05:51:38Z INFO bellperson::gpu::utils] GPU lock file released
[2020-01-30T05:51:39Z INFO bellperson::groth16::prover] Bellperson 0.5.3 is being used!
[2020-01-30T05:51:39Z INFO bellperson::gpu::utils] Creating GPU lock file
[2020-01-30T05:51:39Z INFO bellperson::gpu::utils] GPU lock file acquired
[2020-01-30T05:51:39Z INFO bellperson::gpu::fft] FFT: 1 working device(s) selected.
[2020-01-30T05:51:39Z INFO bellperson::gpu::fft] FFT: Device 0: GeForce GTX 1080 Ti
[2020-01-30T05:51:39Z INFO bellperson::domain] GPU FFT kernel instantiated!
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: 8 working device(s) selected. (CPU utilization: 0)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 0: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 1: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 2: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 3: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 4: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 5: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 6: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::gpu::multiexp] Multiexp: Device 7: GeForce GTX 1080 Ti (Chunk-size: 6167411)
[2020-01-30T05:51:41Z INFO bellperson::multiexp] GPU Multiexp kernel instantiated!
[2020-01-30T05:51:43Z INFO bellperson::gpu::utils] GPU lock file released
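(Side note: the "CPU utilization: 0" in the Multiexp lines above should correspond to bellperson's BELLMAN_CPU_UTILIZATION setting, which splits the multiexp work between CPU and GPU; 0 means no part of that work is given to the CPU. This is from memory of the code, so treat it as an assumption.)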
We now even have experimental support for CUDA, and other things have improved as well. I'd expect the GPU to be faster than a multi-core CPU these days. If it turns out this is still an issue, feel free to re-open this bug report with additional details.
Hi, I am running this library to explore how fast the GPU implementation is. Unfortunately, I found the GPU to be roughly 10 times slower than the CPU, and I want to know why.
My CPU: Intel(R) Xeon(R) Gold 6145 CPU @ 2.00GHz, 80 cores, 377 GB of memory.
My GPU: eight GTX 1080 cards in my server.
I compiled the code twice (once with the gpu feature enabled and once without it), ran the resulting mimc binary in each case, and got the following results (example build commands are sketched after the numbers):
Run on GPU (gpu feature enabled):
Creating parameters...
Creating proofs...
test test_mimc ...
test test_mimc has been running for over 60 seconds
Average proving time: 4.691798915s
Average verifying time: 0.164126917s
Batch verification of 50 proofs: 0.074657728s (0.00149316052s/proof)

Run on CPU:
Creating parameters...
Creating proofs...
Average proving time: 0.540332641s
Average verifying time: 0.194030763s
Batch verification of 50 proofs: 0.184567745s (0.00369135856s/proof)
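For reference, those averages put the GPU proving run at roughly 4.69 s / 0.54 s ≈ 8.7 times the CPU proving time, so "roughly 10 times slower" above is about right.

As for how the two builds were produced: presumably the usual cargo feature flags, i.e. something like cargo test --release --features gpu mimc for the GPU run and the same command without --features gpu for the CPU-only run. The exact invocation is an assumption; only the stripped test binary name ./mimc-ff046c5aaaac0b68 appears above.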
Is that reasonable? If it is correct and expected, I suppose the reason is that my CPU has a lot of cores? Or is it not reasonable at all?