unitaryfund / qrack

Comprehensive, GPU-accelerated framework for developing universal virtual quantum processors
https://qrack.readthedocs.io/en/latest/
GNU Lesser General Public License v3.0

Update benchmarks #447

Closed · WrathfulSpatula closed this 3 years ago

WrathfulSpatula commented 4 years ago

When we received a grant from the Unitary Fund, our stated intention was to complete a set of benchmarks that would be generally meaningful across quantum computer simulators, native hardware, and practical high-performance computing. We made a lot of progress, but several things put us off-track:

- a sustained focus on internal performance work for Qrack, rather than external integrations;
- a bug in our Sycamore circuit implementation;
- the disruption of COVID-19.

All three of these issues have been settled: I've begun to focus on external integrations rather than internal performance for Qrack, we fixed the Sycamore circuit bug, and I have settled into a new normal under COVID-19, so I now have the capacity to finish a well-rounded set of benchmarks.

I apologize for the recent delay in updating benchmarks, but at least we've made a ton of progress on the framework in the meantime. The first priority for Qrack, over the next few weeks, is completing our planned program of benchmarks. (I can run these passively while I tinker with the Unity3D and Q# integrations via the DLL, without fear of invalidating the benchmarks.)
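For context, the kind of timing loop a benchmark pass performs looks roughly like the following. This is a hypothetical sketch, not the suite's actual harness; it assumes Qrack's C++ `CreateQuantumInterface` factory and `QINTERFACE_OPTIMAL` engine selection from `qfactory.hpp`:

```cpp
// Hypothetical benchmark-style timing loop (illustration only):
// time one layer of H gates plus a CNOT chain at increasing widths.
#include "qfactory.hpp"

#include <chrono>
#include <iostream>

int main()
{
    for (int width = 4; width <= 24; width += 4) {
        Qrack::QInterfacePtr qReg = Qrack::CreateQuantumInterface(
            Qrack::QINTERFACE_QUNIT, Qrack::QINTERFACE_OPTIMAL, width, 0);

        auto start = std::chrono::high_resolution_clock::now();

        for (int i = 0; i < width; ++i) {
            qReg->H(i);
        }
        for (int i = 1; i < width; ++i) {
            qReg->CNOT(i - 1, i);
        }
        qReg->MAll(); // terminal measurement of all qubits

        auto elapsed = std::chrono::high_resolution_clock::now() - start;
        std::cout << width << " qubits: "
                  << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
                  << " us" << std::endl;
    }

    return 0;
}
```

(A real benchmark pass would average over many repetitions and randomized circuits; this only shows the shape of the measurement.)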

WrathfulSpatula commented 3 years ago

On 2/13/21, it's time to finalize a good round of benchmarks. We had previously used the AWS g3s.xlarge for Qrack benchmarks, but I think it is no longer our most cost-efficient option. QUnit and "hybridization" between CPU, GPU, and "stabilizer" formalism make good use of at least a modestly sized CPU alongside a GPU, and the g3s pairs too small a CPU with its GPU for our purposes. (In terms of cost efficiency, while the g3s did previously seem to be the lowest cost-to-throughput option we had on AWS despite this, it also comes at a premium relative to the cost/performance trend line of the g3 series.) We probably want a single-GPU VM, but one with a larger CPU than the g3s, and I'm pricing out options now.
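To make the "hybridization" point concrete, here is a minimal sketch of instantiating the layered simulator stack, assuming the `CreateQuantumInterface` factory and engine enums in `qfactory.hpp` (an illustration, not prescribed usage):

```cpp
// Sketch only: QUnit layered over the stabilizer/state-vector hybrid, which
// itself dispatches between CPU and GPU (OpenCL) engines by register width.
#include "qfactory.hpp"

#include <iostream>

int main()
{
    Qrack::QInterfacePtr qReg = Qrack::CreateQuantumInterface(
        Qrack::QINTERFACE_QUNIT, Qrack::QINTERFACE_STABILIZER_HYBRID,
        20, 0); // 20 qubits, initialized to |00...0>

    // Clifford-only circuits like this GHZ preparation can stay in the
    // (CPU-bound) stabilizer formalism; non-Clifford gates force
    // state-vector simulation, which falls to the GPU at larger widths.
    qReg->H(0);
    for (int i = 1; i < 20; ++i) {
        qReg->CNOT(i - 1, i);
    }

    std::cout << "Qubit 0 measured: " << (int)qReg->M(0) << std::endl;

    return 0;
}
```

This is why a too-small CPU bottlenecks the stack: the stabilizer and narrow-width paths run on the CPU even when a GPU is present.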

WrathfulSpatula commented 3 years ago

The g4dn series looks like the way to go. AWS advertises it as the lowest-cost set of options for small-scale training (basically single-GPU, in our case). We have some credits "to burn," so I'll take the opportunity to experiment with multi-accelerator and maybe even bare-metal options in the broader g4 series as well, right now.

WrathfulSpatula commented 3 years ago

We have since updated the benchmarks in the documentation. (Since that update, #697 hopefully even improves mid-range performance, but with the caveat that this depends on total QUnit qubit width, rather than sub-unit widths.)
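For readers unfamiliar with the distinction: QUnit keeps separable qubits in independent sub-units, so a wide register whose entanglement stays confined to a few qubits never allocates a full-width state vector. A hedged sketch of "total width" versus "sub-unit width," under the same `qfactory.hpp` API assumptions as above:

```cpp
// Illustration of total QUnit width vs. sub-unit width (sketch only).
#include "qfactory.hpp"

#include <iostream>

int main()
{
    const int totalWidth = 30; // total QUnit qubit width
    Qrack::QInterfacePtr qReg = Qrack::CreateQuantumInterface(
        Qrack::QINTERFACE_QUNIT, Qrack::QINTERFACE_OPTIMAL, totalWidth, 0);

    // Entangle only qubits 0 through 2; the other 27 qubits remain in
    // separate single-qubit sub-units, so no 2^30-amplitude state vector
    // is ever allocated, even though the "total width" is 30.
    qReg->H(0);
    qReg->CNOT(0, 1);
    qReg->CNOT(1, 2);

    std::cout << "Qubit 2 measured: " << (int)qReg->M(2) << std::endl;

    return 0;
}
```

Per the comment above, the #697 change keys off the total width (30 here), not the 3-qubit entangled sub-unit, which is the caveat noted.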