Qiskit / red-queen

Quantum software benchmarking tool
Apache License 2.0

Adding Bar Graph Visualization for games/applications #24

Open danielleodigie opened 2 years ago

danielleodigie commented 2 years ago

This pull request adds bar graph visualization for the applications the QxQ interns are working on. I added to the README, but to run it, after getting your results JSON, run:

python visualization/view.py <JSON FILE> <DATA TYPE>

For example:

python visualization/view.py results/0001_bench.json fidelity

Currently, I only have fidelity and meantime as available data types, but I'm open to adding more, I just wasn't sure which ones would be useful. If you have any suggestions, it would help a ton! :)

Also, I noticed it didn't work as well with the mapping benchmarks, so if you can help with that, that would be awesome
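For readers following along, the metric extraction behind a viewer like this can be sketched as follows. This is a hedged illustration, not the actual `view.py` code: it assumes the results JSON follows the pytest-benchmark layout, with a top-level `"benchmarks"` list where timing statistics live under `"stats"` and custom metrics such as fidelity live under `"extra_info"` (the `extract_metric` helper and the sample data are hypothetical).

```python
# Hypothetical sketch of pulling one data type out of a results JSON.
# Assumed layout (pytest-benchmark style): {"benchmarks": [{"name": ...,
# "stats": {"mean": ...}, "extra_info": {"fidelity": ...}}, ...]}
def extract_metric(results, data_type):
    names, values = [], []
    for bench in results["benchmarks"]:
        if data_type == "meantime":
            value = bench["stats"]["mean"]          # built-in timing stat
        else:
            value = bench.get("extra_info", {}).get(data_type)
        if value is not None:
            names.append(bench["name"])
            values.append(value)
    return names, values

# Fabricated example data in the assumed layout:
sample = {
    "benchmarks": [
        {"name": "bench_bv[qiskit]", "stats": {"mean": 0.12},
         "extra_info": {"fidelity": 0.98}},
        {"name": "bench_bv[tket]", "stats": {"mean": 0.10},
         "extra_info": {"fidelity": 0.95}},
    ]
}
names, fidelities = extract_metric(sample, "fidelity")
```

With the names and values separated like this, adding a new data type is just another branch (or another `extra_info` key), which is why extra metrics beyond fidelity and meantime are cheap to support.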

danielleodigie commented 2 years ago

This is what I get when I run the results from run_bv.py with fidelity:

[screenshot: bar graph of run_bv.py fidelity results]

Lementknight commented 2 years ago

This is the data from the JSON I used: data_table.txt

Lementknight commented 2 years ago

I used the visualization code and it no longer works. I noticed that you said you added dual-file comparison, but that wasn't the issue I was experiencing. My issue was with visualizing multiple fidelities in one JSON file. When you run multiple benchmarks together in one call:

pytest red_queen/games/applications/grovers.py red_queen/games/applications/run_bv.py red_queen/games/applications/run_ft.py --store

the JSON file contains the fidelities of each individual benchmark, so the code you had before the dual-file-comparison commit struggled to visualize multiple benchmark names.

Your code was perfectly fine; it just needed to be slightly modified to handle multiple benchmark names in the fidelity data. I hope this comment clarifies my issue.

danielleodigie commented 2 years ago

> I used the visualization code and it no longer works. I noticed that you said you added dual-file comparison, but that wasn't the issue I was experiencing. My issue was with visualizing multiple fidelities in one JSON file. When you run multiple benchmarks together in one call: `pytest red_queen/games/applications/grovers.py red_queen/games/applications/run_bv.py red_queen/games/applications/run_ft.py --store`, the JSON file contains the fidelities of each individual benchmark, so the code you had before the dual-file-comparison commit struggled to visualize multiple benchmark names.
>
> Your code was perfectly fine; it just needed to be slightly modified to handle multiple benchmark names in the fidelity data. I hope this comment clarifies my issue.

Ahh, okay, I didn't know you could run multiple benchmarks at once; I didn't write this with that in mind. So your problem is that the fidelity plot breaks down when you have multiple different benchmarks in one JSON file?
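One way to cope with several applications landing in the same JSON file is to group the results by benchmark family before plotting. This is a hypothetical sketch, not red-queen code: it assumes pytest-benchmark-style names like `bench_bv[qiskit]`, where everything before the `[` identifies the benchmark and the bracketed part identifies the parametrization.

```python
from collections import defaultdict

# Hypothetical helper: split mixed results (grovers, run_bv, run_ft, ...)
# into one group per benchmark family, so each family can get its own
# subplot or bar cluster instead of one crowded axis.
def group_by_benchmark(names, values):
    groups = defaultdict(list)
    for name, value in zip(names, values):
        family = name.split("[", 1)[0]   # e.g. "bench_bv[qiskit]" -> "bench_bv"
        groups[family].append((name, value))
    return dict(groups)

groups = group_by_benchmark(
    ["bench_bv[qiskit]", "bench_bv[tket]", "bench_grover[qiskit]"],
    [0.98, 0.95, 0.90],
)
```

Grouping first keeps each axis down to a handful of related bars, which sidesteps most of the label-crowding problem described below.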

Lementknight commented 2 years ago

It's more that the names of the benchmarks aren't displayed in a legible manner ([Figure_1]). I believe this could be fixed by redesigning the graph structure, since the data displayed in the visualization is great.

danielleodigie commented 2 years ago

> It's more that the names of the benchmarks aren't displayed in a legible manner ([Figure_1]). I believe this could be fixed by redesigning the graph structure, since the data displayed in the visualization is great.

Oh, it's just how it looks. I rotated the names like that; to see the bottom you may have to go into the subplot configuration tool, but that's alright. [screenshot: bar graph with rotated benchmark names]
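The clipping that forces a trip into matplotlib's subplot configuration dialog can usually be avoided in the script itself. A minimal sketch, assuming the plot is made with matplotlib (the names and values below are fabricated): rotate the tick labels and call `tight_layout()` so the figure reserves room for them automatically.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Fabricated example data standing in for one benchmark family.
names = ["bench_bv[qiskit]", "bench_bv[tket]", "bench_grover[qiskit]"]
fidelities = [0.98, 0.95, 0.90]

fig, ax = plt.subplots()
ax.bar(names, fidelities)
ax.set_ylabel("fidelity")
ax.tick_params(axis="x", rotation=45)  # slant long benchmark names
fig.tight_layout()                     # expand margins so labels aren't clipped
fig.savefig("fidelity.png")
```

`tight_layout()` (or `constrained_layout=True` on `plt.subplots`) recomputes the margins from the rendered label sizes, so the rotated names stay fully visible without any manual adjustment.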