danielleodigie opened 2 years ago
This is what I get when I run the results from run_bv.py with fidelity:
This is the data from the JSON I used: data_table.txt
I used the visualization code and it no longer works. I noticed that you said that you added dual file comparison, but that wasn't the issue that I was experiencing. The issue that I had was with visualization of multiple fidelities in one JSON file. When you run multiple benchmarks together in one call:
pytest red_queen/games/applications/grovers.py red_queen/games/applications/run_bv.py red_queen/games/applications/run_ft.py --store
The JSON file has the fidelities of each individual benchmark, so the code you had prior to the dual file comparison commit struggled to visualize multiple benchmark names.
Your code was perfectly fine; it just needed to be slightly modified to deal with multiple concurrent benchmark names in the fidelity data. I hope that this comment offers clarity on my issue.
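For reference, here's a minimal sketch of the kind of grouping I mean. It assumes a pytest-benchmark-style results file with a top-level `"benchmarks"` list where each entry carries a `"name"` and a fidelity value; the exact key layout (e.g. `"extra_info"` holding `"fidelity"`) is a hypothetical stand-in, not necessarily what red-queen writes:

```python
import json
from collections import defaultdict

def fidelities_by_benchmark(path):
    """Group fidelity values by benchmark name from a results JSON.

    Assumes a pytest-benchmark-style layout: a top-level "benchmarks"
    list of entries with a "name" and a fidelity number (the exact
    nesting under "extra_info" is a guess for illustration).
    """
    with open(path) as f:
        data = json.load(f)
    groups = defaultdict(list)
    for bench in data.get("benchmarks", []):
        fidelity = bench.get("extra_info", {}).get("fidelity")
        if fidelity is not None:
            groups[bench["name"]].append(fidelity)
    return dict(groups)
```

With all three benchmarks stored in one file, this gives you one list of fidelities per benchmark name instead of one flat pile, which is what the plotting code needs to label things correctly.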
Ahh, okay, I didn't know you could run multiple benchmarks at once; I didn't write this with that in mind. So your problem is that the fidelity visualization breaks down when you have multiple different benchmarks in one JSON file?
It's more so that the names of the benchmarks aren't displayed in a legible manner. I believe this could be fixed by redesigning the graph structure, since the data displayed in the visualization is great.
Oh, it's just how it looks. I rotated the names like that; to see the bottom you may have to go into the subplot config thingy, but that's alright.
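For anyone hitting the clipped-label issue, this is the usual matplotlib pattern: rotate the tick labels and let `tight_layout` reserve room so the names aren't cut off at the bottom. The benchmark names and fidelity values here are made up just to show the rotation:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical benchmark names and fidelities, just to demo the labels.
names = ["grovers_5q", "run_bv_7q", "run_ft_3q"]
fidelities = [0.93, 0.88, 0.97]

fig, ax = plt.subplots()
ax.bar(names, fidelities)
ax.set_ylabel("fidelity")

# Rotate the long benchmark names so they stay legible, then have
# tight_layout expand the bottom margin so nothing is clipped.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
fig.tight_layout()
fig.savefig("fidelity.png")
```

With `tight_layout()` in place you shouldn't need to open the subplot configuration dialog by hand to see the bottom of the labels.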
This pull request adds bar graph visualization for the applications the QxQ interns are working on. I added to the README, but to run it, after getting your results JSON, run:
For example:
Currently, I only have `fidelity` and `meantime` as available data types, but I'm open to adding more; I just wasn't sure which ones would be useful. If you have any suggestions, it would help a ton! :)

Also, I noticed it didn't work as well with the mapping benchmarks, so if you can help with that, that would be awesome.
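One hypothetical way to make new data types cheap to add is to keep the supported metrics in a single lookup table, so a new one (say, circuit depth) is a one-line change. The keys and fields below are illustrative, not the PR's actual structure:

```python
# Hypothetical registry of supported data types; adding a new metric
# means adding one entry here rather than touching the plotting code.
METRICS = {
    "fidelity": {"label": "Fidelity", "higher_is_better": True},
    "meantime": {"label": "Mean time (s)", "higher_is_better": False},
}

def get_metric(name):
    """Look up a metric, with a clear error for unsupported names."""
    try:
        return METRICS[name]
    except KeyError:
        raise ValueError(
            f"Unsupported data type {name!r}; choose from {sorted(METRICS)}"
        )
```

The `higher_is_better` flag is just an example of per-metric info the plots could use, e.g. to decide sort order.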