qick_tprocv2_experiments_mux/section_008_save_data_to_h5.py

Optimizing saving data to h5 file:

h5py will by default convert everything to a NumPy array before writing, so it may speed things up to use NumPy arrays from the outset. This would require changes to qick_tprocv2_experiments_mux/round_robin_benchmark.py in the data container dictionary definitions: replace `[None]*save_r` with `np.empty(save_r)`, probably using the `dtype` argument if it gives you trouble without it. (Note that currently you're redefining this method every time the loop iterates.) You could also do `numpy.array(['None']*6)`.
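A minimal sketch of the preallocation, assuming the entries are per-shot floats plus some text-like fields (the variable names here are illustrative, not the ones in the repo):

```python
import numpy as np

save_r = 100  # save interval, standing in for the repo's save_r

# Preallocate once, before the loop, instead of rebuilding [None]*save_r
# on every iteration:
t1_vals = np.empty(save_r, dtype=np.float64)  # pass dtype explicitly if the
                                              # default gives you trouble
stamps = np.empty(save_r, dtype='S32')        # fixed-width byte strings, which
                                              # h5py accepts directly
```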
Furthermore, I might consider writing a generator method that you pass a list of keys (strings), `save_r`, and `num_Qs` (the number of qubits to make space for), and that returns a dictionary built from those. It will help clean up the repeated code in the preamble.
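Something along these lines; the key -> qubit -> array layout is my guess at the container shape, not the repo's exact structure:

```python
import numpy as np

def make_data_container(keys, save_r, num_qs):
    """Build the whole data dictionary from a list of key names,
    replacing the repeated literal definitions in the preamble."""
    return {key: {q: np.full(save_r, np.nan) for q in range(num_qs)}
            for key in keys}

# Usage, with made-up key names:
data = make_data_container(['T1', 'I', 'Q', 'Dates'], save_r=100, num_qs=6)
```

Filling with NaN rather than leaving `np.empty` garbage also makes unwritten slots easy to spot, which ties into the reset point further down.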
Lines 57-118: This isn't bad from an optimization standpoint, but it isn't robust to new data classes. One might consider pulling the keys from the dictionary directly, then iterating over them to get the data.
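For example (standard h5py calls; the container layout follows the sketch above):

```python
import h5py
import numpy as np

data = {'T1': {0: np.full(100, np.nan)}}   # stand-in for the real container

with h5py.File('run_0001.h5', 'w') as f:
    for key, per_qubit in data.items():    # keys come from the dict itself,
        grp = f.create_group(key)          # so a new data class needs no
        for q, arr in per_qubit.items():   # new save code
            grp.create_dataset(f'Q{q}', data=arr)
```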
Currently you're saving to a new h5 file every `save_r` loop iterations. One should test whether it's faster to add a new data group to an existing h5 file instead of creating a new one each time. I suspect the way you're doing it is actually faster, but it's worth a check.
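The variant to benchmark would look roughly like this, with one long-lived file opened in append mode and a group added per save cycle (file and group names are made up):

```python
import h5py
import numpy as np

batch_idx = 0                 # would come from the outer loop
arr = np.full(100, np.nan)    # stand-in payload

# Append a new group instead of creating a fresh file every save_r iterations:
with h5py.File('all_runs.h5', 'a') as f:
    grp = f.create_group(f'batch_{batch_idx:04d}')
    grp.create_dataset('T1_Q0', data=arr)
```

Timing both variants with `time.perf_counter()` over a few hundred cycles should settle it.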
It may be worth having a method (or adding this to an existing one) that resets the dictionary to `None`s again, so that there's no chance old data is propagated forward by bugs elsewhere.
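A sketch, reusing the layout assumed above, with NaN playing the role of `None` for preallocated float arrays; alternatively, the builder method from the earlier point could simply be called again after each save:

```python
import numpy as np

def reset_data_container(data, save_r):
    """Overwrite every array with NaN after a save, so stale values
    can't silently carry over into the next save_r cycle."""
    for per_qubit in data.values():
        for q in per_qubit:
            per_qubit[q] = np.full(save_r, np.nan)
```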