JacobHast opened 3 months ago
This also directly affects the speed of .normalize. The code below also tests calling .to_fock() before .probability, which doesn't improve things.
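Since normalization divides the Fock amplitudes by the square root of the total probability, the same fast NumPy path would speed up .normalize as well. A minimal NumPy-only sketch of that idea (illustrative names, no MrMustard calls):

```python
import numpy as np

def fast_normalize(fock_coeffs: np.ndarray) -> np.ndarray:
    """Normalize a Fock-basis amplitude vector with a direct NumPy norm.

    Hypothetical helper: this mirrors what a faster .normalize could do
    internally; it is not MrMustard API.
    """
    prob = np.sum(np.abs(fock_coeffs) ** 2)  # total probability
    return fock_coeffs / np.sqrt(prob)

coeffs = np.random.default_rng(0).random(1000)
normalized = fast_normalize(coeffs)
# The normalized vector has unit total probability.
assert np.isclose(np.sum(np.abs(normalized) ** 2), 1.0)
```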
import mrmustard.lab_dev as mm
import numpy as np
from mrmustard import settings
import timeit
settings.AUTOSHAPE_MAX = 1000
def make_state() -> mm.Ket:
    state = mm.Ket.from_fock([0], np.random.random(settings.AUTOSHAPE_MAX))
    return state

STATE = make_state()

def probability_np() -> float:
    state_fock = STATE.fock(settings.AUTOSHAPE_MAX)
    return np.sum(np.abs(state_fock) ** 2)

def probability_buildin() -> float:
    return STATE.probability

def probability_buildin_to_fock() -> float:
    return STATE.to_fock().probability
%timeit probability_np() # 22.4 µs
%timeit probability_buildin() # 218 µs
%timeit probability_buildin_to_fock() # 371 µs
assert np.isclose(probability_np(), probability_buildin())
assert np.isclose(probability_np(), probability_buildin_to_fock())
Before posting a feature request
Feature details
The calculation of probabilities in lab_dev for states in the Fock representation appears relatively slow: it can be sped up by a factor of roughly 10 with the simple NumPy code shown in the example above.
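For reference, the fast path is just the squared l2 norm of the Fock amplitudes, so several equivalent NumPy one-liners exist; a small sketch (NumPy only, illustrative variable names):

```python
import numpy as np

rng = np.random.default_rng(1)
# Complex Fock amplitudes of a single ket, cutoff 1000.
coeffs = rng.random(1000) + 1j * rng.random(1000)

p_sum = np.sum(np.abs(coeffs) ** 2)    # the version timed above
p_vdot = np.vdot(coeffs, coeffs).real  # conjugating dot product, avoids the abs temporary
p_norm = np.linalg.norm(coeffs) ** 2   # squared l2 norm

# All three agree.
assert np.isclose(p_sum, p_vdot) and np.isclose(p_sum, p_norm)
```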
Implementation
No response
How important would you say this feature is?
2: Somewhat important. Needed this quarter.
Additional information
While calculating the probability of a single state is quite fast, I sometimes need to do it for more than 100,000 states, at which point this step becomes the numerical bottleneck, so a speedup would be very useful. I have only done the comparison for Ket states, not DM.
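For the many-states use case, the NumPy path also vectorizes: stacking the Fock amplitudes of all kets into one 2-D array gives every probability in a single call instead of a Python loop over .probability. A sketch under that assumption (shapes and names are illustrative):

```python
import numpy as np

# Stack of many Fock-basis kets: shape (n_states, cutoff).
rng = np.random.default_rng(3)
states = rng.random((100_000, 200))

# One vectorized contraction computes every probability at once:
# probs[i] = sum_n |states[i, n]|^2.
probs = np.einsum("ij,ij->i", states, states.conj()).real

assert probs.shape == (100_000,)
assert np.allclose(probs, np.sum(np.abs(states) ** 2, axis=1))
```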