JacobHast opened 2 weeks ago
I think this is related to the fact that Fock objects get converted to Bargmann, which might not be the optimal thing to do here (i.e. it appears faster to convert the Dgate to Fock instead).

Here's a modified version of the above code, which generates new states on every attempt in order to rule out speed improvements from caching:
```python
import mrmustard.lab_dev as mm
import numpy as np
from mrmustard import settings


def make_state_and_operator():
    state = mm.Ket.from_fock([0], np.random.random(10)).normalize()
    operator = mm.Dgate([0], x=1)
    return state, operator


def expectation_matmul() -> complex:
    state, operator = make_state_and_operator()
    state_fock = state.fock(settings.AUTOSHAPE_MAX)
    operator_fock = operator.fock(settings.AUTOSHAPE_MAX)
    return state_fock.T.conj() @ operator_fock @ state_fock


def expectation_builtin() -> complex:
    state, operator = make_state_and_operator()
    return state.expectation(operator)


%timeit expectation_matmul()
%timeit expectation_builtin()
%timeit make_state_and_operator()
```
Here I also test calling `.to_fock()` on the operator before taking the expectation value. This is faster, but still slower than the direct multiplication:
```python
import mrmustard.lab_dev as mm
import numpy as np
from mrmustard import settings


def make_state_and_operator():
    state = mm.Ket.from_fock([0], np.random.random(10)).normalize()
    operator = mm.Dgate([0], x=1)
    return state, operator


def expectation_matmul() -> complex:
    state, operator = make_state_and_operator()
    state_fock = state.fock(settings.AUTOSHAPE_MAX)
    operator_fock = operator.fock(settings.AUTOSHAPE_MAX)
    return state_fock.T.conj() @ operator_fock @ state_fock


def expectation_builtin() -> complex:
    state, operator = make_state_and_operator()
    return state.expectation(operator)


def expectation_builtin_to_fock() -> complex:
    state, operator = make_state_and_operator()
    return state.expectation(operator.to_fock())


%timeit expectation_matmul()
%timeit expectation_builtin()
%timeit expectation_builtin_to_fock()
%timeit make_state_and_operator()
```
Feature details
When calculating displacement-operator expectation values of pure states in the Fock representation, the current implementation is ~10x slower than extracting the state vector and operator matrix and multiplying them together manually. So it seems something is limiting the speed of the built-in method.
Implementation
If the current method cannot be sped up, one could use bare matrix multiplication:
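A minimal sketch of what the bare matrix-multiplication fallback could look like, using plain NumPy arrays as stand-ins for the Fock-space objects (the function name and the identity operator used here are purely illustrative, not part of the MrMustard API):

```python
import numpy as np


def expectation_via_matmul(state_vec: np.ndarray, operator_mat: np.ndarray) -> complex:
    """Compute <psi|O|psi> by direct matrix multiplication on Fock-space arrays."""
    return complex(state_vec.conj().T @ operator_mat @ state_vec)


# Illustrative stand-ins: a normalized random state and the identity operator.
cutoff = 10
psi = np.random.random(cutoff) + 1j * np.random.random(cutoff)
psi /= np.linalg.norm(psi)
op = np.eye(cutoff, dtype=complex)

# For the identity operator on a normalized state this returns ~1.0.
print(expectation_via_matmul(psi, op))
```

The library could fall back to this path whenever both the state and the operator already have Fock-space array representations available, avoiding the round-trip through Bargmann.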
How important would you say this feature is?
2: Somewhat important. Needed this quarter.