Krastanov opened this issue 2 years ago
Here is an attempt at a rather ugly and inefficient solution:

```julia
function drop_singular_bases(ket)
    b = tensor([b for b in basis(ket).bases if length(b) > 1]...)
    return Ket(b, ket.data)
end

function project_and_drop(state, project_on, basis_index)
    singularbasis = GenericBasis(1)
    singularket = basisstate(singularbasis, 1)
    proj = projector(singularket, project_on')
    basis_r = collect(Any, basis(state).bases)
    basis_l = copy(basis_r)
    basis_l[basis_index] = singularbasis
    emproj = embed(tensor(basis_l...), tensor(basis_r...), basis_index, proj)
    result = emproj * state
    return drop_singular_bases(result)
end
```
Tested here:

```julia
julia> b1, b2 = bases = [FockBasis(i) for i in [3,4]];

julia> k1, k2 = [fockstate(b,i) for (i,b) in enumerate(bases)];

julia> project_and_drop(k1⊗k2, k2, 2) == k1
true

julia> project_and_drop(k1⊗k2, k1, 1) == k2
true
```
Is it important for you to get the state |c⟩ from |b⟩⊗|c⟩, or would it be sufficient to get the density matrix |c⟩⟨c|? Because then you could simply use the function ptrace().

In principle you could also obtain the state |c⟩ from the density matrix, but it might be annoying to get the right phases.
Sorry, it is also quite easy to get the state from the density matrix. You just need to calculate the eigenstates of ρ and there is only one with corresponding eigenvalue λ=1.
```julia
using QuantumOptics

b_fock = FockBasis(2)
b_spin = SpinBasis(1//2)
b = b_fock ⊗ b_spin

ψα = coherentstate(b_fock, 0.1im + 0.15im)
ψ0 = ψα ⊗ spindown(b_spin)
ρ0 = ψ0 ⊗ dagger(ψ0)
ρα = ptrace(ρ0, 2)

function get_ψ_from_ρ(ρ)
    λs, ψs = eigenstates(ρ)
    it = findfirst(isone, round.(λs))
    return ψs[it]
end

println(ψα)
println(get_ψ_from_ρ(ρα))
```
Note that you get some inaccuracies due to the numerics.
@ChristophHotter, using a density matrix for an otherwise vector-like operation is slow, especially given that you are adding matrix diagonalization to it: it is 500x slower and allocates 100x more memory in the following example.
```julia
bases = [FockBasis(i) for i in 3:6]
states = [fockstate(b,i) for (i,b) in enumerate(bases)]
bigstate = tensor(states...)
```
```julia
julia> @benchmark project_and_drop(bigstate, states[1], 1)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  19.226 μs …  4.224 ms  ┊ GC (min … max): 0.00% … 95.23%
 Time  (median):     20.669 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   23.646 μs ± 87.574 μs  ┊ GC (mean ± σ):  7.99% ± 2.15%

  ▄▆▇██▇▆▅▄▃▃▂▂▂▂▁▁▁ ▁▁▁▁▁▁▁▁▂▁▁▁▁                            ▂
  ▇█████████████████████████████████████▇█▇▇▆▆▇▅▅▅▅▅▄▅▄▄▄▄▅▅▅ █
  19.2 μs       Histogram: log(frequency) by time      34.3 μs <

 Memory estimate: 32.56 KiB, allocs estimate: 109.

julia> @benchmark get_ψ_from_ρ(ptrace(bigstate, 1))
BenchmarkTools.Trial: 536 samples with 1 evaluation.
 Range (min … max):  8.078 ms … 15.723 ms  ┊ GC (min … max): 0.00% … 12.13%
 Time  (median):     8.870 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   9.316 ms ±  1.203 ms  ┊ GC (mean ± σ):  1.37% ± 4.44%

  ▃▃▁▂▂▁▅█▆▄▁▂▂▁▂▁ ▁
  █████████████████▇▇▁▇▅▄▁▇▅▁▆▅▅▆▅██▆▆▅▆▅▁▁▁▄▁▄▁▄██▄▁▁▁▄▁▁▁▄ ▇
  8.08 ms       Histogram: log(frequency) by time      13.7 ms <

 Memory estimate: 3.61 MiB, allocs estimate: 879.
```
Yes, you are completely right. Using the density matrix doesn't make much sense.

However, I think there is an easy solution for your problem. Product states are created with the kron() function; for the two states |a⟩ = [a1, a2] and |b⟩ = [b1, b2] we get |a⟩⊗|b⟩ = [a1*b1, a2*b1, a1*b2, a2*b2]. To obtain |a⟩ from this product state we just need the first two elements of the vector ([a1*b1, a2*b1]) and normalize it. To get |b⟩ we need elements 1 and 3 ([a1*b1, a1*b2]) and normalize it. Normalizing basically just divides out the common factor (b1 in the vector for |a⟩ and a1 in the vector for |b⟩). This means we just need to find the right elements of the state.

I wrote a function productstate_kets() that gives you all Kets from a product state:
```julia
using QuantumOptics

b1, b2 = [FockBasis(i) for i in [2,3]]
b3 = SpinBasis(1//2)
b = tensor(b1, b2, b3)

c1 = coherentstate(b1, 0.3 + 0.2im)
c2 = coherentstate(b2, 0.4 + 0.1im)
s1 = spinup(b3)
k = c1⊗c2⊗s1

function productstate_kets(k::Ket)
    basis = k.basis
    isa(basis, CompositeBasis) || return k
    bases = basis.bases
    kets = Ket[]
    push!(kets, normalize(Ket(bases[1], k.data[1:length(bases[1])])))
    for it_b = 2:length(bases)
        step = prod(length(bases[i]) for i=1:it_b-1)
        indices = [1 + step*i for i=0:length(bases[it_b])-1]
        ket_it = normalize(Ket(bases[it_b], k.data[indices]))
        push!(kets, ket_it)
    end
    return kets
end

productstate_kets(k)
productstate_kets(c2)
```
Is this what you need?
This is similar to what I am trying to do, thanks! However, I want to remove only one of the subsystems, not split all the subsystems. Moreover, the assumption that the state is factorizable is not something I can afford. It becomes factorizable only after the projection on ⟨a|.

I think you are doing |b⟩⊗|c⟩ → (|b⟩, |c⟩), while I want to do (⟨a|⊗Id)(|b⟩⊗|c⟩) → ⟨a|b⟩ |c⟩. I want to project the b subsystem. Actually, c might not even be a single subsystem; rather, it can be a non-factorizable state of multiple systems |c⟩ = |d1⟩⊗|e1⟩ + |d2⟩⊗|e2⟩.

Even worse, I would need to perform this for an explicitly non-factorizable |BC⟩ = |b1⟩⊗|c1⟩ + |b2⟩⊗|c2⟩. What is a good way to obtain (⟨a|⊗Id)|BC⟩ → ⟨a|b1⟩ |c1⟩ + ⟨a|b2⟩ |c2⟩?
It would be awesome if I could specify ⟨a| without having to create a matrix projector for it as I have done in my first solution above. That way the operation would be as fast as it gets.

This type of subsystem projection is pretty common in (memory-efficient) Monte Carlo simulations of small quantum computing devices. E.g. we have 5 transmons, all of them entangled in a non-factorizable state, and we want to measure only one of them. Postselecting on the measurement result would give us 4-qubit kets. For qubits, that is a 2x memory saving, but even if we did not care about the memory savings, the speedup compared to creating (even sparse) projector matrices would be significant.

At this point, I should also ask whether you would accept a PR providing a function called partial_project or something similar, which is the "postselection"/"ket" equivalent of the partial trace, as discussed in the paragraph above?
Just wanted to explain this in one more way: if the partial trace is ptrace(ρ) = ∑ᵢ ⟨bᵢ|ρ|bᵢ⟩ and ptrace(|ψ⟩) = ∑ᵢ ⟨bᵢ|ψ⟩⟨ψ|bᵢ⟩, then I simply want access to ⟨bᵢ|ρ|bᵢ⟩ and ⟨bᵢ|ψ⟩ for each i (where I specify the basis vectors in which the trace is computed).
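For intuition, the ⟨bᵢ|ψ⟩ operation above can be sketched with plain arrays, without QuantumOptics and without ever building a projector matrix. The function name `partial_project_data` is hypothetical, not library API; the only assumption taken from QuantumOptics.jl is its tensor ordering, where `tensor(k1, k2).data == kron(k2.data, k1.data)`, i.e. the first subsystem varies fastest in the flat data vector:

```julia
using LinearAlgebra

# Hypothetical sketch: project subsystem `idx` of a flat state vector
# onto the bra ⟨a| by contracting one array axis, no projector matrix.
# `dims` holds the subsystem dimensions in QuantumOptics.jl order.
function partial_project_data(data::Vector, dims::Vector{Int}, a::Vector, idx::Int)
    t = reshape(data, Tuple(dims))           # one array axis per subsystem
    # contract axis `idx` with conj(a): ∑ₖ conj(a[k]) * t[…, k, …]
    contracted = mapslices(v -> dot(a, v), t; dims=idx)
    return vec(contracted)                   # flat state on the remaining subsystems
end

# Example: |ψ⟩ = |b⟩⊗|c⟩ in QO ordering; projecting subsystem 1 on ⟨b|
# should return |c⟩ up to the factor ⟨b|b⟩ = 1.
b = normalize([1.0, 2.0]); c = normalize([3.0, 4.0, 5.0])
ψ = kron(c, b)                               # QO convention: kron(second, first)
result = partial_project_data(ψ, [2, 3], b, 1)
```

Because the contraction is linear, the same sketch works unchanged on non-factorizable states such as |BC⟩ = |b1⟩⊗|c1⟩ + |b2⟩⊗|c2⟩.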
I think a good solution would be to implement tensor products of kets with operators, and also bras with operators, such that you can create ⟨a|⊗Id. But I am not sure whether this could cause some problems.
@david-pl do you think this is a bad idea?
@Krastanov Would this solve your issue?
@ChristophHotter I'm not sure if that actually solves the problem. Also, I think this is basically the same as creating a 1D basis as @Krastanov did in his initial example. The proj variable that is created inside the project_and_drop function is a 1xN "matrix" (so really just the Bra), which allows building up tensor products anyway.

@Krastanov If you use LazyTensors inside your project_and_drop function instead of embed, you only really have to store this 1xN projector, so basically just your state. I guess that would be as memory efficient as it gets. If the state you project on doesn't change, you might want to separate the functions so you can create the projector just once and apply the drop_singular_bases function separately, which should be much cheaper.
Here's a slightly optimized version of your code:

```julia
function project_and_drop(state, project_on, basis_index)
    singularbasis = GenericBasis(1)
    singularket = basisstate(singularbasis, 1)
    proj = Operator(singularbasis, project_on.basis, project_on.data')
    bases = state.basis.bases
    basis_l = tensor((i==basis_index ? singularbasis : bases[i] for i=1:length(bases))...)
    emproj = LazyTensor(basis_l, state.basis, basis_index, proj, 1.0)
    result = emproj * state
    return drop_singular_bases(result)
end
```
@david-pl, thanks, this will be very useful (and I should probably look at the LazyTensor source code).

@ChristophHotter, the general tensor object you suggest might be too much for QuantumOptics, but it would certainly be interesting if such helper functions existed for mixing QuantumOptics and the various tensor network libraries that are popping up.
It would be nice to include a function like this in QO.jl. Perhaps call it partial_project()?
I have some very simple code to handle tensor products of bras and kets with operators, for example:

```julia
function tensor(a::AbstractOperator, b::Bra)
    # upgrade the bra to an operator that projects onto a dim-1 space
    b_op = Operator(GenericBasis(1), basis(b), reshape(b.data, (1, :)))
    ab_op = tensor(a, b_op)
    # squeeze out the trivial dimension
    Operator(a.basis_l, ab_op.basis_r, ab_op.data)
end
```

This allows us to do things like identityoperator(b1) ⊗ basisstate(b2), the result being an operator with left basis b1 and right basis b1 ⊗ b2. Happy to submit a PR.
The SVD of the coefficient matrix obtained by reshaping the state vector (i.e. the Schmidt decomposition) will directly give both ket vectors, as the singular vectors corresponding to the singular value 1. This would not generate a density matrix. Since there is only one nonzero singular value, there may be an even more efficient way of doing this.
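That SVD idea can be sketched with plain arrays (no QuantumOptics; the only assumption is QuantumOptics.jl's ordering, where the product state's data is `kron(c, b)`):

```julia
using LinearAlgebra

# Sketch: recover the factors of a product state |ψ⟩ = |b⟩⊗|c⟩ via SVD.
# Reshaping the flat data column-major gives the coefficient matrix
# M[i, j] = b[i] * c[j], since the first subsystem varies fastest.
b = normalize(ComplexF64[1, 2im]); c = normalize(ComplexF64[3, 4, 5im])
ψ = kron(c, b)                            # QO convention: kron(second, first)
M = reshape(ψ, (length(b), length(c)))
U, S, V = svd(M)
# A normalized product state has exactly one singular value, equal to 1;
# the matching singular vectors are the factors, each recovered only up
# to a global phase (the two phases cancel in the tensor product).
b_rec = U[:, 1]
c_rec = S[1] * conj(V[:, 1])
```

For a non-factorizable state the same reshape-and-SVD gives the full Schmidt decomposition, with one term per nonzero singular value.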
State-operator tensor products are now supported in QuantumOpticsBase: https://github.com/qojulia/QuantumOpticsBase.jl/pull/61

This is great! I would suggest keeping this issue open though, as the state-operator tensor product is not trivially supported by embed, which would be needed to solve this issue completely.
Imagine I have a vector |b⟩⊗|c⟩ that I know for a fact is factorizable like this. This can arise in a situation in which I have just projected by |b⟩⟨b|⊗Id. How do I now extract |c⟩? How do I drop the dimension corresponding to |b⟩?

One way to do that would be to have a SingularBasis of dimension 1 so I can do project(b, basisstate(SingularBasis(), 0)) and then some dropsingular function to remove subsystems of size 1. Or maybe there is something in embed that already does this.

Any suggestions how to do it or whether it is already implemented?