https://github.com/ericphanson/UnbalancedOptimalTransport.jl/blob/2573b06fd4ec28c0e219ab4585443060450bf1e0/src/optimized_methods.jl#L27

Here we mutate `b` in both arguments. I'm not sure this is valid (actually, as an exercise I tried porting this code to Rust and found the borrow checker complaining - nice!). Testing `OT!(D, b, b)`, which exercises this same kind of line, against Convex.jl, I find that for the KL divergence it seems OK (I get the same results as Convex up to tolerance), but for the other divergences it is not OK and I get different results. I tested this by adding a `same` pathway to the tests against Convex:
```
Test Summary: | Pass Fail Total Time
UnbalancedOptimalTransport.jl | 143 12 155 5.3s
regularized OT! with balancing via KL{1.0}(), ϵ=0.1, same=true | 2 2 0.1s
regularized OT! with balancing via KL{1.0}(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via KL{1.0}(), ϵ=1.0, same=true | 2 2 0.0s
regularized OT! with balancing via KL{1.0}(), ϵ=1.0, same=false | 2 2 0.0s
regularized OT! with balancing via KL{2.0}(), ϵ=0.1, same=true | 2 2 0.1s
regularized OT! with balancing via KL{2.0}(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via KL{2.0}(), ϵ=1.0, same=true | 2 2 0.0s
regularized OT! with balancing via KL{2.0}(), ϵ=1.0, same=false | 2 2 0.0s
regularized OT! with balancing via KL{100.0}(), ϵ=0.1, same=true | 2 2 0.1s
regularized OT! with balancing via KL{100.0}(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via KL{100.0}(), ϵ=1.0, same=true | 2 2 0.0s
regularized OT! with balancing via KL{100.0}(), ϵ=1.0, same=false | 2 2 0.0s
regularized OT! with balancing via TV{4.0}(), ϵ=0.1, same=true | 2 2 0.1s
regularized OT! with balancing via TV{4.0}(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via TV{4.0}(), ϵ=1.0, same=true | 2 2 0.4s
regularized OT! with balancing via TV{4.0}(), ϵ=1.0, same=false | 2 2 0.1s
regularized OT! with balancing via RG{0.5, 1.5}(), ϵ=0.1, same=true | 2 2 0.2s
regularized OT! with balancing via RG{0.5, 1.5}(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via RG{0.5, 1.5}(), ϵ=1.0, same=true | 2 2 0.1s
regularized OT! with balancing via RG{0.5, 1.5}(), ϵ=1.0, same=false | 2 2 0.0s
regularized OT! with balancing via Balanced(), ϵ=0.1, same=true | 2 2 0.1s
regularized OT! with balancing via Balanced(), ϵ=0.1, same=false | 2 2 0.0s
regularized OT! with balancing via Balanced(), ϵ=1.0, same=true | 2 2 0.1s
regularized OT! with balancing via Balanced(), ϵ=1.0, same=false | 2 2 0.0s
```
Here `same=true` means `b=a` (aliasing).
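For concreteness, a minimal sketch of what the two pathways amount to (this is not the actual test code, which also compares each result against Convex.jl; I'm assuming the `DiscreteMeasure(density, set)` constructor and the positional-`ϵ` method `OT!(D, a, b, ϵ)`):

```julia
using UnbalancedOptimalTransport
const UOT = UnbalancedOptimalTransport

D, ϵ = UOT.RG{0.5,1.5}(), 0.1

# `OT!` mutates the dual potentials stored in its arguments, so build a
# fresh measure for every call.
make_measure() = UOT.DiscreteMeasure([0.5, 1.0, 0.5, 0.25], [1.0, 2.0, 3.0, 4.0])

# same = true: the very same object is passed as both arguments (aliasing).
a1 = make_measure()
cost_same = UOT.OT!(D, a1, a1, ϵ)

# same = false: equal values, but two distinct objects.
a2, b2 = make_measure(), make_measure()
cost_diff = UOT.OT!(D, a2, b2, ϵ)

# For KL these agree (and match Convex.jl) up to tolerance; for the other
# divergences they differ.
@show cost_same cost_diff
```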
I think this is possibly related to the fact that the optimal dual potentials are only guaranteed to be the same when `phistar` is strictly convex (Assumption 1 in the paper).
Assumption 1 does not hold for RG (as shown in the paper), and there multiple dual potentials are possible (p. 19 of v1).
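Spelling the argument out (the dual form below is from memory, so the sign conventions may differ from the paper's): when `b = a` and the cost matrix is symmetric, the dual objective is invariant under swapping the two potentials, so any optimal pair `(f, g)` yields another optimal pair `(g, f)`. If Assumption 1 gives uniqueness, this forces `f = g`, which is exactly what the aliased in-place update assumes; without strict convexity of `phistar` (as for RG), optimal pairs with `f ≠ g` can exist.

```latex
% Dual of the \epsilon-regularized unbalanced problem, with one divergence
% D_\varphi on both marginals (matching OT!(D, b, b)); recalled from the
% paper, so treat signs/normalization as approximate:
\mathrm{OT}_\epsilon(\alpha, \beta)
  = \sup_{f,\, g}\;
      -\langle \alpha, \varphi^*(-f) \rangle
      - \langle \beta, \varphi^*(-g) \rangle
      - \epsilon \left\langle \alpha \otimes \beta,\;
            e^{(f \oplus g - C)/\epsilon} - 1 \right\rangle
```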
We can fix this by detecting the aliasing and making copies when necessary. I will put up a PR.
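A sketch of the kind of guard I have in mind (the names `unalias` and `ot_unaliased!` are hypothetical; the actual fix would go inside `OT!` itself):

```julia
using UnbalancedOptimalTransport: OT!

# If both arguments are the very same object, hand the solver a copy of the
# second one, so the in-place potential updates on `a` and `b` can't
# silently write through to each other.
unalias(a, b) = a === b ? deepcopy(b) : b

ot_unaliased!(D, a, b, ϵ) = OT!(D, a, unalias(a, b), ϵ)

# e.g. ot_unaliased!(D, a, a, 0.1) now takes the same code path as same=false.
```

A stricter check could also use `Base.mightalias` on the underlying arrays, since two distinct `DiscreteMeasure`s could still share storage.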