dolfin-adjoint / pyadjoint

The algorithmic differentiation tool pyadjoint and add-ons.
GNU Lesser General Public License v3.0

Wrong recomputation results with delegated checkpoints #144

Closed Ig-dolci closed 3 months ago

Ig-dolci commented 5 months ago

This bug arises when block_variable._checkpoint = None is executed while block_variable._checkpoint is delegated to another checkpoint, as is done in the Cofunction assignment. Setting block_variable._checkpoint to None loses the information about the delegated checkpoint, which leads to wrong results when we recompute the functional. The following script reproduces the problem:

from firedrake import *
from firedrake.adjoint import *
from checkpoint_schedules import Revolve
import numpy as np

# Enable checkpointing on the working tape with a Revolve schedule
# for the 5 timesteps below.
tape = get_working_tape()
tape.enable_checkpointing(Revolve(5, 1))

mesh = UnitSquareMesh(1, 1)
V = FunctionSpace(mesh, "R", 0)
v = TestFunction(V)
c = Constant(1.0)
b = c * v * dx
u1 = Cofunction(V.dual(), name="u1")
sol = Function(V, name="sol")
u = TrialFunction(V)
u0 = assemble(b)
J = 0
for i in tape.timestepper(iter(range(5))):
    # The Cofunction assignment below is where the checkpoint is delegated.
    u1.assign(i * u0)
    solve(u * v * dx == u1, sol)
    J += assemble(sol * sol * dx)

J_hat = ReducedFunctional(J, Control(c))
# Recomputing the functional should reproduce the taped value, but the
# assertion fails because the delegated checkpoint information is lost.
print(J_hat(c), J)
assert np.isclose(J_hat(c), J)
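
For illustration, here is a minimal, self-contained sketch of the failure mode in plain Python. This is not the actual pyadjoint or Firedrake implementation; the class and attribute names are illustrative only. It shows how a block variable whose checkpoint delegates to another block variable loses that link as soon as its _checkpoint attribute is overwritten with None.

class ToyDelegatedCheckpoint:
    # Illustrative delegated checkpoint: it stores no value of its own and
    # only points at another block variable.
    def __init__(self, source_block_variable):
        self.source = source_block_variable

class ToyBlockVariable:
    # Illustrative stand-in for pyadjoint's BlockVariable.
    def __init__(self):
        self._checkpoint = None

    @property
    def checkpoint(self):
        # Follow the delegation, if any, to recover the saved value.
        if isinstance(self._checkpoint, ToyDelegatedCheckpoint):
            return self._checkpoint.source.checkpoint
        return self._checkpoint

u0_bv = ToyBlockVariable()
u0_bv._checkpoint = 1.0                            # u0 stores its own value
u1_bv = ToyBlockVariable()
u1_bv._checkpoint = ToyDelegatedCheckpoint(u0_bv)  # u1 delegates to u0
assert u1_bv.checkpoint == 1.0

# Clearing the checkpoint discards the delegation, so the value can no
# longer be recovered when the tape is recomputed.
u1_bv._checkpoint = None
assert u1_bv.checkpoint is None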

One way to sort out this issue is to add a clear-checkpoint hook to OverloadedType, so that a checkpoint can be cleared without losing the delegated checkpointing information. On the other hand, if we decide that this issue should be solved only in Firedrake, we can move it to the Firedrake issue tracker.
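
A minimal sketch of the OverloadedType alternative, assuming a hypothetical hook name _ad_clear_checkpoint (not an existing pyadjoint method; the other names below are placeholders as well): the tape would call the hook instead of assigning block_variable._checkpoint = None directly, so a type whose checkpoint is a delegation can keep that information while plain stored data is still released.

# Hypothetical sketch only; none of these names exist in pyadjoint.
class DelegatedCheckpointSketch:
    # Placeholder for whatever object a type uses to delegate its
    # checkpoint to another block variable.
    def __init__(self, source_block_variable):
        self.source = source_block_variable

class OverloadedTypeSketch:
    def _ad_clear_checkpoint(self, checkpoint):
        # Default behaviour: discard the stored checkpoint data.
        return None

class DelegatingOverloadedTypeSketch(OverloadedTypeSketch):
    def _ad_clear_checkpoint(self, checkpoint):
        # Keep a delegated checkpoint alive so the value can still be
        # recovered from the block variable it points at; anything else
        # is dropped as before.
        if isinstance(checkpoint, DelegatedCheckpointSketch):
            return checkpoint
        return None

# The tape would then do something like
#     block_variable._checkpoint = output._ad_clear_checkpoint(block_variable._checkpoint)
# instead of setting block_variable._checkpoint = None directly.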

Ig-dolci commented 3 months ago

This is not a pyadjoint issue. It will be sorted out upon merging PR 3669.