This bug arises when executing `block_variable._checkpoint = None`. Specifically, `block_variable._checkpoint` may be delegated to another checkpoint, as happens in `Cofunction` assignment. When `block_variable._checkpoint` is then set to `None`, the information about the delegated checkpoint is lost, which produces wrong results when the functional is recomputed.
Reproducing the error:
```python
from firedrake import *
from firedrake.adjoint import *
from checkpoint_schedules import Revolve
import numpy as np

tape = get_working_tape()
tape.enable_checkpointing(Revolve(5, 1))
mesh = UnitSquareMesh(1, 1)
V = FunctionSpace(mesh, "R", 0)
v = TestFunction(V)
c = Constant(1.0)
b = c * v * dx
u1 = Cofunction(V.dual(), name="u1")
sol = Function(V, name="sol")
u = TrialFunction(V)
u0 = assemble(b)
J = 0
for i in tape.timestepper(iter(range(5))):
    u1.assign(i * u0)
    solve(u * v * dx == u1, sol)
    J += assemble(sol * sol * dx)
J_hat = ReducedFunctional(J, Control(c))
print(J_hat(c), J)
assert np.isclose(J_hat(c), J)
```
A potential solution path:
To resolve this issue, we could add a clear-checkpoint method to `OverloadedType`, which would clear the checkpoint without losing the delegated checkpointing information. Alternatively, if we decide that this issue should be solved only in Firedrake, we can move it to the Firedrake issue tracker.
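To make the idea concrete, here is a minimal sketch with toy classes (not pyadjoint's real API; `DelegatedCheckpoint` and `clear_checkpoint` are hypothetical names): instead of blocks setting `block_variable._checkpoint = None` directly, clearing goes through a hook that knows about delegation, so a checkpoint that merely points at another block variable's checkpoint keeps that link.

```python
class BlockVariable:
    """Toy stand-in for pyadjoint's BlockVariable."""
    def __init__(self, checkpoint=None):
        self._checkpoint = checkpoint


class DelegatedCheckpoint:
    """Toy checkpoint that delegates to another block variable's
    checkpoint, mimicking what Cofunction assignment sets up."""
    def __init__(self, delegate):
        self.delegate = delegate  # the BlockVariable we delegate to

    @property
    def value(self):
        return self.delegate._checkpoint


def clear_checkpoint(block_variable):
    """Hypothetical OverloadedType-style clear hook: drop stored data
    but preserve delegation links instead of blindly assigning None."""
    cp = block_variable._checkpoint
    if isinstance(cp, DelegatedCheckpoint):
        # Keep the delegation link; clear the delegate's data instead,
        # so the checkpoint can be restored through the delegate later.
        clear_checkpoint(cp.delegate)
    else:
        block_variable._checkpoint = None


# Demonstration: naive clearing loses the delegation, the hook keeps it.
source = BlockVariable(checkpoint=42.0)
naive = BlockVariable(checkpoint=DelegatedCheckpoint(source))
dependent = BlockVariable(checkpoint=DelegatedCheckpoint(source))

naive._checkpoint = None       # current behaviour: delegation lost
clear_checkpoint(dependent)    # sketched behaviour: link survives

print(naive._checkpoint)                                       # None
print(isinstance(dependent._checkpoint, DelegatedCheckpoint))  # True
```

This is only a sketch of the mechanism; in pyadjoint the hook would presumably live on `OverloadedType` so each overloaded class can decide how its checkpoint is cleared.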