rlouf opened this issue 2 years ago
Is your `etuplize_random` working as expected? I don't see a `MakeRVNodeOp` in the printed string output.
Yes, if you print `norm_et` defined above you get:

```python
norm_et
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F65BBBE9A80>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```

i.e. `etuplize(at.random.normal)` does wrap the `RandomVariable` `Op` with `MakeRVNodeOp`. The problem is that `MakeRVNodeOp` does not appear in `Y_et`.
My understanding is that while we dispatch `etuplize`, it is the inner function `etuplize_step` that is applied recursively (see here), and that inner function is not dispatched. Is that expected? The only workaround is to dispatch `_car` the way you did above. Still, it would be nice if the code above with `etuplize_random` just worked.
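The pitfall can be reproduced with plain `functools.singledispatch` (a toy analogy, not etuples' actual internals — `to_tree` and `_to_tree_step` are hypothetical names): a generic handler that recurses through an undispatched helper never gives registered overrides a chance on nested elements.

```python
from functools import singledispatch


@singledispatch
def to_tree(obj):
    # Generic case: recurse through the *inner* helper.
    return _to_tree_step(obj)


def _to_tree_step(obj):
    if isinstance(obj, (list, tuple)):
        # Recurses into itself directly, bypassing dispatch entirely.
        return tuple(_to_tree_step(o) for o in obj)
    return obj


@to_tree.register(int)
def _(obj):
    # Per-type override, analogous to registering etuplize for RandomVariable.
    return ("int", obj)


print(to_tree(3))    # top-level dispatch hits the override: ('int', 3)
print(to_tree([3]))  # nested int never hits the override: (3,)
```

This is the same shape of surprise: registering an override changes top-level behavior but not what happens inside the recursion.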
The following works:

```python
from dataclasses import dataclass

from aesara.graph import Op, Variable
import aesara.tensor as at
from aesara.tensor.random.basic import RandomVariable
from cons.core import ConsError, _car, _cdr
from etuples import etuplize, etuple


@dataclass
class MakeRVNodeOp:
    op: Op

    def __call__(self, *args):
        return self.op.make_node(*args)


@etuplize.register(RandomVariable)
def etuplize_random(*args, **kwargs):
    return etuple(MakeRVNodeOp, etuplize.funcs[(object,)](*args, **kwargs))


def car_RVariable(x):
    if x.owner:
        return MakeRVNodeOp(x.owner.op)
    else:
        raise ConsError("Not a cons pair.")


_car.add((Variable,), car_RVariable)


def car_MakeNodeOp(x):
    return type(x)


_car.add((MakeRVNodeOp,), car_MakeNodeOp)


def cdr_MakeNodeOp(x):
    x_e = etuple(_car(x), x.op, evaled_obj=x)
    return x_e[1:]


_cdr.add((MakeRVNodeOp,), cdr_MakeNodeOp)


srng = at.random.RandomStream(0)
Y_rv = srng.normal(1, 1)

Y_et = etuplize(Y_rv)
print(Y_et)
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F07F31FD2A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})

norm_et = etuple(etuplize(at.random.normal), *Y_et[1:])
print(norm_et)
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F07F31FD2A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```
This does feel a bit hacky, though; ideally dispatching `etuplize` would be enough to get the expected behavior. The current behavior is genuinely confusing unless you know the internals of `etuplize`.
Finally, let's make sure this solves the motivating issue (https://github.com/aesara-devs/aemcmc/pull/29) of running `kanren` goals both forward and backward:
```python
from kanren import eq, lall, run, var


def normaleo(in_expr, out_expr):
    mu_lv, std_lv = var(), var()
    rng_lv, size_lv, dtype_lv = var(), var(), var()
    norm_in_lv = etuple(
        etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv
    )
    norm_out_lv = etuple(
        etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv
    )
    return lall(eq(in_expr, norm_in_lv), eq(out_expr, norm_out_lv))


srng = at.random.RandomStream(0)
Y_rv = srng.normal(1, 1)
Y_et = etuplize(Y_rv)

y_lv = var()
(y_expr,) = run(1, y_lv, normaleo(Y_et, y_lv))
y_expr.evaled_obj
# normal_rv{0, (0, 0), floatX, False}(RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7FC8E9B784A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})

y_lv_rev = var()
(y_expr_rev,) = run(1, y_lv_rev, normaleo(y_lv_rev, Y_et))
y_expr_rev.evaled_obj
# normal_rv{0, (0, 0), floatX, False}(RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7FC8E9B784A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```
`eval_if_etuple` does not work when naively evaluating etuplized random variables. Consider a simple model: the original object is stored in the `ExpressionTuple` instance, so evaluation that returns the saved object works. However, if we force the `ExpressionTuple` to re-evaluate instead of returning the saved object, evaluation fails. The issue was discussed in https://github.com/aesara-devs/aemcmc/pull/29 and prevents us from writing miniKanren goals that represent two-way relations between expressions that contain random variables.
The reason, explained in this comment, is that the `evaled_obj` method performs `op.__call__` when evaluating. However, `__call__` does not systematically dispatch to `make_node` for `RandomVariable`s, hence the error. Instead of dispatching `op.__call__` to `op.make_node` for every random variable (which would be restrictive, according to @brandonwillard), we can wrap `RandomVariable` `Op`s with a `MakeNodeOp` `Op` that dispatches `__call__` to `make_node`.

The downside of this approach is that defining `kanren` goals becomes more cumbersome, as we need to wrap random variables with `MakeNodeOp` explicitly. I suggest instead dispatching `etuplize` for `RandomVariable`, which handles the etuplization of both random variables and `RandomVariable` `Op`s. With that, the `kanren` goal above works both ways. Note, however, that some cases will still fail, so we will need a mix of the two approaches. I will open a PR soon.
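As a self-contained illustration of the wrapper idea (toy classes with hypothetical names, not aesara's actual API): delegating `__call__` to `make_node` makes evaluation succeed for an `Op` whose own `__call__` does not build a node.

```python
from dataclasses import dataclass


class ToyRandomVariable:
    """Toy Op: make_node builds the graph node; __call__ deliberately does not."""

    def make_node(self, *args):
        return ("normal_rv_node", args)

    def __call__(self, *args):
        raise TypeError("__call__ is not make_node for this Op")


@dataclass
class MakeNodeOp:
    """Wrapper whose __call__ delegates to the wrapped Op's make_node."""

    op: object

    def __call__(self, *args):
        return self.op.make_node(*args)


wrapped = MakeNodeOp(ToyRandomVariable())
print(wrapped(1, 1))  # ('normal_rv_node', (1, 1)): evaluation now succeeds
```

This is the same delegation pattern as the `MakeRVNodeOp` dataclass in the snippet earlier in the thread.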