diprism / fggs

Factor Graph Grammars in Python
MIT License

Suspicious unification warning #178

Closed: ccshan closed this issue 10 months ago

ccshan commented 10 months ago
```
$ cat perpl/tests/good/discard_prods.ppl
define id = \a. a;

-- affine-to-linear should give:
-- define f = \y. let p = (<id, ()>, <id, ()>) in let (_x0, _x1) = p in let <_, z> = _x1 in let <_, z> = _x0 in y
define f = \y. let p = (id, id) in y;
-- and this should not let _x0 be captured
define g = \_x0. let p = (id, id) in _x0;

-- affine-to-linear should give:
-- define f = \y. let p = <<id, ()>, ()> in let <_, x> = p in y
define h = \y. let p = <id> in y;
-- and this should not let x be captured
define i = \x. let p = <id> in x;

(f (), g (), h True, i True)

-- correct: [0,0,0,1]
```
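
The two "should not let … be captured" cases check that helper variables introduced by the affine-to-linear pass (such as `_x0`) are freshened against names the user already wrote. As a language-agnostic illustration of that requirement only, here is a minimal Python sketch; the `fresh` helper is hypothetical and is not PERPL's actual freshening scheme:

```python
import itertools

def fresh(base: str, used: set) -> str:
    """Return `base` if it is unused, otherwise append a numeric suffix.
    Purely illustrative; the real transformation may freshen differently."""
    if base not in used:
        return base
    for i in itertools.count():
        candidate = f"{base}{i}"
        if candidate not in used:
            return candidate

# In `define g = \_x0. ...` the user already owns the name _x0,
# so the transformation must pick a different name for its helper.
print(fresh("_x0", {"_x0", "p"}))  # e.g. _x00, avoiding the user's _x0
print(fresh("_x1", {"_x0", "p"}))  # _x1 is still free, so it is kept
```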

```
$ perpl/perplc perpl/tests/good/discard_prods.ppl | fggs/bin/sum_product.py -d -G /dev/stdin
[0.0, 0.0, 0.0, 1.0]
/home/ccshan/u/rational/fmitf/fggs/fggs/indices.py:256: UserWarning: Attempt to unify () and (0 + A(0) + 1) indicates index type mismatch
/home/ccshan/u/rational/fmitf/fggs/bin/sum_product.py:120: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:480.)
```
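
The second warning is PyTorch's standard complaint about reading `.grad` from a tensor that is not a leaf of the autograd graph; it is separate from the unification message above. A minimal, self-contained sketch of that behavior and the two usual remedies (this is plain PyTorch, not fggs code):

```python
import torch

w = torch.tensor([2.0, 3.0], requires_grad=True)  # leaf tensor
y = w * w                                         # non-leaf: result of an op
y.sum().backward()

print(w.grad)   # populated: gradients land on leaf tensors
print(y.grad)   # None, and emits the UserWarning quoted above

# Remedy 1: ask autograd to keep the gradient on the non-leaf tensor.
y2 = w * w
y2.retain_grad()
y2.sum().backward()
print(y2.grad)  # now populated

# Remedy 2: only read gradients from the leaf tensors that actually
# hold the weights, and never from tensors derived from them.
```
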
chihyang commented 10 months ago

I pushed a new commit, 92e550bcb07cc0d297d81fe8b43de13b36781540, that uses v.weights directly when printing gradients.
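
If `v.weights` is the leaf tensor that carries the factor weights, reading gradients from it matches the second remedy above and should silence the non-leaf `.grad` warning. One way to confirm that neither warning resurfaces is to rerun the reported pipeline with Python warnings promoted to errors; the sketch below reuses the paths and flags from the report, relies only on the standard `PYTHONWARNINGS` mechanism and `subprocess`, and is an illustration rather than part of the fggs test suite:

```python
import os
import subprocess

# Promote every UserWarning in the child Python process to an error.
env = {**os.environ, "PYTHONWARNINGS": "error::UserWarning"}

# Compile the PERPL test case (path taken from the report above).
compiled = subprocess.run(
    ["perpl/perplc", "perpl/tests/good/discard_prods.ppl"],
    check=True, capture_output=True,
)

# Any UserWarning, including the suspicious-unification one, now aborts the run
# and raises CalledProcessError because of check=True.
result = subprocess.run(
    ["fggs/bin/sum_product.py", "-d", "-G", "/dev/stdin"],
    input=compiled.stdout, env=env, check=True, capture_output=True,
)
print(result.stdout.decode())  # expected: [0.0, 0.0, 0.0, 1.0]
```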