This PR switches construction to use the `make_from_op` classmethod defined in `Value`.
The classmethod is a better implementation than the staticmethod because it takes care of `requires_grad`: when a computation involves only tensors that do not require gradients, the result can be safely detached in eager mode instead of tracing the graph.
Demo:
In [1]: import sys
In [2]: sys.path.append('./python/')
In [3]: import needle as ndl
In [4]: a = ndl.Tensor([1.0], requires_grad=False)
In [5]: b = ndl.Tensor([1.0], requires_grad=False)
In [6]: c = a * b
In [7]: c.is_leaf()
Out[7]: True
In [8]: c.inputs
Out[8]: []
In [9]: c
Out[9]: needle.Tensor([1.])
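For reference, the behavior shown in the transcript can be sketched in a self-contained way. Everything below (the `Op`/`Value`/`Tensor` class layout, `_init`, `detach`) is an assumption modeled on the PR description and the demo output, not needle's actual source:

```python
# Hypothetical sketch of the classmethod pattern described above.
# Names and structure are assumptions, not needle's real implementation.
from typing import List, Optional


class Op:
    """A differentiable operation."""
    def compute(self, *args):
        raise NotImplementedError


class MulOp(Op):
    def compute(self, a, b):
        return [x * y for x, y in zip(a, b)]


class Value:
    op: Optional[Op]
    inputs: List["Value"]

    @classmethod
    def make_from_op(cls, op: Op, inputs: List["Value"]) -> "Value":
        # As a classmethod, this constructs the concrete subclass (cls,
        # e.g. Tensor) and can inspect requires_grad before deciding
        # whether to keep the graph edges.
        value = cls.__new__(cls)
        value._init(op, inputs)
        if not value.requires_grad:
            # Eager mode: no input needs gradients, so drop the graph
            # and return a detached leaf, as in the demo above.
            return value.detach()
        return value

    def _init(self, op: Op, inputs: List["Value"]) -> None:
        self.op = op
        self.inputs = inputs
        # The result requires gradients iff any input does.
        self.requires_grad = any(x.requires_grad for x in inputs)
        self.cached_data = op.compute(*[x.cached_data for x in inputs])

    def is_leaf(self) -> bool:
        return self.op is None

    def detach(self) -> "Value":
        out = type(self).__new__(type(self))
        out.op = None
        out.inputs = []
        out.requires_grad = False
        out.cached_data = self.cached_data
        return out


class Tensor(Value):
    def __init__(self, data, requires_grad=True):
        self.op = None
        self.inputs = []
        self.requires_grad = requires_grad
        self.cached_data = list(data)

    def __mul__(self, other: "Tensor") -> "Tensor":
        return Tensor.make_from_op(MulOp(), [self, other])


a = Tensor([1.0], requires_grad=False)
b = Tensor([1.0], requires_grad=False)
c = a * b
print(c.is_leaf(), c.inputs)  # True [] -- detached leaf, matching the demo
```

A staticmethod could not do this as cleanly: it would have to hard-code which class to instantiate, whereas the classmethod receives `cls` and works for any `Value` subclass.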