Open: brandonwillard opened this issue 3 years ago
@brandonwillard I assume this issue is still open, but now involves the following two files? The original link is broken. If so, I can submit a PR for this.
```
$ rg broadcast_like
tensor/math_opt.py
45: broadcast_like,
1810: new_out = broadcast_like(new_out, out, fgraph)
1835: return [broadcast_like(1, node.outputs[0], fgraph)]
1837: return [broadcast_like(node.inputs[0], node.outputs[0], fgraph)]
1881: ret = broadcast_like(0, node.outputs[0], fgraph)
2033: return [broadcast_like(0, node.outputs[0], fgraph)]
2058: return [broadcast_like(rval, node.outputs[0], fgraph)]
2063: return [broadcast_like(-1, node.outputs[0], fgraph)]
2065: return [broadcast_like(1, node.outputs[0], fgraph)]
tensor/basic_opt.py
170:def broadcast_like(value, template, fgraph, dtype=None):
182: "broadcast_like currently requires the "
1753: o = broadcast_like(v, r, fgraph, dtype=v.dtype)
```
Yes, `theano.tensor.opt` was split into `theano.tensor.math_opt` and `theano.tensor.basic_opt`, so those results were previously together in `theano.tensor.opt`.
The `broadcast_like` function can be replaced with the new `broadcast_to` function in most, if not all, cases. `broadcast_to` uses view-based broadcasting instead of allocating filled arrays the way `broadcast_like` does, so this change could yield performance improvements in many common situations (more specifically, whenever the basic algebraic optimizations that use `broadcast_like` are applied).
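For illustration, a minimal sketch of the replacement could look like the following. It assumes `broadcast_to` follows a NumPy-style `broadcast_to(x, shape)` signature and is importable from `theano.tensor.extra_ops`; the helper name `broadcast_like_compat` is hypothetical, and the `fgraph` argument is dropped on the assumption that view-based broadcasting does not need it:

```python
# Minimal sketch of replacing `broadcast_like(value, template, fgraph)` with
# `broadcast_to`.  Assumptions: `broadcast_to` takes a NumPy-style
# `broadcast_to(x, shape)` signature and lives in `theano.tensor.extra_ops`
# (adjust the import to wherever it actually resides); `broadcast_like_compat`
# is an illustrative name, not an existing helper.
import theano.tensor as tt
from theano.tensor.extra_ops import broadcast_to


def broadcast_like_compat(value, template, dtype=None):
    """Broadcast `value` to the (symbolic) shape of `template` as a view,
    instead of allocating a filled array the way `broadcast_like` does."""
    value = tt.as_tensor_variable(value)
    if dtype is not None and value.dtype != dtype:
        value = value.astype(dtype)
    # Build the target shape entry-by-entry so symbolic dimensions work.
    out_shape = [template.shape[i] for i in range(template.ndim)]
    return broadcast_to(value, out_shape)
```

Since the result would be a broadcasted view of `value` rather than a freshly allocated array, call sites like `broadcast_like(1, node.outputs[0], fgraph)` above would no longer materialize constant-filled tensors, which is where the potential performance gain comes from.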