shivadbhavsar closed this 1 month ago
I think we actually have gpu fill in migraphx after Paul's PR here: https://github.com/ROCm/AMDMIGraphX/pull/2908
So it looks like hip::fill takes the value as an attribute. Since the fill value here can be a tensor (i.e. the output of some other non-constant node), it wouldn't work for that case, I think.
It also seems to take a buffer allocation instruction as input. I'm not sure we want to add that sort of low-level instruction in a front-end parser. It also means the parsed program can't be compiled with the ref target.
Is there a functional difference between filling an array with a where op under the hood vs. creating a literal with a single value and then using multibroadcast and then contiguous to allocate and copy it, as we do in various other places in torch_migraphx?
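For a constant fill value the two constructions produce the same result; a NumPy sketch of the semantics (this mirrors the behavior only, not the MIGraphX API):

```python
import numpy as np

shape = (2, 3)
value = 4.2

# Approach 1: single-value literal, broadcast to the target shape,
# then made contiguous to actually allocate and copy the data.
lit = np.array(value)
filled_broadcast = np.ascontiguousarray(np.broadcast_to(lit, shape))

# Approach 2: where op with an all-true condition selecting the fill value.
cond = np.ones(shape, dtype=bool)
filled_where = np.where(cond, value, np.zeros(shape))

assert np.array_equal(filled_broadcast, filled_where)
```

The difference is not in the result but in when the value must be known, as discussed below.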
I am assuming you're talking about the add_literal call, right?
The difference is that the input value to the where op can be non-constant (i.e. the single value is not known at compile time), so it cannot be added as a literal.
When the fill op does have a constant value, migraphx will just const-fold the inserted where op anyway, so we can insert it this way without worrying about whether the value is constant or not.
Using the masked_fill migraphx op for now. This can be updated to use a new fill op if migraphx supports one in the future.
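The reduction being used here can be sketched in NumPy: masked_fill with an all-true mask is a plain fill, and the value can be a runtime quantity rather than a compile-time literal. (This is a semantic sketch, not the MIGraphX op signature.)

```python
import numpy as np

def masked_fill(tensor, mask, value):
    # Where mask is True, take `value`; elsewhere keep the original element.
    return np.where(mask, value, tensor)

x = np.zeros((4,))
# `runtime_value` stands in for the output of some other non-constant op,
# which could not have been baked into the program as a literal.
runtime_value = float(np.tanh(1.0))
filled = masked_fill(x, np.ones_like(x, dtype=bool), runtime_value)
```

With a constant value instead, a const-folding pass can collapse the whole expression to a literal, so both the constant and non-constant cases go through the same insertion path.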