Ajk4 opened this issue 4 years ago
IIRC these ops were implemented by @xuhdev , please feel free to comment whether you think it makes sense or not. Thanks!
It makes perfect sense to me! In fact, it would be great if there were a way to automatically turn any associative operator into a dimension-reduction operator.
Hi @xuhdev
Do you perhaps have those op implementations available somewhere? I would be very grateful if I could use them in my code!
Also, is there any chance of them being merged into PyTorch in the future?
Cheers!
@Ajk4 I'm not aware of any such implementation. I was simply suggesting that this might be a favorable structural change in the future :) This issue looks perfectly reasonable to me.
It would be nice to have these operators. Is it being prioritized?
How about looking at NumPy's approach? E.g., to reduce with multiplication, one can write:

```python
np.multiply.reduce([2, 3, 5])
```
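To illustrate the suggestion: every binary, associative NumPy ufunc exposes `.reduce`, and the bitwise ufuncs already provide exactly the reductions requested in this issue (including along a chosen axis):

```python
import numpy as np

# Any associative binary ufunc exposes .reduce:
np.multiply.reduce([2, 3, 5])        # product -> 30

# The bitwise ufuncs give the reductions this issue asks for:
np.bitwise_or.reduce([1, 2, 4])      # -> 7
np.bitwise_and.reduce([12, 10, 14])  # -> 8
np.bitwise_xor.reduce([1, 3, 5])     # -> 7

# Reduction along a specific axis, analogous to dim= in PyTorch:
np.bitwise_or.reduce(np.array([[1, 2], [4, 8]]), axis=0)  # -> array([ 5, 10])
```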
Also looking for an efficient way to do this
🚀 Feature
Functions for reducing a tensor along a specified dim with bitwise operations.
Motivation
In my project I need to reduce tensors along some dimensions with bitwise operations.
Pitch
In PyTorch I can reduce a tensor along a dim in multiple ways (e.g. t.min(dim=0), t.sum(dim=0), t.any(dim=0), t.all(dim=0)). Unfortunately it's not yet possible to reduce a dimension with a bitwise operation like bitwise_or, bitwise_xor, or bitwise_and.
Possible method headers could look like this:
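The original headers are not shown here; the following signatures are my assumption, modeled on the existing reduction methods such as `Tensor.any(dim, keepdim=False)`:

```python
# Hypothetical method headers (names and parameters are assumptions,
# mirroring existing reductions like Tensor.any(dim, keepdim=False)):
def bitwise_or(input, dim, keepdim=False): ...   # OR-reduction along dim
def bitwise_and(input, dim, keepdim=False): ...  # AND-reduction along dim
def bitwise_xor(input, dim, keepdim=False): ...  # XOR-reduction along dim
</imports>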
Currently BoolTensor has two special methods,

any(dim)
all(dim)

that implement logical or/and reductions. bitwise_or/bitwise_and reductions could be a generalization of those two to other tensor types (similarly to how the & operator is a bitwise operation for non-bool tensors and a logical one for BoolTensor).

Possibly loosely connected to https://github.com/pytorch/pytorch/pull/26824 - however, that seems to be only a PyTorch distributed reduction method, not a tensor API one.
Alternatives
I implemented the bitwise reducing operations in Python using built-in PyTorch functions with a loop along dimension dim. I imagine that implementing them directly in C++/CUDA could yield a performance boost.
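A minimal sketch of such a Python-level workaround (not the author's actual code): fold the existing elementwise torch.bitwise_or over the slices produced by torch.unbind along the chosen dimension.

```python
import functools
import torch

def reduce_bitwise_or(t, dim):
    # OR-reduce tensor t along `dim` by folding the elementwise
    # torch.bitwise_or over the slices from torch.unbind(t, dim).
    # The same pattern works for torch.bitwise_and / torch.bitwise_xor.
    return functools.reduce(torch.bitwise_or, torch.unbind(t, dim))

t = torch.tensor([[1, 2], [4, 8]])
reduce_bitwise_or(t, 0)  # tensor([ 5, 10])
reduce_bitwise_or(t, 1)  # tensor([ 3, 12])
```

This runs one elementwise kernel per slice along dim, which is why a fused C++/CUDA reduction kernel would likely be faster.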