Open · chewxy opened this issue 7 years ago
From @kabaka0 on June 28, 2017 21:08
I can see the logic behind the split of the tensor package - it is currently quite large, which makes it harder to locate methods. As for the split of Dense, the one problem I can see with having MaskedDense is that quite a bit of code would be duplicated. I suppose that we could embed Dense in MaskedDense to minimize this. It would not have worked with my initial attempt at Masked (in which the mask was allowed to have an iterator with a different stride from the data), but given the decision to drop that, it can be done.
Modifying genlib to support such a change would not need much code editing though, given that there already are checks for whether the object is masked or not.
Can you see any other downside to the embedding?
I was thinking precisely embedding:

```go
type MA struct {
	*Dense
	mask []bool
	...
}
```
(I'll do the heavy lifting, given that I'm working on the sparse and colmajor Tensors); you just provide the struct you have in mind, @kabaka0.
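For context, a minimal sketch of why the embedding keeps duplication down, using toy stand-ins for the package's types (the `Dense` here is not the real one): every method defined on `*Dense` is promoted to `MA` automatically, so only mask-aware behaviour needs to be written on `MA` itself.

```go
package main

import "fmt"

// Dense is a toy stand-in for the package's *Dense.
type Dense struct {
	data  []float64
	shape []int
}

func (d *Dense) Shape() []int { return d.shape }
func (d *Dense) Len() int     { return len(d.data) }

// MA embeds *Dense, so Dense's methods are promoted to MA for free;
// only mask-aware methods have to be defined (or overridden) on MA.
type MA struct {
	*Dense
	mask []bool
}

// MaskedCount is an example of a method that only MA needs.
func (m *MA) MaskedCount() int {
	n := 0
	for _, masked := range m.mask {
		if masked {
			n++
		}
	}
	return n
}

func main() {
	m := &MA{
		Dense: &Dense{data: []float64{1, 2, 3, 4}, shape: []int{2, 2}},
		mask:  []bool{false, true, false, false},
	}
	fmt.Println(m.Shape())       // promoted from *Dense: [2 2]
	fmt.Println(m.Len())         // promoted from *Dense: 4
	fmt.Println(m.MaskedCount()) // defined on MA: 1
}
```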
From @kabaka0 on June 28, 2017 21:15
That works.
```go
type MA struct {
	*Dense
	mask      []bool
	fillValue interface{}
	...
}
```
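As an aside, a hedged sketch of how `fillValue interface{}` might be consumed; the `Filled` helper below is hypothetical (loosely mirroring `numpy.ma`'s `filled()`), not an existing method of the package, and the types are toy stand-ins:

```go
package main

import "fmt"

// MA is a toy stand-in; the real one would embed *Dense rather than hold data directly.
type MA struct {
	data      []float64
	mask      []bool
	fillValue interface{}
}

// Filled returns a copy of the data with masked positions replaced by
// fillValue. Hypothetical helper, shown only to illustrate the field.
func (m *MA) Filled() []float64 {
	fill, ok := m.fillValue.(float64)
	if !ok {
		fill = 0 // fall back to the zero value if fillValue is unset or mistyped
	}
	out := make([]float64, len(m.data))
	for i, v := range m.data {
		if m.mask[i] {
			out[i] = fill
		} else {
			out[i] = v
		}
	}
	return out
}

func main() {
	m := &MA{
		data:      []float64{1, 2, 3},
		mask:      []bool{false, true, false},
		fillValue: -999.0,
	}
	fmt.Println(m.Filled()) // [1 -999 3]
}
```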
From @kabaka0 on June 28, 2017 21:16
I am happy to help with it, but if you are actively editing the package then I can leave it to you for now; feel free to ask me to do stuff whenever needed.
From @chewxy on June 27, 2017 2:00
@kabaka0 thoughts? The idea is that, as per #128, the `tensor` package holds the data structures and the essential methods (basically implementing `Tensor`), while things like `Add` will get moved to `tensor/ops`.
Tasks:
- `internal/stdeng`: can be accomplished using `FlatMaskedIterator` (see the sketch below)
- `*Dense` and `*MaskedDense`
Copied from original issue: chewxy/gorgonia#131
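To illustrate the `FlatMaskedIterator` task above, here is a toy sketch of the idea (walk the flat index space of the backing data and skip masked positions), assuming that is roughly what the iterator does; it is not the package's actual `FlatMaskedIterator` API:

```go
package main

import "fmt"

// flatMaskedIter is a toy illustration only: it walks flat (1-D) indices
// and skips any index whose mask entry is true.
type flatMaskedIter struct {
	mask []bool
	i    int
}

// Next returns the next unmasked flat index, or -1 when iteration is done.
func (it *flatMaskedIter) Next() int {
	for it.i < len(it.mask) {
		idx := it.i
		it.i++
		if !it.mask[idx] {
			return idx
		}
	}
	return -1
}

func main() {
	data := []float64{10, 20, 30, 40, 50}
	it := &flatMaskedIter{mask: []bool{false, true, false, true, false}}
	for idx := it.Next(); idx >= 0; idx = it.Next() {
		fmt.Println(idx, data[idx]) // visits indices 0, 2 and 4 only
	}
}
```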