jackmcrider opened 11 months ago
I just found the contributing guide and converted this to draft for now since I broke every single guideline.
@p16i I clicked somewhere and triggered a review request. Please ignore.
I merged everything into one commit, extended the documentation, and made sure that all checks pass. It could be reviewed, but there are no functional changes.
Checks pass 👍
It's quite challenging to get reproducible tox results for the tutorials (e.g. I had to manually fiddle with `metadata.kernelspec.name` in the raw `.ipynb` before committing). This is probably a limitation of tox, but it could be documented for future contributors. I'm not sure where, maybe in Contributing#continuous-integration.
I'm gonna freeze this branch for now, unless something comes up.
I have committed a new version with roughly these changes:

- removed `LogMeanExpPool`
- added `MinPool1d` and `MinPool2d` (simple inheritance from the PyTorch `MaxPool*` classes)
- added `MinTakesMost1d`, `MinTakesMost2d`, `MaxTakesMost1d`, `MaxTakesMost2d`
- changed `KMeansCanonizer` to use `MinPool1d` instead of `LogMeanExpPool`, with `MinTakesMost1d` at the output layer
- renamed `Distance` to `PairwiseCentroidDistance`
- do `self.parent_module.add_module` instead of `setattr(self.parent_module, ...)`

I'm not sure if we want four rules for the `*TakesMost*` variants, or if one rule with `mode='max'`/`mode='min'` and some autodetection for the 1d/2d case is better.
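A min-pool can be obtained from a max-pool via the identity min(x) = -max(-x), which is presumably what makes simple inheritance from the `MaxPool*` classes work. A dependency-free sketch of the idea (the naive pooling helpers here are my own illustration, not the PR's `MinPool1d`):

```python
def max_pool1d(xs, kernel_size):
    """Naive non-overlapping 1d max-pool over a list of floats."""
    return [max(xs[i:i + kernel_size]) for i in range(0, len(xs), kernel_size)]


def min_pool1d(xs, kernel_size):
    """Min-pool expressed through max-pool: min(x) = -max(-x)."""
    return [-v for v in max_pool1d([-x for x in xs], kernel_size)]


print(min_pool1d([3.0, 1.0, 4.0, 1.0, 5.0, 9.0], 2))  # [1.0, 1.0, 5.0]
```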
Hi chr5tphr!
I started an attempt to implement (deep) neuralized k-means (https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9817459) as more people want to use it and ask for code.
I took the `SoftplusCanonizer` from the docs as a starting point.
Main changes:
Some things can be optimized:

- the pairwise difference computation `(out[:,None,:] - out[None,:,:])[mask].reshape(K, K-1, D)`, cf. lines 379-384 in canonizers.py
- `Sequential(Distance(centroids))` works as a trick, but is not ideal
- `with Gradient(...) as attributor`; this could be a bottleneck if the number of clusters is large

Closes #198
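For reference, the broadcast expression `(out[:,None,:] - out[None,:,:])[mask].reshape(K, K-1, D)` computes, for each cluster j, the differences out[j] - out[k] for all k != j (the mask drops the diagonal). A plain-Python rendering with made-up toy values for `K`, `D`, and `out`:

```python
# Toy stand-ins: K clusters, D output dimensions.
K, D = 3, 2
out = [[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]]

# Plain-Python equivalent of
#   (out[:, None, :] - out[None, :, :])[mask].reshape(K, K - 1, D)
# where mask removes the j == k diagonal.
diffs = [
    [[out[j][d] - out[k][d] for d in range(D)] for k in range(K) if k != j]
    for j in range(K)
]
print(diffs[0])  # cluster 0 against clusters 1 and 2: [[1.0, -2.0], [-2.0, -1.0]]
```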