cheon-research closed this issue 4 years ago
@cheon-research My pretraining strategy is exactly what is implemented here: https://github.com/XifengGuo/DEC-keras/blob/d40914fd034f1708483902554f395b9c87ee1304/DEC.py#L141 I didn't find that dropout helped in my experiments, so I removed it. If you find it useful for your own dataset, you can definitely add it back. Other normalization layers, such as batchnorm, can also be applied.
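In case it helps to see what adding dropout back would mean conceptually, here is a minimal plain-NumPy sketch of inverted dropout (this is not code from this repository; in the actual Keras autoencoder you would simply insert `Dropout(rate)` layers between the `Dense` layers):

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of activations and
    rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate  # keep each unit with prob 1 - rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
h = np.ones((4, 8))  # a batch of hidden-layer activations
out = dropout(h, rate=0.2, rng=rng)          # training: some units zeroed
same = dropout(h, 0.2, rng, training=False)  # inference: identity
```

At inference time dropout is a no-op, which is why removing it changes only pretraining, not the encoder used for clustering.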
I checked the closed issue about accuracy, where you replied that the pretraining strategy is different.
Can you explain the difference between this implementation and the paper's? (I checked the authors' pretraining method in the paper, but I can't find your strategy in this repository.)
Also, the DEC paper used dropout, so why didn't you use a dropout layer?