Open its-sandy opened 3 years ago
@its-sandy Dear Mr. its-sandy,
May I ask why you are working on a PyTorch version, and whether you were able to reproduce the results from the paper? I have tried running the code for a long time, and it always ends with `Killed`.
Also, did you find the weight and savedweight files? I can't find them, and I really need to run the code.
Hi there! I am working on a PyTorch implementation of SLIDE, and I'm currently trying to compare its performance against the original SLIDE code. I have run into a few doubts/issues while evaluating SLIDE and would appreciate clarification on the following.
https://github.com/keroro824/HashingDeepLearning/blob/3cebe6f99a5454bef6f241dea804e07e0d075484/SLIDE/LSH.cpp#L82

The hashes are combined as `index += h << ((_K - 1 - j) * (int)floor(log(binsize)));`. But if the hashes are simply meant to be concatenated, shouldn't it instead be `index += h << ((_K - 1 - j) * (int)ceil(log2(binsize)));`? (Note that `log` here is the natural logarithm, so for `binsize = 8` the shift is `floor(log(8)) = 2` bits rather than `ceil(log2(8)) = 3` bits.) However, for `binsize = 8` I also observe that shifting by 2 bits gives better convergence than shifting by 3 bits. Is this intentional? Why is this the case?
There also appears to be a bug in the WTA hash: https://github.com/keroro824/HashingDeepLearning/blob/3cebe6f99a5454bef6f241dea804e07e0d075484/SLIDE/WtaHash.cpp#L57