Open philippeller opened 6 years ago
Handling fp16 is complex, and it may not work for samples with diverse value ranges; that is a consequence of the precision loss. It certainly works for some cases which I tested; I need the data to determine whether nothing can be done or whether there is an actual calculation bug.
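The value-range problem can be demonstrated directly in NumPy: fp16 carries only a 10-bit mantissa, so when magnitudes in one array differ widely, small contributions vanish entirely. A minimal sketch (the values are illustrative, not taken from the reported dataset):

```python
import numpy as np

# fp16 has a 10-bit mantissa: near 10000 the representable spacing is 8,
# so adding 1.0 to 10000.0 changes nothing in half precision.
x = np.float16(10000.0) + np.float16(1.0)
print(x)  # 10000.0

# The same sum is exact in single precision.
y = np.float32(10000.0) + np.float32(1.0)
print(y)  # 10001.0
```

Distance accumulations in clustering behave the same way: in fp16, small per-dimension contributions can be swallowed by large ones, which can stall convergence.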
Thanks for the prompt answer! I'd be more than happy to provide you with a .npy file containing the data (or a subset thereof). I could upload it to a GitHub repo, for example?
I have uploaded two small test data sets: they contain identical data, but in single and half precision. While the fp32 set converges in 5 iterations, the fp16 set does not converge even after 100 iterations.

fp16: https://github.com/philippeller/retro/blob/directionality_phiipp/table_compression/testdata_fp16.npy?raw=true

fp32: https://github.com/philippeller/retro/blob/directionality_phiipp/table_compression/testdata_fp32.npy?raw=true
Perfect, I will have a look in the following days.
Even though the fp16 unit tests run successfully, I cannot get clustering to work on my sample in fp16 mode.
When I use my data with `.astype(np.float32)`, the clustering output looks something like: (output omitted). With the exact same data but `.astype(np.float16)`, however, I get: (output omitted). It does not converge.
I also tried casting the data to fp16 and then back to fp32 to lose precision on the dataset, but that still converged fine.
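That round-trip experiment can be sketched as follows; the random array is a hypothetical stand-in for the real sample, since the point is only the dtype round-trip:

```python
import numpy as np

# Hypothetical stand-in for the actual dataset.
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 3)).astype(np.float32)

# Cast down to fp16 and back: the values now carry only fp16 precision,
# but all clustering arithmetic would still run in fp32.
degraded = data.astype(np.float16).astype(np.float32)

print(degraded.dtype)                 # float32
print(np.abs(data - degraded).max())  # small fp16 rounding error only
```

That this variant still converges suggests the issue is in the fp16 arithmetic path rather than in the reduced precision of the stored values.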
Any ideas?
(I tried on a Tesla P100 as well as Titan X and GTX 1080 Pascal cards.)