After investigating the problem further, it appears that the csv_to_npy output isn't accepted by vame.create_trainset() because csv_to_npy sets values whose likelihood falls below pose_confidence to NaN.
After fixing the issue from #67 locally, setting pose_confidence to 0 produces no NaNs in the generated NPY, which vame.create_trainset() happily accepts. Setting pose_confidence to any other value generates NaNs, which cause vame.create_trainset() to throw "empty sample array" errors.
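For reference, here is a minimal sketch of the masking behaviour described above; the column layout and filtering logic are assumptions based on standard DeepLabCut CSVs, not VAME's actual implementation:

```python
import numpy as np
import pandas as pd

def mask_low_confidence(csv_path, pose_confidence):
    # DLC CSVs have a 3-row header and one (x, y, likelihood) triplet per bodypart
    df = pd.read_csv(csv_path, header=[0, 1, 2], index_col=0)
    arr = df.to_numpy(dtype=float)
    for i in range(2, arr.shape[1], 3):  # every third column is a likelihood
        low = arr[:, i] < pose_confidence
        arr[low, i - 2] = np.nan  # mask x of low-confidence frames
        arr[low, i - 1] = np.nan  # mask y of low-confidence frames
    # with pose_confidence=0 nothing is masked, so the npy contains no NaNs
    return arr
```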
This function may not have been updated to match the output of egocentric alignment (which I'm guessing is the function most often used before vame.create_trainset()), so besides interpolating the NaNs (see the sketch below), I would check whether there is anything else that needs to be added.
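A minimal sketch of that interpolation step, assuming a (frames, features) array; this helper is hypothetical, not part of VAME:

```python
import numpy as np

def interpolate_nans(arr):
    # linearly interpolate NaN gaps in each feature column along the time axis
    out = arr.copy()
    t = np.arange(out.shape[0])
    for c in range(out.shape[1]):
        col = out[:, c]  # view into out, so edits below stick
        nans = np.isnan(col)
        if nans.any() and not nans.all():
            col[nans] = np.interp(t[nans], t[~nans], col[~nans])
    return out
```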
Thank you for posting this! We updated the csv_to_numpy.py function and hope that this resolves the issue. We will extensively check this with the newer version of VAME in a few months.
Cheers, Kevin
This issue seems to be separate from #67 and I am not sure what is going on here. After identifying a body point that was problematic, throwing it out, rebuilding the npy using vame.csv_to_npy, and modifying num_features in config.yaml, I'm still getting vame.create_trainset() errors as shown below. The npy file itself seems fine (there are 26 "columns" for 26 features from 13 bodyparts). Is the interpolation a problem if there are too many NaNs in a row?
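To test that long-gap hypothesis, something like this could measure the longest consecutive NaN run per feature; the file name and the (features, frames) orientation are assumptions:

```python
import numpy as np

def longest_nan_run(row):
    # longest consecutive stretch of NaNs in one feature row
    longest = count = 0
    for isnan in np.isnan(row):
        count = count + 1 if isnan else 0
        longest = max(longest, count)
    return longest

data = np.load('video-1-PE-seq.npy')  # placeholder file name
for i, row in enumerate(data):
    print(f'feature {i}: longest NaN run = {longest_nan_run(row)}')
```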
Edit: I just calculated the average likelihood, and most of the points are in the 95-99% range, with 3 body points in the 75-86% range. I could go back and edit my local version of csv_to_npy from #67 so I can play around with pose_confidence, if that is the most likely reason the "array is empty" errors are popping up.
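For reference, those averages can be computed straight from the DLC CSV (file name is a placeholder; level 2 of the column MultiIndex holds x/y/likelihood in standard DLC output):

```python
import pandas as pd

df = pd.read_csv('video-1.csv', header=[0, 1, 2], index_col=0)
likelihood = df.xs('likelihood', axis=1, level=2)  # one column per bodypart
print(likelihood.mean())  # mean likelihood per bodypart
```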
Setting robust: false throws a different error, but now it seems to load the 2nd file as well.
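For context, my understanding is that robust toggles an IQR-based outlier cleanup inside vame.create_trainset(); a rough sketch of that idea follows (iqr_factor mirrors the config.yaml parameter name, but the logic here is only illustrative, not VAME's actual code):

```python
import numpy as np

def iqr_cleanup(data, iqr_factor=4):
    # mask values lying further than iqr_factor * IQR beyond the quartiles;
    # the resulting NaNs would then need interpolation like any other gap
    q1, q3 = np.nanpercentile(data, [25, 75])
    iqr = q3 - q1
    cleaned = data.copy()
    cleaned[(data < q1 - iqr_factor * iqr) | (data > q3 + iqr_factor * iqr)] = np.nan
    return cleaned
```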