jackd / human_pose_util

Utility files for human pose estimation in Python

Issue with human_pose_util/dataset/eva/skeleton.py #1

Closed pavanteja295 closed 5 years ago

pavanteja295 commented 5 years ago

Hey Jack, I have been trying to run human_pose_util/dataset/eva/raw_tree/raw_tree.py and I'm not able to find the function native_to_s16 used in raw_tree.py.

To quote the output, this is what I get:

File "raw_tree.py", line 334, in show
    p3_world_16, p3_world_14 = convert(view.p3_world[image_frame])
  File "raw_tree.py", line 330, in convert
    p16 = native_to_s16(native)
NameError: name 'native_to_s16' is not defined

I tried to look for the function, but it doesn't exist in the file.

Can you please help me out with using your repo?

Thanks!

jackd commented 5 years ago

Hi Pavanteja, I've seen this and promise I'll get back to it - things are pretty hectic at work for the rest of the week though, sorry for the delay. If you're desperate, it's probably something I accidentally deleted after I'd done the conversion, so it'll likely be in the git history somewhere - otherwise I'll sort it out in a week or so.

pavanteja295 commented 5 years ago

Hey Jack, thanks for the quick reply. Can you at least tell me what the function does on the whole? If I understand correctly, the HumanEva annotations use different joint names than the ones used in general, and you want to convert the given joints into the general joints? Let me know if this is the case.

jackd commented 5 years ago

Just pushed a fix. I changed interfaces at some point to using SkeletonConverters, but didn't fix the example code scattered around the place. It should have used the parent directory's skeleton.s20_to_s16_converter().convert(native) - native is the native skeleton (i.e. the skeleton provided by the original dataset) with 20 joints, whereas s14/s16 have 14/16 joints.
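
For concreteness, here's a minimal sketch of what the fixed convert helper could look like - the import path follows the issue title, and s16_to_s14_converter is a hypothetical name (only s20_to_s16_converter is confirmed above):

    from human_pose_util.dataset.eva import skeleton

    def convert(native):
        # native: pose data on the dataset's original 20-joint skeleton
        p16 = skeleton.s20_to_s16_converter().convert(native)
        # hypothetical second step: the traceback shows convert returning
        # both 16- and 14-joint poses, so assume an analogous converter
        p14 = skeleton.s16_to_s14_converter().convert(p16)
        return p16, p14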

Disclaimer: you'll probably find a fair few issues like this. Feel free to file them and I'll get to them when I get a chance, but it won't be a high priority for the next week. Good luck.

pavanteja295 commented 5 years ago

Thanks a lot for such a quick fix. Just one doubt I have: how do you convert Image_data, which contains the video files, into images that I can keep for future use? Also, can I use hdf5_tree.py to convert the uncompressed files to an hdf5 file?

jackd commented 5 years ago

I know I tried doing that once, but I ended up concluding it was a bad idea - video compression is best, and if you try to save raw data it will explode to an unmanageable size. It might work if you only wanted to do a subset of the data - every 10th frame or something - but the size still ends up being quite unmanageable if you're not smart about it. I haven't revisited it since I've done some work with imagenet and learned some things (feel free to check out this script from my imagenet repo that saves externally compressed image data as vlen hdf5 data. Don't try and save frames in individual datasets - you'll get this behaviour), but I can guarantee I haven't implemented anything like that in here.
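
To illustrate the vlen idea (a sketch, not the linked script - file paths and dataset names are assumptions):

    import glob

    import h5py
    import numpy as np

    # Store externally compressed JPEG bytes as variable-length uint8 rows
    # instead of decoded frames. Assumes frames were already extracted
    # (e.g. with ffmpeg) to frames/*.jpg; all names here are illustrative.
    jpeg_paths = sorted(glob.glob('frames/*.jpg'))
    with h5py.File('frames.h5', 'w') as f:
        dset = f.create_dataset(
            'jpeg_frames', shape=(len(jpeg_paths),),
            dtype=h5py.vlen_dtype(np.dtype('uint8')))  # h5py >= 2.10
        for i, path in enumerate(jpeg_paths):
            with open(path, 'rb') as fp:
                dset[i] = np.frombuffer(fp.read(), dtype=np.uint8)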

pavanteja295 commented 5 years ago

Hey, thanks a lot for the information and for such interactive issue resolving. The last question I have: I think you haven't downsampled any of the annotations stored in hdf5, but when I extract frames from the provided videos using ffmpeg at the 60 fps frame rate given in the paper, the number of frames in the hdf5 file surprisingly does not match the number of images extracted from the video. Any idea about this?

jackd commented 5 years ago

I observed the same thing, but the difference was only a few frames if I recall correctly. I can't remember exactly how I reconciled it - I think I just trimmed the last few frames, after visually verifying that I couldn't really tell the difference between trimming start frames and trimming end frames.
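
In code, that reconciliation is just trimming to the shorter length (a sketch; names are illustrative):

    def trim_to_match(frames, annotations):
        # Drop trailing entries so the extracted images and the hdf5
        # annotations line up; assumes the mismatch is only a few frames
        # at the end, as described above.
        n = min(len(frames), len(annotations))
        return frames[:n], annotations[:n]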

pavanteja295 commented 5 years ago

Yeah, thanks for your suggestion. I was able to create it. One doubt I have: in meta.py you have a partition of the frames - is this the training/validation partition? If not, how can I find the train and validation split?

jackd commented 5 years ago

... yep, I should have documented that better. 36 hours to a (different) deadline so I won't address it properly now, but I recall the numbers coming straight from the original EVA paper. From memory, and based on the limited comments I have there, S1/Walking/Trial 1 frames[:590] were validation while frames[590:] were training, trial 2 was entirely for testing, and trial 3 was entirely training (total frame counts: 1180, 980 and 3238 respectively).
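
Written out as data (a sketch of the split as recalled above - the layout is illustrative, not meta.py's actual structure):

    # S1/Walking split as described above; ranges are half-open
    # [start, stop) frame indices, and the dict layout is illustrative.
    S1_WALKING_PARTITION = {
        'trial_1': {'val': (0, 590), 'train': (590, 1180)},  # 1180 frames
        'trial_2': {'test': (0, 980)},                       # 980 frames
        'trial_3': {'train': (0, 3238)},                     # 3238 frames
    }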