lachwot closed this issue 3 years ago.
There is a similar issue mentioned here: https://github.com/wenbowen123/iros20-6d-pose-tracking/issues/10. It's likely that the arguments in predict.sh are not passed correctly to predict.py. I will correct that when I find a chance. As a quick fix, you can directly change the default params.
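The "quick fix" of changing the defaults might look like the sketch below. Only train_data_path is taken from this thread; the rest (the help text, the placeholder path) is illustrative, not necessarily what predict.py actually contains.

```python
# Minimal sketch: hard-code the path as an argparse default in predict.py
# so the value no longer needs to come from predict.sh.
# The placeholder path is illustrative -- point it at your own data.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--train_data_path",
    default="/path/to/your/synthetic_data",  # was None in the failing run
    help="folder containing dataset_info.yml",
)
# Empty list here so the sketch runs standalone; in predict.py you would
# call parser.parse_args() and let real CLI arguments override the default.
args = parser.parse_args([])
print(args.train_data_path)
```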
Cheers for the reply!
I'm checking those out at the moment, and I can see that predict.sh has training paths in it.
Looking at Issue #10, I can see that your synthetic data is used as the training path (and each object does indeed include a dataset_info.yml file). I've updated mine to the correct respective paths.
Does a dataset_info.yml file exist for the "raw" YCB-Video dataset?
For some context, I should also mention that I only wish to run inference on some of the YCB-Video dataset images (I'm aiming to rig the network to an RGB-D Kinect 2 camera feed with a ROS node), so I'm hoping not to have to download each object's large zipped synthetic data file. This also means I wish to track multiple objects at once (multiple objects inferred in each frame).
Other than that, I think I have the file running, at least for now :P
I might also take the chance to ask now: would it be possible to upload a zipped folder of the dataset_info.yml and textured.ply files for each of the YCB-Video objects somewhere? Just so I can mess around with those.
Cheers
Good to see you are able to run it. If you want to test on your own data from the Kinect, just a reminder that the training data were rendered using different intrinsic params from your Kinect's. I'm not sure whether it will still work. The recommended way is to re-generate some synthetic data using the same intrinsic params as your own camera. This is what we did for the YCBInEOAT dataset.
Unfortunately, I don't have such a collection. You may have to download the zip folders of the objects you are interested in.
Does this mean I'll have to retrain the network using data generated with my camera's own intrinsic parameters?
Or can I get away with knowing my camera's parameters (I've calibrated it) and use your weights on the "raw" YCB-Video dataset / Kinect 2 feed? The original YCB-Video dataset was almost certainly filmed on a camera with parameters different from ours.
I know that the dataset_info.yml files for each object have sections for the camera's intrinsic parameters, though I don't know whether these are used exclusively for training. I presume I can just update those camera parameters to my Kinect 2's and then feed my camera's stream to the network.
Here is the current dataset_info.yml file for now, with your original parameters:
```yaml
boundingbox: 10
camera:
  centerX: 312.9869
  centerY: 241.3109
  focalX: 1066.778
  focalY: 1067.487
  height: 480
  width: 640
distribution: gauss
models:
- 0: null
  model_path: /home/mu00185683/iros20-6d-pose-tracking/models/052_extra_large_clamp/textured.ply
  object_width: 238.6939428307838
resolution: 176
train_samples: 200000
val_samples: 2000
```
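For reference, the camera fields in that file map onto the standard 3x3 pinhole intrinsic matrix K in the usual way (focal lengths on the diagonal, principal point in the last column). A plain-Python sketch using the values from the file above:

```python
# Build the 3x3 pinhole intrinsic matrix K from the dataset_info.yml
# camera fields (values copied from the file above).
cam = {
    "focalX": 1066.778, "focalY": 1067.487,
    "centerX": 312.9869, "centerY": 241.3109,
}

K = [
    [cam["focalX"], 0.0,           cam["centerX"]],
    [0.0,           cam["focalY"], cam["centerY"]],
    [0.0,           0.0,           1.0],
]
```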
To get better performance, it's best to regenerate the synthetic data using your actual camera params and retrain on it. But I haven't tried directly updating the camera params. It might work if your params don't differ too much; otherwise, images captured by two very different cameras may look drastically different. Since I haven't tried this in practice, this is only my guess.
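One way to sanity-check whether your calibration "differs too much" is to compute the relative difference of each intrinsic parameter against the training values from dataset_info.yml. A rough sketch; the Kinect 2 numbers below are placeholders, not a real calibration, and there is no official threshold for "too much":

```python
# Compare calibrated intrinsics against the training-time values from
# dataset_info.yml (copied from the file above).
train = {
    "focalX": 1066.778, "focalY": 1067.487,
    "centerX": 312.9869, "centerY": 241.3109,
}
kinect2 = {  # placeholder numbers -- substitute your own calibration
    "focalX": 1060.0, "focalY": 1060.0,
    "centerX": 320.0, "centerY": 240.0,
}

# Per-parameter relative difference, and the worst offender.
rel_diff = {k: abs(kinect2[k] - train[k]) / train[k] for k in train}
worst = max(rel_diff.values())
print(f"worst relative difference: {worst:.1%}")
```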
Ah, I see. I was hoping to avoid generating my own data, so I'll stick to the dataset_info.yml files you have provided as part of your data for now.
Thanks for all the help!
You can check the YCB-Video and YCBInEOAT datasets and choose whichever one's intrinsics match your own camera's more closely.
@lachwot Hi, Dr. Wen. I have a similar question. How did you generate the dataset_info.yml files? The raw YCB-Video dataset does not contain such files. Thanks in advance.
Hi! When running predict.py, I get the following error if I don't specify a path for train_data_path:

This makes sense, given that None as a path yields nothing. Setting train_data_path to where I have set up my dataset, however, gives me the same error, as dataset_info.yml cannot be found in my YCB-Video dataset folder. I can confirm that I have downloaded the dataset correctly, since my teammate and I have the same files and folders within the YCB_Video_Dataset folder.

Should I have a dataset_info.yml file somewhere? If not, how should it be set out? Is there a file that I can download? What should my YCB_Video_Dataset folder look like? Here is my current directory within YCB_Video_Dataset:

I should also mention that my dataset folder is separate from this repository's folder, since I need it to test other 6D pose estimator networks. Looking at the code in more detail suggests that dataset_info.yml is required for predictions, as it contains camera parameters, image resolutions, bounding box info, etc.

Cheers, Lachlan