Closed monajalal closed 1 year ago
I am trying to run the code natively on any novel test object you can provide. Could you please share the complete command along with the necessary artifacts, such as `dataset_info.yml`?
```
(tracknet) mona@ard-gpu-01:~/iros20-6d-pose-tracking$ python predict.py
ckpt_dir: None
dataset_info_path None/../dataset_info.yml
Traceback (most recent call last):
  File "/home/mona/iros20-6d-pose-tracking/predict.py", line 633, in <module>
    with open(dataset_info_path,'r') as ff:
FileNotFoundError: [Errno 2] No such file or directory: 'None/../dataset_info.yml'
```
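For what it's worth, the bogus path suggests `ckpt_dir` defaulted to `None` and was then interpolated into the `.../dataset_info.yml` string. A minimal sketch of guarding against that (the function name is mine, not from the repo):

```python
import os

def resolve_dataset_info(ckpt_dir):
    """Build the dataset_info path relative to the checkpoint directory,
    failing early with a clear message if ckpt_dir was never set."""
    if ckpt_dir is None:
        raise ValueError("ckpt_dir is not set; pass the checkpoint directory explicitly")
    return os.path.normpath(os.path.join(ckpt_dir, "..", "dataset_info.yml"))

# Formatting an unset variable reproduces the exact path in the traceback:
print("{}/../dataset_info.yml".format(None))  # None/../dataset_info.yml
```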
Your current `dataset_info.yml` shows:
```yaml
camera: # Intrinsic params
  height: 480
  width: 640
  focalX: 1.066778000000000020e+03
  focalY: 1.067487000000000080e+03
  centerX: 3.129868999999999915e+02
  centerY: 2.413109000000000037e+02
train_samples: 200000 # Number of training samples
val_samples: 2000 # Number of validation samples
max_translation: 0.02 # Max possible translation in meters between consecutive images in your video
max_rotation: 15 # Max possible rotation in degrees between consecutive images in your video
boundingbox: 10 # Bounding box padding percentage. No need to change
resolution: 176 # Resolution of the image
distribution: gauss # No need to change this
renderer: pyrenderer
models:
  0: # Index in rendered segmentation image
    model_path: /home/se3_tracknet/object_models/bunny/1.ply # Path to your CAD model
blender:
  range_x: [-0.3, 0.3]
  range_y: [-0.3, 0.3]
  range_z: [0.4, 0.9]
  env_light_range: [0.3, 5]
  env_light_color: [[0, 0.05], [0, 0.05], [0, 0.05]] # Color type float
  max_lamp_num: 3
  lamp_brightness: [0.1, 1]
  lamp_colors: [[0.5, 1], [0.5, 1], [0.5, 1]]
  lamp_pos_range: [[-3, 3], [-3, 3], [-2, 0]]
texture_folders: # Images to use as background during synthetic data generation
  [
    '/media/bowen/e25c9489-2f57-42dd-b076-021c59369fec/DATASET/dtd/images/**/*.jpg',
    '/media/bowen/e25c9489-2f57-42dd-b076-021c59369fec/DATASET/ETH_Synthesizability/**/*.jpg',
  ]
```
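As an aside, the `camera` block above is just standard pinhole intrinsics. Assembled into the usual 3x3 matrix K (a sketch only; the repo may store or consume these values differently):

```python
# Intrinsics copied from the dataset_info.yml above, assembled into the
# conventional 3x3 pinhole camera matrix K (a sketch, not the repo's code).
camera = {
    "focalX": 1.066778000000000020e+03,
    "focalY": 1.067487000000000080e+03,
    "centerX": 3.129868999999999915e+02,
    "centerY": 2.413109000000000037e+02,
}
K = [
    [camera["focalX"], 0.0, camera["centerX"]],
    [0.0, camera["focalY"], camera["centerY"]],
    [0.0, 0.0, 1.0],
]
for row in K:
    print(row)
```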
Inside `predict.py` there is also a `train_data_path` argument. What should I set it to in order to run inference on your novel-object data and see a demo of your code?
Also, I am not sure if this is related, but I get this error:
```
(tracknet) mona@ard-gpu-01:~/iros20-6d-pose-tracking$ python produce_train_pair_data.py
Computing object width
Traceback (most recent call last):
  File "/home/mona/iros20-6d-pose-tracking/produce_train_pair_data.py", line 231, in <module>
    completeBlender()
  File "/home/mona/iros20-6d-pose-tracking/produce_train_pair_data.py", line 158, in completeBlender
    mesh = trimesh.load(dataset_info['models'][0]['model_path'])
  File "/home/mona/anaconda3/envs/tracknet/lib/python3.9/site-packages/trimesh/exchange/load.py", line 116, in load
    ) = parse_file_args(file_obj=file_obj,
  File "/home/mona/anaconda3/envs/tracknet/lib/python3.9/site-packages/trimesh/exchange/load.py", line 630, in parse_file_args
    raise ValueError('string is not a file: {}'.format(file_obj))
ValueError: string is not a file: /home/se3_tracknet/object_models/bunny/1.ply
```
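This `ValueError` is simply trimesh refusing a nonexistent path: `model_path` in `dataset_info.yml` still points at the author's `/home/se3_tracknet/...` location, which does not exist on this machine. A hedged sketch of validating the paths before `trimesh.load` is ever called (the helper name is mine, not from the repo):

```python
import os

def missing_model_paths(dataset_info):
    """Return (index, path) pairs for every model file that does not exist,
    so a stale model_path is caught before trimesh.load raises."""
    missing = []
    for idx, entry in dataset_info.get("models", {}).items():
        path = entry.get("model_path", "")
        if not os.path.isfile(path):
            missing.append((idx, path))
    return missing

# The parsed dict corresponding to the dataset_info.yml above:
info = {"models": {0: {"model_path": "/home/se3_tracknet/object_models/bunny/1.ply"}}}
print(missing_model_paths(info))
```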
By novel object, do you mean objects other than those in YCB? If so, you first need to generate the synthetic data and train the model on it.
From what I understand, I need two processes running: one with `roscore` and one that runs the inference. However, when I start `roscore`, the other process exits (or vice versa). Also, `netifaces` was not installed in the Docker image.
By the way, do you have instructions for a native setup (not Docker) that does not use ROS?
I want to see how inference on novel objects works. I assume "novel" here covers novel poses, novel objects, and novel classes; is that right? Can you show a minimal example of how to use your pretrained model to get the 6D pose of novel objects?