lvsn / deeptracking

Deep 6 DOF Tracking

Thread error : 'image_size' #4

Closed mingliangfu closed 6 years ago

mingliangfu commented 6 years ago

Hi,

I am trying to rerun this project. After preparing the dataset, I run train.py to train a torch model. The program blocks at lines 189/190 with the following output:

Thread error : 'image_size'
Process Process-13:
Traceback (most recent call last):
  File "/home/sia/anaconda3/envs/py36/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/sia/anaconda3/envs/py36/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sia/deeptracking/deeptracking/data/parallelminibatch.py", line 65, in worker_
    batch = self.load_minibatch(task)
  File "/home/sia/deeptracking/deeptracking/data/dataset.py", line 243, in load_minibatch
    return image_buffer, prior_buffer, label_buffer
UnboundLocalError: local variable 'image_buffer' referenced before assignment

It seems to be associated with the parallelminibatch module, but I don't know how to resolve it.

MathGaron commented 6 years ago

Hey,

The error is indeed not clear. It is associated with the dataset object, which inherits from parallelminibatch. My guess would be that there is something wrong with the viewpoints.json of your data: "image_size" is the pixel size of a sample, stored under the metadata key of viewpoints.json.

The error message is not very helpful because the failure happens in a different process (the loader processes); the actual failure is around line 233 of dataset.py.
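To illustrate why the traceback points at the return statement rather than the real cause, here is a rough sketch of the pattern that produces this behavior. This is not the repository's actual code; expensive_load is a placeholder stand-in:

def expensive_load(task):
    # Stand-in for the real loading code: fails the way a missing 'image_size' key would.
    raise KeyError("image_size")


def load_minibatch(task):
    try:
        image_buffer = expensive_load(task)
    except Exception as error:
        # Only a short message is printed; the real exception is swallowed here.
        print("Thread error : {}".format(error))
    # Because the try block failed, image_buffer was never assigned, so this
    # return raises UnboundLocalError and hides the original error.
    return image_buffer


load_minibatch(None)  # prints "Thread error : 'image_size'", then raises UnboundLocalError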

In short: make sure that viewpoints.json has a metadata/image_size field, and make sure that you have a valid dataset path in your configuration files.
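A quick way to check this is a small stand-alone script like the sketch below (the dataset path is a placeholder; the metadata/image_size layout is the one described above):

import json
import os

dataset_path = "path/to/your/dataset"  # placeholder: point this at your train or valid folder

with open(os.path.join(dataset_path, "viewpoints.json")) as handle:
    viewpoints = json.load(handle)

metadata = viewpoints.get("metadata", {})
if "image_size" in metadata:
    print("image_size:", metadata["image_size"])
else:
    print("viewpoints.json has no metadata/image_size field; the dataset needs to be regenerated or repaired")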

If you find out what the problem was, please give me some feedback so I can produce a better error message for this case.

mingliangfu commented 6 years ago

Hi @MathGaron, thanks for your tips. As you inferred, the corresponding json file of the validation data had been destroyed, which broke the validation loop. It was all caused by a switchover (booting into Windows 7) on a dual-boot Ubuntu/Windows 7 machine.

Myzhencai commented 6 years ago

@mingliangfu I am trying to rerun the project too, but I have an issue with the config json files.

{
  # This is a list of all 3D models to render (support more than 1)
  "models": [
    {
      "name": "skull",
      "model_path": "3dmodels/skull/geometry.ply",
      "ambiant_occlusion_model": "3dmodels/skull/ao.ply",
      "object_width": "250"        # cropping width of the model (mm)
    }
  ],

  "camera_path": "sequences/skull/camera.json",
  "shader_path": "/home/gaofei/libfreenect2/src/shader",
  "output_path": "/home/gaofei/outputd",
  "preload": "False",              # True or False: if True, append to data already contained in output path, else overwrite
  "save_type": "numpy",            # numpy or png, trade-off between load speed and space
  "sample_quantity": "100000",     # quantity of samples per model
  "image_size": "150",             # pixel width/height of the samples
  "translation_range": "0.02",     # max translation (m)
  "rotation_range": "10",          # max rotation (degrees)
  "sphere_min_radius": "0.4",      # min distance from camera
  "sphere_max_radius": "1.5"       # max distance from camera
}

As you can see, I have changed the path values to the right places, but when I run:

python generate_synthetic_data.py -c generate_synthetic_example.json

I get this error:

Traceback (most recent call last):
  File "generate_synthetic_data.py", line 25, in <module>
    data = json.load(data_file)
  File "/usr/lib/python3.4/json/__init__.py", line 268, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.4/json/decoder.py", line 359, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting property name enclosed in double quotes: line 3 column 5 (char 7)

Any suggestions? Thanks for your time :-)

MathGaron commented 6 years ago

Hi Myzhencai, it is an error in your json. Did you leave the comments in (e.g. # blabla)? If so, that is the problem: json does not support comments. I put them in as documentation, but I realize now it was a poor choice because it breaks the json itself...

Remove the # ... comments from the json and it should work. Hope it helps!
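If you would rather not edit every config by hand, one possible workaround (not part of the repository, and the regex is naive: it also strips any # that appears inside a string value) is to remove the comments before parsing:

import json
import re

# Strip '#' comments from the documented example config, then parse it.
with open("generate_synthetic_example.json") as handle:
    raw_text = handle.read()

config = json.loads(re.sub(r"#.*", "", raw_text))
print(config["image_size"])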

Myzhencai commented 6 years ago

@MathGaron OK, I will try. Thanks :-)

Myzhencai commented 6 years ago

@MathGaron Hi, here are my json file and the error message. Thanks to your help I was able to make my json file work, but now I hit another issue: when I run the command python generate_synthetic_data.py -c generate_synthetic_example.json, I get the scene shown below in pic3: issue.pdf

MathGaron commented 6 years ago

The NotImplementedError is raised in plyparser. I did not provide the loading code in the package because of some license conflicts. You can use this implementation or implement your own plyparser.py.
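For reference, a rough, hypothetical sketch of such a loader built on the third-party plyfile package (pip install plyfile) is shown below. The exact interface that deeptracking's plyparser.py expects is not documented in this thread, so the function name and return format here are assumptions to adapt:

import numpy as np
from plyfile import PlyData  # third-party: pip install plyfile


def load_ply(path):
    """Return (N x 3 vertex array, list of faces) read from a .ply file."""
    ply = PlyData.read(path)
    vertex = ply["vertex"]
    points = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=1)
    faces = None
    if any(element.name == "face" for element in ply.elements):
        # Some .ply files name this property 'vertex_index' instead of 'vertex_indices'.
        faces = [list(indices) for indices in ply["face"]["vertex_indices"]]
    return points, faces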

Myzhencai commented 6 years ago

@MathGaron Thanks for your reply. Have a nice day.

Myzhencai commented 6 years ago

@MathGaron

# This is a list of all 3D models to render (support more than 1)
"models": [
  {
    "name": "model_name",
    "model_path": "path/to/geometry.ply",
    "ambiant_occlusion_model": "path/to/ao.ply,
    "object_width": "250"   # cropping width of the model (mm)
  }
],

It seems the closing " after path/to/ao.ply is missing; I wish you could add it. Have a nice day :-)

MathGaron commented 6 years ago

Just committed the changes, thank you!

Myzhencai commented 6 years ago

@MathGaron Here is a snapshot of running generate_synthetic_data.py. It seems the viewpoint render window is dying and shows nothing. Should I install some software for the ply files, or something else? For example, should I modify modelrenderer.py? 未命名 6.pdf

MathGaron commented 6 years ago

Hey, this is normal behavior with the current implementation. Currently, a window is used to do the rendering, but we do not show the results in it. As long as the files are saved, it should be fine. You could change the OpenGL code to render into an FBO (framebuffer object) instead, but a dead window during data generation is usually not a problem!
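For reference, here is a rough PyOpenGL sketch of the FBO idea mentioned above. It is not code from this repository; it assumes a valid OpenGL context already exists (for example the hidden window), and the size values are placeholders:

from OpenGL import GL


def create_offscreen_fbo(width=150, height=150):
    # Color attachment: a texture that can later be read back with glReadPixels.
    color_tex = GL.glGenTextures(1)
    GL.glBindTexture(GL.GL_TEXTURE_2D, color_tex)
    GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, width, height, 0,
                    GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, None)

    # Depth attachment so the offscreen render still does depth testing.
    depth_rb = GL.glGenRenderbuffers(1)
    GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, depth_rb)
    GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT24, width, height)

    # Framebuffer object: while it is bound, rendering goes here instead of the window.
    fbo = GL.glGenFramebuffers(1)
    GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo)
    GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
                              GL.GL_TEXTURE_2D, color_tex, 0)
    GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT,
                                 GL.GL_RENDERBUFFER, depth_rb)
    assert GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE
    return fbo, color_tex

After binding the FBO, the rendered image can be read back with glReadPixels and saved to disk without ever showing anything in a window.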

Myzhencai commented 6 years ago

@MathGaron I see, thanks for your reply.

Myzhencai commented 6 years ago

@MathGaron I am trying to run test_sequence.py with my json file set like this (I know some parts might be wrong), e.g. the model path: "model_file": "/home/gaofei/torch/install/share/lua/5.1/pl/class.lua":

{
  "name": "clock",
  "model_path": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/3dmodels/clock/geometry.ply",
  "ambiant_occlusion_model": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/3dmodels/clock/ao.ply",
  "object_width": "250"
  }
],
"model_file": "/home/gaofei/torch/install/share/lua/5.1/pl/class.lua",
"output_path": "/home/gaofei/output1",
"shader_path": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/deeptracking/data/shaders",
"model_path": "/home/gaofei/output",
"video_path": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/sequences/clock",
"reset_frequency": "0",
"closed_loop_iteration": "3",
"save_frames": "False",
"save_video": "True",
"show_axis": "False",
"show_depth": "False",
"show_zoom": "True",
"use_sensor": "True",
"COMMENT": "Sensor only",
"detector_layout_path": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/deeptracking/detector/aruco_layout_tiny.xml",
"sensor_camera_path": "/home/gaofei/ros/indigo/ros_ws/src/deeptracking/sequences/turtle/camera.json",
"offset": "file"
}

I then get the error below (I want to correct whatever is set wrong in the json file). It seems I do not have the .luarocks files. By the way, I just followed the instructions in the readme of hughperkins pytorch and I can import PyTorch; then I followed the pytorch github and installed it so I can import torch. Does that cause this issue, or is it something else? Could you please share some specific details about the model path? Sorry for bothering you so much; I am new and this is the final exam of my class. Hoping for your reply, have a nice day. Best wishes from China.

Myzhencai commented 6 years ago

no field package.preload['/home/gaofei/torch/install/share/lua/5.1/pl/class']
no file '/home/gaofei/.luarocks/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.lua'
no file '/home/gaofei/.luarocks/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class/init.lua'
no file '/home/gaofei/torch/install/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.lua'
no file '/home/gaofei/torch/install/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class/init.lua'
no file './/home/gaofei/torch/install/share/lua/5/1/pl/class.lua'
no file '/home/gaofei/torch/install/share/luajit-2.1.0-beta1//home/gaofei/torch/install/share/lua/5/1/pl/class.lua'
no file '/usr/local/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.lua'
no file '/usr/local/share/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class/init.lua'
no file '/home/gaofei/torch/install/lib//home/gaofei/torch/install/share/lua/5/1/pl/class.so'
no file '/home/gaofei/.luarocks/lib/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.so'
no file '/home/gaofei/torch/install/lib/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.so'
no file './/home/gaofei/torch/install/share/lua/5/1/pl/class.so'
no file '/usr/local/lib/lua/5.1//home/gaofei/torch/install/share/lua/5/1/pl/class.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/home/gaofei/torch/install/lib//home/gaofei/torch/install/share/lua/5.so'
no file '/home/gaofei/.luarocks/lib/lua/5.1//home/gaofei/torch/install/share/lua/5.so'
no file '/home/gaofei/torch/install/lib/lua/5.1//home/gaofei/torch/install/share/lua/5.so'
no file './/home/gaofei/torch/install/share/lua/5.so'
no file '/usr/local/lib/lua/5.1//home/gaofei/torch/install/share/lua/5.so'
no file '/usr/local/lib/lua/5.1/loadall.so')