lvsn / deeptracking

Deep 6 DOF Tracking

Still issues about generating data #9

Closed Myzhencai closed 6 years ago

Myzhencai commented 6 years ago

Hi @MathGaron, as you mentioned, I took a look at OpenGL, and I reinstalled everything on a new computer, but I still get the same issue. In the first screenshot I tested OpenGL and it is fine; then I ran it with the skull .ply file and got the black output shown, with no error and no warning. I don't think it is a json file issue, as you can see in the picture. Could it be an issue with the plyparser, or anything you did not mention in the project? By the way, could you share the workflow for installing the dependencies, or just share the generated data? That would be a great help with my issue.

Also, in plyparser.py this part is never used in the code, so why should we import PlyElement? Looking forward to your feedback; I only have 3 days for the project. By the way, can the generated black pictures be used as training data, since they contain the pose information?

```python
@staticmethod
def save_points(points, path):
    vertex = np.zeros(points.shape[0], dtype=([('x', 'f4'), ('y', 'f4'), ('z', 'f4')]))
    vertex.fill(255)
    vertex['x'] = points[:, 0]
    vertex['y'] = points[:, 1]
    vertex['z'] = points[:, 2]
    el = PlyElement.describe(vertex, 'vertex')
    PlyData([el], text=ascii).write(path)
```

Myzhencai commented 6 years ago

When I add print(rgbB), I get this:

MathGaron commented 6 years ago

you can view the images with matplotlib or cv2.imshow, it is easier than looking at the print.

```python
cv2.imshow("test", rgb[:, :, ::-1])  # OpenCV expects BGR, so reverse the channel order
cv2.waitKey()
# or
plt.imshow(rgb)
plt.show()
plt.imshow(depth)
plt.show()
```

I am sorry, but I can hardly send you an already generated dataset right now; I am working on another version and approaching a deadline.

That said, it seems that most of your setup works, but it looks like the 3D model is not loaded. Your json files also seem fine.

You could try this camera.json file instead :

You can verify in modelrenderer.py, setup buffer, that the vertex information is properly loaded.

Myzhencai commented 6 years ago

@MathGaron, thanks for the help. I will try something more with your suggestions. Have a nice day.

Myzhencai commented 6 years ago

@MathGaron, here is where everything starts from. I don't think it is an issue with your code; it could be a GPU issue, since there are many GPU buffers, and my GPU is shown in the screenshot. Can you check whether anything looks abnormal in the workflow? Maybe I need to try another GPU. By the way, I did change the camera.json file and got the same issue, and the code did pick up the change, since the window resized to the camera parameters. Looking forward to your feedback, thanks :-)

```python
for model in MODELS:
    vpRender = ModelRenderer(model["model_path"], SHADER_PATH, dataset.camera, window, window_size)
    vpRender.load_ambiant_occlusion_map(model["ambiant_occlusion_model"])
```

(screenshots attached)

MathGaron commented 6 years ago

Ok, the issue seems to be with the normal buffer. You are not supposed to receive any NaN values in there; that would explain the weird triangles you get as output, I guess.

So first, make sure that you have the same model files (.ply) as the ones on our website, and/or make sure that all 4 objects give you the same problem. If you still have the problem, then there is something wrong in the code.

As I told you earlier, I think I only tested with python3, so if you are using python2 it is possible that you get weird numerical bugs like this one.

If you still want to use python2 (but I strongly suggest using python3!), the bug is potentially related to line 50 in modelrenderer.py. At that point I normalize the normal vectors, and for some reason a vector norm could be 0, which generates a division by zero error.
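That division-by-zero can be sketched as a guarded normalization. This is an illustrative helper, not code from the repository; `normalize_rows` and the epsilon threshold are assumptions:

```python
import numpy as np

def normalize_rows(normals, eps=1e-8):
    """Normalize each row vector, leaving zero-length rows as zeros.

    Guards against the division by zero that produces NaN/inf when a
    vertex normal has zero norm. Hypothetical helper for illustration.
    """
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    safe = np.where(norms < eps, 1.0, norms)  # replace 0 norms so division is safe
    return normals / safe

normals = np.array([[1.0, 2.0, 3.0],
                    [0.0, 0.0, 0.0]], dtype=np.float32)
unit = normalize_rows(normals)  # row 0 becomes unit length, row 1 stays zero
```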

Myzhencai commented 6 years ago

Hi @MathGaron, I use python 3.4 and all the models have the same issue (I checked the .ply files on Windows and they look just as in the paper and slides). While installing almost every dependency I got messages linking to .....nvidia..., so I guess it must be a GPU issue, since it cannot compute the right numbers. Anyway, I will try on a workstation with a Quadro M2000M GPU and share the result with you tomorrow. Have a nice day, it is 22:36 in China.

MathGaron commented 6 years ago

Hey, I don't think it is a GPU-related bug. I would focus on understanding why the normals contain NaN values. The bug seems to be related to the 3D model loading, not the rendering. Make sure that the function loads the normals correctly, and if it does, make sure that the normalization is fine and there are no 0 norms.

Myzhencai commented 6 years ago

Hi @MathGaron, following your guidance I tested line 50 of modelrenderer.py ("the bug is potentially related to line 50"). Here is the result; it seems an overflow in this line causes all the issues. I tried to change the type to np.float64, but that failed with values missing or with nx, ny, nz not found. Do you think this is an issue with the code, or do I just need to change the platform (GPU) to generate the data? (screenshots attached)

MathGaron commented 6 years ago

I am not sure what you are printing in your screenshots, but it seems like it is the normals after calling plyparser. I am also not sure why there are more than 3 values on axis 1. That said, there are huge values that should not be there (xe-38). I will try to get some time today to reproduce this and update you later.

Myzhencai commented 6 years ago

Hi, the main thing I did: I first used a test.ply (made by myself) to check the normals after calling plyparser, then printed the normals after this line:

```python
normals = normals / np.linalg.norm(normals, axis=1)[:, np.newaxis]
```

No matter which .ply file I use, it still shows the error like in the image. But when I change this code in plyparser:

```python
def get_vertex_normals(self):
    for element in self.data.elements:
        if element.name == "vertex":
            try:
                return PlyParser.recarray_to_array(element.data[["nx", "ny", "nz"]], np.float32)  # I changed this to np.float64
```

as marked in red in the screenshot: when I do not change it, it shows 3 numbers with a warning; when I change it, it shows 2 numbers without a warning, but one is missing.

Myzhencai commented 6 years ago

And sorry, I cannot run on the workstation I mentioned, because my classmates just crashed the system by installing CUDA. Oh my god!!

MathGaron commented 6 years ago

Can you send your custom .ply? There is something wrong with it. The normals should be an Nx3 array with information about every vertex normal. Currently, I can't reproduce your bug.

Myzhencai commented 6 years ago

OK, I will share it with you tomorrow. Actually, I just modified the geometry.ply of the dragon: I changed the number of vertex elements to 5 and the nx ny nz values to 1 2 3 ...

Myzhencai commented 6 years ago

And I just printed normals[0:1, ...] (I do not remember the exact slice), so the printout is just the three normal values.

Myzhencai commented 6 years ago

```
ply
format ascii 1.0
comment VCGLIB generated
element vertex 5
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
property uchar alpha
property float quality
element face 2
property list uchar int vertex_indices
end_header
-0.0758073 0.0147489 -0.00880594 1 2 3 191 191 191 255 0.19343
-0.0757646 0.0144919 -0.00830121 4 5 6 175 175 175 255 0.177557
-0.0761136 0.0143482 -0.00857608 7 8 9 192 192 192 255 0.193868
-0.0761937 0.015855 -0.0102308 10 11 12 184 184 184 255 0.185957
-0.0760448 0.0155081 -0.00982756 13 14 15 192 192 192 255 0.194475
3 39499 39298 39485
3 37863 39485 39298
```

Just a little change of the ao.ply for the dragon 3D model.
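For reference, the normals of a vertex element laid out like this header can be extracted field by field with plain numpy. This is a hedged sketch, not the repository's `recarray_to_array`; the dtype below just mirrors the header above, and copying each field explicitly (instead of viewing a multi-field selection as float32) avoids accidentally picking up padding or neighbouring bytes:

```python
import numpy as np

# Structured array mimicking one row of the vertex element above
# (layout taken from the .ply header; values from its first line).
vertex = np.array(
    [(-0.0758073, 0.0147489, -0.00880594, 1.0, 2.0, 3.0, 191, 191, 191, 255, 0.19343)],
    dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
           ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4'),
           ('red', 'u1'), ('green', 'u1'), ('blue', 'u1'), ('alpha', 'u1'),
           ('quality', 'f4')])

# Copy field by field into a plain Nx3 float32 array.
normals = np.stack([vertex['nx'], vertex['ny'], vertex['nz']], axis=1).astype(np.float32)
```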

Myzhencai commented 6 years ago

@MathGaron, can you share your result for this? Thank you a lot. Maybe I should wait for you for some time today.

MathGaron commented 6 years ago

I tried this same file, if I print the normals I get what is expected:

[[  1.   2.   3.]
 [  4.   5.   6.]
 [  7.   8.   9.]
 [ 10.  11.  12.]
 [ 13.  14.  15.]]

Did you change the code? Are you sure that you did not change anything in the plyparser code? I am not able to replicate the NaNs you showed earlier.

Myzhencai commented 6 years ago

@MathGaron, no, I did not change anything. What you shared is the result after the first line, right? My issue is with the second one. When I run the first one, it is just like yours, but when I run the second one it shows NaN and the overflow warning.

```python
normals = plyparser.get_vertex_normals()
normals = normals / np.linalg.norm(normals, axis=1)[:, np.newaxis]
```

MathGaron commented 6 years ago

Can you give me your numpy version, and your result for the normals right after the normalisation (after the second line)?

Myzhencai commented 6 years ago

@MathGaron, sorry for the delay; 10 hours ago I went back to the students' dormitory. Here is the result you wanted. By the way, I made a small change to the .ply file in the 4th picture (no change in the first three pictures!!): (screenshots attached)

Myzhencai commented 6 years ago

@MathGaron, could this be a compilation issue where np.float32 reserves too little memory for the computed numbers??

MathGaron commented 6 years ago

Ok, I did not notice it at first, but you changed the normals array type to float64, while it should be read as an array of float32; that may explain the weird values you get in your arrays. Read as float32, your normals should be computed normally, and linalg should not produce any 0s or huge values.
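The mismatch described here is easy to demonstrate: reinterpreting a float32 buffer as float64 halves the element count and scrambles the values, which matches the "3 numbers with a warning" vs "2 numbers with one missing" observation above. A minimal, self-contained illustration, not code from the project:

```python
import numpy as np

# Four float32 values, as they might sit in a .ply vertex buffer.
src = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)

# Correct view: read the raw 16 bytes back as float32 -> 4 values.
ok = np.frombuffer(src.tobytes(), dtype=np.float32)

# Wrong view: the same 16 bytes reinterpreted as float64 yield only
# 2 elements, with values unrelated to the originals.
bad = np.frombuffer(src.tobytes(), dtype=np.float64)
```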

Myzhencai commented 6 years ago

@MathGaron, hi, you seem to have misunderstood what I said: if I do not change the float32, I get the first two pics; if I change it, I get the last 2 pics. So I guess it is a compilation issue. :-)

Myzhencai commented 6 years ago

@MathGaron, should I change float32 to float64? If I do, it seems I will lose a value.

Myzhencai commented 6 years ago

The same issue occurred when I changed to a new GPU, so this should be a dependency or compilation issue.

MathGaron commented 6 years ago

The GPU has nothing to do with the .ply loading. I have tested with the same numpy version, on different versions of python, and everything seems to work on my side.

So first, the array has to be read as float32, or else you will have a bug. Also, I saw that you are printing with normals[0:5, -5:-2]. There is something wrong here: the normal array is supposed to be Nx3, as I specified earlier, and this could be the reason why linalg.norm fails.
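A quick sanity check along these lines might look like the sketch below; `check_normals` is a hypothetical helper written for this thread, not part of the repository:

```python
import numpy as np

def check_normals(normals):
    """Validate a vertex-normal array before normalization.

    Catches the two failure modes discussed here: a shape that is not
    Nx3, and zero or non-finite norms that would break the division.
    """
    assert normals.ndim == 2 and normals.shape[1] == 3, \
        "expected an Nx3 array, got shape %s" % (normals.shape,)
    norms = np.linalg.norm(normals.astype(np.float64), axis=1)
    assert np.isfinite(norms).all(), "non-finite values in normals"
    assert (norms > 0).all(), "zero-length normals would divide by zero"
    return norms

good = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
norms = check_normals(good)  # passes; norms are sqrt(14) and sqrt(77)
```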

I would recommend downloading the code/dataset again to make sure that you start from a fresh version. I will close this issue, as I am unable to replicate your bug.

Myzhencai commented 6 years ago

OK, thanks

Myzhencai commented 6 years ago

@MathGaron, can you share your plyfile version?

MathGaron commented 6 years ago

I am using 0.4; I see that there is a 0.5 version that I never tested. Are you using the same version? Do you still have the same problem after recloning the repo and downloading the datasets?

Myzhencai commented 6 years ago

Well, I use the 0.5 version, and I just pip installed it. Could this be the reason? I will run a test tomorrow. I think my issue is just a plyparser issue, so I will focus on it.

Myzhencai commented 6 years ago

@MathGaron, yeah, same issues. I have made a wiki for your project; if you think it is useful, maybe you can put it in your wiki or readme to make everything specific :-). https://github.com/Myzhencai/deeptracking/wiki/ALL-DETAILS-OF-deeptracking-project. If I mis-cited anything, please let me know. Thanks for your help; I will keep working on this project to make it work.

MathGaron commented 6 years ago

Ok, well, I would definitely try 0.4, as 0.5 seems to have some breaking changes, and indeed the problem seems to be related to the .ply file reading. Tell me how it goes so I can update the documentation.

In your wiki, you should maybe mention that you had some difficulty generating the data; without data, you cannot expect proper training.

Also, there is currently a docker file in the project for the training part, but none for the dataset generation, which could be a nice add-on/documentation for the installation process.

Myzhencai commented 6 years ago

Yesterday I generated docker images for almost all the dependencies, but I failed to commit them. In a few days (tomorrow I go back home from school) I will push them to Docker Hub and share them with you. They are just based on your docker image.

Myzhencai commented 6 years ago

@MathGaron, the same thing happens even after I changed the version to 0.4; maybe I need to modify plyfile itself.

Myzhencai commented 6 years ago

@MathGaron, I just installed the dependencies for plyparser.py and tested it with this file: https://github.com/mingliangfu/deeptracking/blob/master/tests/data/basic.ply and this one: https://github.com/mingliangfu/deeptracking/blob/master/tests/data/basic_color.ply. The first one is normal, but the second one is weird; maybe you can test it too. Here is my result with plyfile version 0.4: (screenshots attached)

Myzhencai commented 6 years ago

@MathGaron, hi, thanks for the help. I finally found that it is the numpy version that causes the issue: we cannot just pip install numpy, because numpy 1.14.0 produces the error. Have a nice day.
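Based on the versions identified in this thread, a cautious install could pin both packages. The exact pins are assumptions; the thread only establishes that numpy 1.14.0 breaks the .ply field extraction and that plyfile 0.4 was the version the author tested:

```shell
# numpy 1.14.0 changed structured/multi-field array behaviour, which
# this thread identifies as the source of the bad normal values, and
# plyfile 0.5 appears to have breaking changes vs the tested 0.4.
pip install 'numpy<1.14' 'plyfile==0.4'
```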

MathGaron commented 6 years ago

Ok, good catch, I will document this. Thank you for the update!