Closed: visonpon closed this issue 5 years ago.
Sure, I'll add proper docs shortly. In the meantime you can get the processed NYU depth .csv files like so:
cd relative_depth
wget https://vl-lab.eecs.umich.edu/data/nips2016/data.tar.gz
tar -xzf data.tar.gz
rm data.tar.gz
The script to convert them to my format is actually a notebook called prepare_labels.ipynb. It's in the repo.
@A-Jacobson thanks~ I also want to know how to use my own data to produce the .csv file, since this file contains both the data and the labels. Can you provide some material about this~
Ya, that's from the authors of the paper. It contains 5 columns: xA, xB, yA, yB, and relative depth. These are the coordinates of two pixels in an image and the ordinal depth relation between them (-1, 0, 1). You can train with one set of labeled pixels per image, but there isn't much signal if you do that. It's better to train with NYU depth or something that's densely labeled (NYU depth has 800 labeled points per image when converted to this format), then fine-tune on your own sparsely labeled data.
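To make the format concrete, here is a minimal sketch of how one might sample such point pairs from a dense depth map. The column order (xA, xB, yA, yB, relation) follows the description above, but the exact format, the sign convention for the relation, and the equality tolerance are assumptions; check prepare_labels.ipynb for the authoritative version:

```python
import random

def make_pairs(depth, n_pairs=800, tol=0.02, seed=0):
    """Sample pixel pairs from a dense depth map (list of rows) and
    record their ordinal depth relation.

    relation is +1 if A is deeper than B, -1 if shallower, and 0 if
    the two depths are within a relative tolerance of each other.
    (Sign convention here is a guess, not taken from the repo.)
    """
    rng = random.Random(seed)
    h, w = len(depth), len(depth[0])
    rows = []
    for _ in range(n_pairs):
        xA, yA = rng.randrange(h), rng.randrange(w)
        xB, yB = rng.randrange(h), rng.randrange(w)
        dA, dB = depth[xA][yA], depth[xB][yB]
        if abs(dA - dB) <= tol * max(dA, dB):
            rel = 0
        else:
            rel = 1 if dA > dB else -1
        # one row per point pair, matching the 5-column .csv layout
        rows.append((xA, xB, yA, yB, rel))
    return rows
```

Each returned tuple is one row of the .csv; for NYU depth you would sample on the order of 800 pairs per image, as mentioned above.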
@A-Jacobson I got an indexError when i train:
z_A = output[index][0][x_A, y_A] # all "A" points
IndexError: When performing advanced indexing the indexing objects must be LongTensors or convertible to LongTensors
do you know why ?
Possibly, are you training on GPU?
yeah @A-Jacobson
Strange, can you give me the full stack trace, or check the type of the index variable when the error is thrown?
I'm sorry, check x_A and y_A. These should be LongTensors when doing the advanced indexing. If you're using your own data it's possible you changed the format. Are you using multiple points per image?
@A-Jacobson thanks for your patient answers~
I have modified z_A = output[index][0][x_A, y_A]
into
z_A = output[index][0][x_A][y_A]
and this error disappeared.
Yes, I use multiple points per image.
btw, I noticed the script that computes the loss:
log_loss = torch.mean(torch.log(1 + torch.exp(-target[target != 0] * pred_depth[target != 0])))
and at this line I got a size-mismatch error.
I have debugged it; the operations target[target != 0] and pred_depth[target != 0] trigger this error.
The full error trace is as follows:
Train on 745 samples
Traceback (most recent call last):
File "/home//train.py", line 55, in
z_A = output[index][0][x_A, y_A] and z_A = output[index][0][x_A][y_A] are not the same thing, which is likely why your sizes aren't matching. Can you check that x_A isn't None and is a LongTensor?
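The difference between the two indexing forms can be shown with a small standalone sketch. NumPy is used here only so the example is self-contained; PyTorch's advanced indexing with LongTensors follows the same elementwise-pairing semantics (the array values below are made up):

```python
import numpy as np

out = np.arange(12).reshape(3, 4)   # stand-in for one predicted depth map
x = np.array([0, 2, 1])             # row indices of the "A" points
y = np.array([1, 2, 0])             # column indices of the "A" points

# advanced indexing: pairs (x[i], y[i]) elementwise -> one scalar per point
paired = out[x, y]                  # shape (3,)

# chained indexing: selects whole ROWS by x, then rows of THAT result by y
chained = out[x][y]                 # shape (3, 4) -- not per-point values!
```

`paired` gives one depth value per labeled point, which is what the loss expects; `chained` returns reordered rows, so its shape no longer matches `target` and the loss line fails with a size mismatch.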
I modified it like you said and encountered the indexing error again, as expected.
x_A, x_B, y_A, y_B are LongTensors and their values do indeed correspond to the target.
The full error trace:
Train on 745 samples
Traceback (most recent call last):
File "/home//train.py", line 55, in
I tried indexing the output using a single coordinate (a, b) instead of a LongTensor (e.g. x_A), and everything is normal. I think it might be because it can't index with a multi-dimensional LongTensor?
The fastest way to figure this out would be to run training with NYU depth and see if it works. If so, check x_A and x_B; make sure the types and dimensionality are the same as with your dataset. Without seeing your data it's hard for me to help beyond that.
The only other thing I can think of is that fancy indexing was added to PyTorch recently. Check that you're using the latest version.
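For reference on the size-mismatch discussion above, here is a minimal sketch of that loss line with correctly matching shapes. NumPy stands in for torch (same masking semantics), and the `target`/`pred` values are made up; the key point is that both operands are filtered with the same boolean mask, so the shapes agree as long as `pred_depth` and `target` started out the same shape:

```python
import numpy as np

target = np.array([1.0, -1.0, 0.0, 1.0])   # 0 marks "equal depth" pairs
pred = np.array([0.5, -0.2, 3.0, -1.0])    # z_A - z_B for each pair

mask = target != 0
# both sides use the SAME mask, so target[mask] and pred[mask] align
log_loss = np.mean(np.log(1 + np.exp(-target[mask] * pred[mask])))
```

If the upstream indexing returns rows instead of per-point scalars, `pred` no longer has the same shape as `target` and this line is where the mismatch surfaces.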
thanks~ @A-Jacobson :+1:
Hi, I notice that the images and labels are loaded using the NYUDepth function. Can you show the data format, since I want to use my own data to train? And can you share the script that generates the label file? thanks~
train = NYUDepth(path+'train', path+'labels_train.pkl', transforms=transforms.ToTensor())
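As a rough idea of what the labels pickle might look like, here is a hypothetical sketch: one entry per image filename, each holding that image's sampled point pairs. The exact structure NYUDepth expects is defined by prepare_labels.ipynb and the dataset class in the repo, so treat the keys and tuple layout below as placeholders, not the real schema:

```python
import pickle

# Hypothetical labels structure: filename -> list of (xA, xB, yA, yB, relation)
labels = {
    "img_0001.png": [(10, 30, 20, 40, 1),
                     (5, 50, 5, 60, -1)],
    "img_0002.png": [(7, 9, 8, 10, 0)],
}

# Serialize the way labels_train.pkl presumably was; shown as an
# in-memory round trip here, but you would write the bytes to a file.
blob = pickle.dumps(labels)
restored = pickle.loads(blob)
```

Writing `blob` out as labels_train.pkl and placing the matching images under `path+'train'` would then mirror the NYUDepth call above, assuming the structure matches what the dataset class actually unpickles.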