Open zhenwwang opened 5 years ago
Hi,
Thanks for reporting the issues.
I have just pushed a commit that fixes some minor issues that occurred on my end. Please give it a shot; [ERROR 01] should be gone now.
[ERROR 02] did not happen on my end. I cleaned up all preprocessed data and restarted from scratch, and still got no such error. When saving the variable all_source in preprocess.py (at line 152), the format is already np.int, so when loading in data.py it should stay int by default. Maybe you are using a newer version of PyTorch that changed the default behavior? I am using pytorch 1.0.1.post2.
You might want to try something like this at lines 29-33 in data.py (casting to int64, since PyTorch requires index tensors to be long):
self.all_source = torch.from_numpy(self.all_source.astype(np.int64))
self.all_target = torch.from_numpy(self.all_target.astype(np.int64))
self.source = torch.from_numpy(self.source.astype(np.int64))
self.target = torch.from_numpy(self.target.astype(np.int64))
self.label = torch.from_numpy(self.label.astype(np.int64))
For your own workaround, I suggest avoiding torch.uint8: besides being too small to hold token indices, a uint8 index tensor is interpreted by PyTorch as a boolean mask rather than a list of indices, which is exactly why [ERROR 03] complains about a mask shape mismatch.
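Here is a minimal standalone sketch of what goes wrong (not the repo's code; the shapes are borrowed from [ERROR 03]):

import torch

# Stand-in for self.char_idx: a [34292, 16] lookup table, as in [ERROR 03].
char_idx = torch.zeros(34292, 16, dtype=torch.long)
idx = torch.randint(0, 34292, (320,))  # 320 token indices, default dtype int64

print(char_idx[idx].shape)  # torch.Size([320, 16]) -- long indices work

# A uint8 tensor used as an index is treated as a boolean mask over dim 0,
# so PyTorch compares the mask shape [320] against the indexed tensor's
# shape [34292, 16] and raises the shape-mismatch IndexError from [ERROR 03]:
# char_idx[idx.type(torch.uint8)]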
I just made another push that forces the indices to long format.
My hypothesis is that numpy's int defaults to int64 on my machine, which is why it worked on my end, while on some platforms (e.g. Windows) np.int can default to int32. And PyTorch requires index tensors to be int64/long.
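For example (a small sketch assuming a platform where np.int maps to int32, as on Windows):

import numpy as np
import torch

arr = np.arange(5, dtype=np.int32)  # what np.int can give you on Windows
t = torch.from_numpy(arr)           # from_numpy preserves dtype: torch.int32
table = torch.zeros(10, 3)

# table[t]  # IndexError: tensors used as indices must be long, byte or bool
print(table[t.long()].shape)  # cast to int64/long and it works: torch.Size([5, 3])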
Please pull again and give it a shot.
@t-li Thanks for your reply.
I just pulled again, and these problems are solved.
It works well now.
Thanks.
I got this error in your program and I don't understand it. Please help; thanks in advance!
[ERROR 01]
Traceback (most recent call last):
  File "train.py", line 305, in <module>
  File "./classifier\local_classifier.py", line 10, in <module>
    from backward_hooks import *
ModuleNotFoundError: No module named 'backward_hooks'
I just commented out that import, and then the second error occurred:
[ERROR 02]
  File "C:\Users\Wang\Desktop\8_12\layer_augmentation-master\data.py", line 251, in __getitem__
    char1 = self.char_idx[all_source.contiguous()].view(batch_l, source_l, token_l)
IndexError: tensors used as indices must be long, byte or bool tensors
I tried to fix this by adding .type(torch.uint8), like this:
char1 = self.char_idx[all_source.contiguous().view(-1).type(torch.uint8)].view(batch_l, source_l, token_l)
but the errors continued:
[ERROR 03]
  File "C:\Users\Wang\Desktop\8_12\layer_augmentation-master\data.py", line 252, in __getitem__
    char1 = self.char_idx[all_source.contiguous().view(-1).type(torch.uint8)].view(batch_l, source_l, token_l)
IndexError: The shape of the mask [320] at index 0 does not match the shape of the indexed tensor [34292, 16] at index 0
What should I do?