Looks like this is related to https://discuss.pytorch.org/t/torch-from-numpy-not-support-negative-strides/3663
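For context, a minimal sketch of the failure being discussed (values are made up; only numpy and torch are assumed): np.sort(...)[::-1] returns a reversed view with a negative stride, which torch.from_numpy rejects, while a .copy() materializes a contiguous array it accepts.
import numpy as np
import torch

sent_len = np.array([5, 2, 9, 7])

# Descending sort via a reversed slice: this is a view with a negative stride
sent_len_sorted = np.sort(sent_len)[::-1]
print(sent_len_sorted.strides)               # (-8,) on a 64-bit build

# torch.from_numpy(sent_len_sorted)          # ValueError: negative strides are not supported

# .copy() allocates a fresh, contiguous array with positive strides
lengths = torch.from_numpy(sent_len_sorted.copy())   # works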
I also get the exact same error, even on the CPU. I'm using Python 2.7 and Torch 1.0 without CUDA.
+1 on CPU, Python 3.6, torch==1.0.0
Any resolution to the stride error?
Wait for this PR to be merged: https://github.com/facebookresearch/InferSent/pull/100
or alter models.py manually by adding the line sent_len_sorted = sent_len_sorted.copy() here:
# Sort by length (keep idx)
sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
sent_len_sorted = sent_len_sorted.copy()
idx_unsort = np.argsort(idx_sort)
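As a side note on the surrounding lines: idx_unsort is the usual sort/unsort trick, where np.argsort(idx_sort) gives the inverse permutation that restores the original sentence order after processing in descending-length order. A tiny sketch with made-up lengths:
import numpy as np

sent_len = np.array([5, 2, 9, 7])
idx_sort = np.argsort(-sent_len)        # permutation that sorts lengths descending
idx_unsort = np.argsort(idx_sort)       # inverse permutation

sorted_len = sent_len[idx_sort]         # array([9, 7, 5, 2])
restored = sorted_len[idx_unsort]       # back to array([5, 2, 9, 7])
assert (restored == sent_len).all()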
It worked, thanks!
Merged PR. Thanks!
This error is still occurring despite the PR being merged. Can anyone help me find a solution to this problem?
@sashaostr it still doesn't work with the latest version of this repo
@fooSynaptic I didn't try the latest version; I added the fix manually and it worked for me. The PR makes the same change, so it's strange, but in any case the manual fix worked:
wait for this PR to be merged: #100 or alter models.py manually - add this line
sent_len_sorted = sent_len_sorted.copy()
here:
# Sort by length (keep idx)
sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
sent_len_sorted = sent_len_sorted.copy()
idx_unsort = np.argsort(idx_sort)
@sashaostr thanks, I modified a lot of the source code and it works for me. I believe it's a data-processing issue, since I use a different dataset.
@kaumilturabit, I added copy() like this and it works:
sent_len, idx_sort = np.sort(sent_len)[::-1].copy(), np.argsort(-sent_len)
wait for this PR to be merged: #100 or alter models.py manually - add this line
sent_len_sorted = sent_len_sorted.copy()
here:
# Sort by length (keep idx)
sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
sent_len_sorted = sent_len_sorted.copy()
idx_unsort = np.argsort(idx_sort)
Thanks for the answer, and also please change sent_len to sent_len_sorted in the later part of the code. (In case someone missed that, like me :))
Convert the numpy array to np.int16 or np.int32 before converting it to a tensor.
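That works for the same reason the .copy() fix does: astype allocates a new array, so the negative stride of the reversed view disappears. A minimal sketch under that assumption (the int32 choice is illustrative; pick a dtype that fits your lengths):
import numpy as np
import torch

sent_len = np.array([5, 2, 9, 7])
sent_len_sorted = np.sort(sent_len)[::-1]        # reversed view, negative stride

# astype (with its default copy=True) returns a freshly allocated,
# positively strided array, so torch.from_numpy accepts it
lengths = torch.from_numpy(sent_len_sorted.astype(np.int32))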
This also works for me, thanks very much.
sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
sent_len_sorted = sent_len_sorted.copy()
idx_unsort = np.argsort(idx_sort)
Trying to use the pretrained InferSent2 model (i.e. fastText) to encode sentences. The pretrained model works perfectly on CPU (where I have torch==0.4.1). However, it crashes with the CUDA backend (where I have torch 1.0.0.dev20181017).