krasi0 opened this issue 4 years ago
I've just tried running the Docker example in https://github.com/KhalilMrini/LAL-Parser#inference on my side. Everything works fine. Did you run the same commands or did you change anything? Also, why do you say "I am using benepar[cpu] and tensorflow==2.0.0b1 since tensorflow==2.0.0 could not be found"? Did you get any error messages other than RuntimeError: The size of tensor a (39) must match the size of tensor b (35) at non-singleton dimension 0?
FYI:
root@4a133a3a9d40:/LAL-Parser# pip freeze
absl-py==0.9.0
asn1crypto==0.24.0
astor==0.8.1
benepar==0.1.2
boto3==1.14.2
botocore==1.17.2
certifi==2020.4.5.2
chardet==3.0.4
click==7.1.2
cryptography==2.1.4
Cython==0.29.20
dataclasses==0.7
docutils==0.15.2
filelock==3.0.12
gast==0.3.3
gdown==3.11.1
google-pasta==0.2.0
grpcio==1.29.0
h5py==2.10.0
idna==2.6
importlib-metadata==1.6.1
jmespath==0.10.0
joblib==0.15.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
keyring==10.6.0
keyrings.alt==3.0
Markdown==3.2.2
nltk==3.5
numpy==1.18.5
protobuf==3.12.2
pycrypto==2.6.1
pygobject==3.26.1
PySocks==1.7.1
python-dateutil==2.8.1
pytorch-pretrained-bert==0.6.2
pyxdg==0.25
regex==2020.6.8
requests==2.23.0
s3transfer==0.3.3
sacremoses==0.0.43
SecretStorage==2.3.1
sentencepiece==0.1.83
six==1.15.0
tensorboard==2.0.0
tensorboardX==2.0+022f060
tensorflow-estimator==1.14.0
tensorflow-gpu==1.14.0
termcolor==1.1.0
tokenizers==0.5.2
torch==1.1.0
tqdm==4.45.0
transformers==2.8.0
urllib3==1.25.9
Werkzeug==1.0.1
wrapt==1.12.1
zipp==3.1.0
@Franck-Dernoncourt thanks for your quick reply!
Some more info: I am testing on a machine without a decent GPU, so I have to use the CPU only. That's why I've installed benepar[cpu]. As for tensorflow 2.0.0, this time I started from another docker image (tensorflow/tensorflow) and I seem to be using the correct version:
root@b73f4cb64ace:~/LAL-Parser# pip freeze
absl-py==0.9.0
asn1crypto==0.24.0
astor==0.8.1
astunparse==1.6.3
benepar==0.1.2
boto3==1.14.2
botocore==1.17.2
cachetools==4.1.0
certifi==2020.4.5.2
chardet==3.0.4
click==7.1.2
cryptography==2.1.4
Cython==0.29.20
dataclasses==0.7
docutils==0.15.2
filelock==3.0.12
gast==0.2.2
gdown==3.11.1
google-auth==1.17.2
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.29.0
h5py==2.10.0
idna==2.9
importlib-metadata==1.6.1
jmespath==0.10.0
joblib==0.15.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
keyring==10.6.0
keyrings.alt==3.0
Markdown==3.2.2
nltk==3.5
numpy==1.18.5
oauthlib==3.1.0
opt-einsum==3.2.1
protobuf==3.12.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycrypto==2.6.1
pygobject==3.26.1
PySocks==1.7.1
python-dateutil==2.8.1
pytorch-pretrained-bert==0.6.2
pyxdg==0.25
regex==2020.6.8
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.6
s3transfer==0.3.3
sacremoses==0.0.43
scipy==1.4.1
SecretStorage==2.3.1
sentencepiece==0.1.83
six==1.15.0
tensorboard==2.0.2
tensorboard-plugin-wit==1.6.0.post3
tensorboardX @ git+https://github.com/lanpa/tensorboardX@022f060f9438c4f5e71880bccb474671f5db4450
tensorflow==2.0.0
tensorflow-estimator==2.0.1
termcolor==1.1.0
tokenizers==0.5.2
torch==1.1.0
tqdm==4.45.0
transformers==2.8.0
urllib3==1.25.9
Werkzeug==1.0.1
wrapt==1.12.1
zipp==3.1.0
But this time I am getting a different error:
root@b73f4cb64ace:~/LAL-Parser# source parse_quick.sh
Not using CUDA!
Loading model from best_parser.pt...
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:46: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Parsing sentences...
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]Illegal instruction (core dumped)
Can you try:
docker run --interactive --tty ubuntu:18.04 bash
apt update; apt install -y git nano wget htop python3 python3-pip unzip; git clone https://github.com/KhalilMrini/LAL-Parser
cd LAL-Parser/
alias pip=pip3; source requirements.sh
apt-get install -y libhdf5-serial-dev=1.8.16+docs-4ubuntu1.1
# Testing the Neural Adobe-UCSD Parser inference
alias python=python3
source parse.sh
?
Yeah, without any changes to the requirements.sh script, I get the GPU versions of benepar and tensorflow, which don't work here. I need the CPU ones. This is the error that I get again:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Not using CUDA!
/usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /LAL-Parser/src_joint/hpsg_decoder.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
In file included from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1832:0,
from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/arrayobject.h:4,
from /root/.pyxbld/temp.linux-x86_64-3.6/pyrex/hpsg_decoder.c:606:
/usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with " \
^~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/hpsg_decoder.c: In function '__pyx_pf_12hpsg_decoder_decode.isra.12':
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/hpsg_decoder.c:5688:15: warning: '__pyx_v_root_head' may be used uninitialized in this function [-Wmaybe-uninitialized]
__pyx_t_136 = __pyx_v_root_head;
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /LAL-Parser/src_joint/const_decoder.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
In file included from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1832:0,
from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/arrayobject.h:4,
from /root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:606:
/usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with " \
^~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c: In function '__pyx_pf_13const_decoder_decode.isra.9':
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:3105:20: warning: '__pyx_v_oracle_label_index' may be used uninitialized in this function [-Wmaybe-uninitialized]
__pyx_t_35 = __pyx_v_oracle_label_index;
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:1366:91: warning: '__pyx_pybuffernd_oracle_split_chart.diminfo[1].strides' may be used uninitialized in this function [-Wmaybe-uninitialized]
#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1)
^
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:2079:21: note: '__pyx_pybuffernd_oracle_split_chart.diminfo[1].strides' was declared here
__Pyx_LocalBuf_ND __pyx_pybuffernd_oracle_split_chart;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:3391:40: warning: '__pyx_pybuffernd_oracle_split_chart.diminfo[1].shape' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (__pyx_t_45 < 0) __pyx_t_45 += __pyx_pybuffernd_oracle_split_chart.diminfo[1].shape;
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:2079:21: warning: '__pyx_pybuffernd_oracle_split_chart.diminfo[0].strides' may be used uninitialized in this function [-Wmaybe-uninitialized]
__Pyx_LocalBuf_ND __pyx_pybuffernd_oracle_split_chart;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/.pyxbld/temp.linux-x86_64-3.6/pyrex/const_decoder.c:2077:21: warning: '__pyx_pybuffernd_oracle_label_chart.diminfo[0].strides' may be used uninitialized in this function [-Wmaybe-uninitialized]
__Pyx_LocalBuf_ND __pyx_pybuffernd_oracle_label_chart;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Loading model from best_parser.pt...
Downloading: 100%|###################################################################################| 798k/798k [00:02<00:00, 368kB/s]
Downloading: 100%|#####################################################################################| 761/761 [00:00<00:00, 266kB/s]
Downloading: 100%|################################################################################| 1.44G/1.44G [17:03<00:00, 1.41MB/s]
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:46: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Parsing sentences...
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]Illegal instruction (core dumped)
Note that I've successfully used other BERT based models in this way in the past.
Same here, when I use Docker, everything runs on CPU.
Can you try running the following single command:
docker run --interactive --tty ubuntu:18.04 \
bash -c '
shopt -s expand_aliases;
apt update;
apt install -y git nano wget htop python3 python3-pip unzip;
git clone https://github.com/KhalilMrini/LAL-Parser;
cd LAL-Parser/;
alias pip=pip3;
source requirements.sh;
apt-get install -y libhdf5-serial-dev;
alias python=python3;
source parse.sh;
exec bash
'
and see whether it works? Works for me:
[...]
Parsing sentences: 100%|################################################################################################################################################################################| 1/1 [00:02<00:00, 2.80s/it]
Output written to: output_synconst
Output written to: output_syndephead
Output written to: output_syndeplabel
@Franck-Dernoncourt It again died with Illegal instruction (core dumped). :(
Now I will try updating docker to the latest version first, since the one that I have is slightly old at 19.03.0.
Another issue is that every attempt takes ages (while downloading the models and python packages) since my Internet connection here is very slow.
So could you please somehow export your final docker image and upload it somewhere, so that I could use it in offline mode (without access to the internet after the first download)?
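For reference, one way to export a prepared image is docker commit plus docker save (a sketch; the container name, image tag, and file name below are made up for illustration):

```shell
# List recent containers to find the one with all dependencies installed
docker ps -a

# Snapshot that container's filesystem into a new image
# ("lal-parser-container" and "lal-parser:cpu" are placeholder names)
docker commit lal-parser-container lal-parser:cpu

# Export the image to a tarball that can be copied to another machine
docker save -o lal-parser-cpu.tar lal-parser:cpu

# On the offline machine, load it back and start a shell in it
docker load -i lal-parser-cpu.tar
docker run --interactive --tty lal-parser:cpu bash
```

Note that `docker save` only captures the image layers; anything downloaded at runtime (e.g. the transformers model cache) would need the commit to happen after a successful run.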
After some reading, it could also be CPU-specific. This is the output that I get from inside the docker instance:
root@fb2a5d9a45b7:/# grep flags -m1 /proc/cpuinfo | cut -d ":" -f 2 | tr '[:upper:]' '[:lower:]' | { read FLAGS; OPT="-march=native"; for flag in $FLAGS; do case "$flag" in "sse4_1" | "sse4_2" | "ssse3" | "fma" | "cx16" | "popcnt" | "avx" | "avx2") OPT+=" -m$flag";; esac; done; MODOPT=${OPT//_/\.}; echo "$MODOPT"; }
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mavx -mavx2
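Broken out for readability, that one-liner does roughly the following (same logic, just reformatted):

```shell
# Read the CPU feature flags reported by the kernel, lowercased
FLAGS=$(grep -m1 '^flags' /proc/cpuinfo | cut -d ':' -f 2 | tr '[:upper:]' '[:lower:]')

# Translate the SIMD-related flags into the matching gcc -m options
OPT="-march=native"
for flag in $FLAGS; do
  case "$flag" in
    sse4_1 | sse4_2 | ssse3 | fma | cx16 | popcnt | avx | avx2)
      OPT+=" -m$flag" ;;
  esac
done

# gcc spells sse4.1/sse4.2 with a dot, not an underscore
echo "${OPT//_/.}"
```

If a prebuilt tensorflow wheel was compiled assuming an instruction set extension that is missing from this list, running it produces exactly a SIGILL ("Illegal instruction").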
What about you?
A slightly different but basically the same error with docker v19.03.11:
Loading model from best_parser.pt...
Downloading: 100%|###################################################################################| 798k/798k [00:01<00:00, 448kB/s]
Downloading: 100%|#####################################################################################| 761/761 [00:00<00:00, 718kB/s]
Downloading: 100%|#################################################################################| 1.44G/1.44G [28:58<00:00, 829kB/s]
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:46: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Parsing sentences...
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]
parse.sh: line 11: 7372 Illegal instruction (core dumped) python3 src_joint/main.py parse --dataset ptb --save-per-sentences 1000 --eval-batch-size 50 --input-path example_sentences.txt --output-path-synconst output_synconst --output-path-syndep output_syndephead --output-path-synlabel output_syndeplabel --embedding-path data/glove.gz --model-path-base best_parser.pt
Illegal instruction (core dumped) can sometimes be caused by running out of RAM: can you check whether you're hitting your RAM limit?
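A minimal way to check this on a Linux host is to watch /proc/meminfo (or `free`) in a second terminal while parse.sh runs:

```shell
# Show total and currently available memory (in kB); if MemAvailable
# collapses toward zero while the parser runs, RAM is the suspect.
grep -E '^(MemTotal|MemAvailable)' /proc/meminfo

# Or poll once a second while parse.sh runs in another terminal:
# watch -n1 free -m
```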
dernoncourt@ilcomp0:~$ docker -v
Docker version 18.03.0-ce, build 0520e24
root@4ffa4e2345d1:/# grep flags -m1 /proc/cpuinfo | cut -d ":" -f 2 | tr '[:upper:]' '[:lower:]' | { read FLAGS; OPT="-march=native"; for flag in $FLAGS; do case "$flag" in "sse4_1" | "sse4_2" | "ssse3" | "fma" | "cx16" | "popcnt" | "avx" | "avx2") OPT+=" -m$flag";; esac; done; MODOPT=${OPT//_/\.}; echo "$MODOPT"; }
-march=native -mssse3 -mfma -mcx16 -msse4.1 -msse4.2 -mpopcnt -mavx -mavx2
You can save the Docker image before running source parse.sh to test on your side.
The machine has more than 20 GB of RAM. It doesn't even hit the swap when I run the parse.sh script.
The only difference that I notice is the lack of -mfma in my cpuinfo output.
I may have to build tensorflow from scratch as a last resort. Nothing else comes to mind... :(
Very interesting. So much for Docker being reproducible :-/ At this point it looks like you'll have to find which line in python3 src_joint/main.py parse
causes the issue, and then ask the tensorflow or docker people what's going on. Or, as you were considering, try different tensorflow versions while praying the code still runs, or build tensorflow from scratch.
Otherwise you could try to run the code in a virtual Python environment or an old-fashioned virtual machine.
Sorry about that! Please keep us posted how it goes, I'm curious. Thanks for your patience!
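One way to find the faulting line, assuming the crash happens inside Python rather than at interpreter startup, is the standard-library faulthandler, which prints a Python traceback even on fatal signals such as the SIGILL behind "Illegal instruction":

```shell
# Sanity check that -X faulthandler actually enables the handler
python3 -X faulthandler -c 'import faulthandler; print(faulthandler.is_enabled())'  # prints: True

# Then run the parser the same way parse.sh does, with the handler on,
# so the crash dumps the offending Python line before the core dump:
# python3 -X faulthandler src_joint/main.py parse ... (same arguments as in parse.sh)
```

The dumped frame would at least show whether the SIGILL comes from tensorflow, torch, or the Cython decoders.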
Sure! Thanks for trying to help! Much appreciated. BTW, I tried the Python virtual env approach and still got the same error. :( There must be something in the default tensorflow build that doesn't like my CPU (AMD Ryzen 9 3900X)...
I am at a loss. Compiled TF from source (it took ages) and still getting:
source parse_quick.sh
Not using CUDA!
Loading model from best_parser.pt...
/home/krasi0/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:46: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Parsing sentences...
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]
Illegal instruction (core dumped)
Could it be a broken download of the big model? What's your md5 checksum?
$ md5sum best_parser.pt
e5f5c7758d0ba9cd63bf16c5ffe2b5d6 best_parser.pt
Additionally, @Franck-Dernoncourt, could you upload the contents of output_synconst, output_syndephead, and output_syndeplabel somewhere? I'd like to have a look at the generated parses.
Same hash here:
docker run --interactive --tty ubuntu:18.04 bash
apt update; apt install -y git nano wget htop python3 python3-pip unzip; git clone https://github.com/KhalilMrini/LAL-Parser
cd LAL-Parser/
alias pip=pip3; source requirements.sh
md5sum best_parser.pt
e5f5c7758d0ba9cd63bf16c5ffe2b5d6 best_parser.pt
output_synconst_0.txt:
(S (NP (EX There)) (VP (VBZ is) (NP (DT a) (JJ small) (NN blue) (NN car)) (PP (IN near) (NP (DT the) (NN house)))) (. .))
(S (NP (DT The) (NN man)) (VP (VBZ is) (VP (VBG running) (PP (IN on) (NP (DT the) (NN mountain))))) (. .))
(S (NP (PRP I)) (VP (VBP ate) (NP (NP (DT the) (NNS blueberries) (CC and) (NNS apples)) (SBAR (WHNP (IN that)) (S (NP (PRP I)) (VP (VBD purchased) (NP (NN yesterday) (. .))
output_syndephead_0.txt:
[2, 0, 6, 6, 6, 2, 2, 9, 7, 2]
[2, 4, 4, 0, 4, 7, 5, 4]
[2, 0, 4, 2, 4, 4, 9, 9, 4, 9, 2]
output_syndeplabel_0.txt:
['expl', 'root', 'det', 'amod', 'nn', 'nsubj', 'prep', 'det', 'pobj', 'punct']
['det', 'nsubj', 'aux', 'root', 'prep', 'det', 'pobj', 'punct']
['nsubj', 'root', 'det', 'dobj', 'cc', 'conj', 'dobj', 'nsubj', 'rcmod', 'tmod', 'punct']
Looks like you might have to narrow down which tensorflow call is causing the issue and report it to the tensorflow maintainers. Or you could try a virtual machine, in case it masks whatever tensorflow doesn't enjoy in your hardware configuration.
@KhalilMrini thanks for your great work! Could the following error be due to a recently changed dependency? Note: I am using benepar[cpu] and tensorflow==2.0.0b1 since tensorflow==2.0.0 could not be found.
$ source parse_quick.sh
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Not using CUDA!
Loading model from best_parser.pt...
/home/user/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Parsing sentences...
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]/pytorch/aten/src/ATen/native/LegacyDefinitions.cpp:29: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
/pytorch/aten/src/ATen/native/TensorAdvancedIndexing.cpp:543: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
/pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead.
Parsing sentences: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "src_joint/main.py", line 788, in <module>
main()
File "src_joint/main.py", line 784, in main
args.callback(args)
File "src_joint/main.py", line 711, in run_parse
syntree, _ = parser.parse_batch(tagged_sentences)
File "/home/user/LAL-Parser/src_joint/KM_parser.py", line 1771, in parse_batch
annotations, self.current_attns = self.encoder(emb_idxs, pre_words_idxs, batch_idxs, extra_content_annotations=extra_content_annotations)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/LAL-Parser/src_joint/KM_parser.py", line 1170, in forward
res, current_attns = attn(res, batch_idxs)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/LAL-Parser/src_joint/KM_parser.py", line 404, in forward
return self.layer_norm(outputs + residual), attns_padded
RuntimeError: The size of tensor a (39) must match the size of tensor b (35) at non-singleton dimension 0