AntNLP / gnn-dep-parsing

[Error] File "_dynet.pyx", line 1907, in _dynet.Expression.__mul__ NotImplementedError #7

Closed: Jivnesh closed this issue 3 years ago

Jivnesh commented 3 years ago

I was trying to run this model on my custom data and got stuck with this error. I am not sure whether it is due to a DyNet version issue. I am attaching the error log below; it would be great if you could point me to the likely cause of this error and some directions to fix it. Thanks in advance! I have installed dynet==2.0 with the Eigen you provided in the latest issue. My GPU has 11 GB of RAM, with CUDA 9.2.

Loaded config file successfully.
DYNET_MEM 512
DYNET_SEED 666
DATA_DIR ../san_data
TRAIN ../san_data/train.conll
DEV ../san_data/dev.conll
TEST ../san_data/test.conll
GLOVE ../san_data/cc.sanskrit.300.new.vec
CKPT_DIR ../ckpts/default
BEST_FILE ../ckpts/default/best.model
LAST_FILE ../ckpts/default/last.model
LOG_FILE ../ckpts/default/exper.log
PRED_DEV ../ckpts/default/dev.pred
PRED_TEST ../ckpts/default/test.pred
WARM 800
CHAR_DIM 0
WIN_SIZES [3,]
N_FILTER 100
LR 0.002
ADAM_BETA1 0.9
ADAM_BETA2 0.9
TRAIN_BATCH_SIZE 16
TEST_BATCH_SIZE 16
EPS 1e-12
MAX_ITER 150000
VALID_ITER 800
LR_DECAY 0.75
LR_ANNEAL 8000
WORD_DIM 300
WORD_DROP 0.33
TAG_DIM 100
TAG_DROP 0.33
CHAR_DROP 0.33
RNN_DROP 0.33
ENC_LAYERS 3
ENC_H_DIM 400
ARC_MLP_SIZE [800, 500]
REL_MLP_SIZE [800, 100]
MLP_BIAS True
MLP_DROP 0.33
ARC_DROP 0.2
GRAPH_LAYERS 1
LAMBDA1 0.8
LAMBDA2 1
[dynet] initializing CUDA
Request for 1 GPU ...
[dynet] Device Number: 0
[dynet]   Device name: GeForce RTX 2080 Ti
[dynet]   Memory Clock Rate (KHz): 7000000
[dynet]   Memory Bus Width (bits): 352
[dynet]   Peak Memory Bandwidth (GB/s): 616
[dynet]   Memory Free (GB): 10.4048/11.5514
[dynet]
[dynet] Device(s) selected: 0
[dynet] random seed: 666
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
../san_data/cc.sanskrit.300.new.vec
../san_data/cc.sanskrit.300.new.vec
Orthogonal pretrainer (400, 800) loss: 4.24e-30
Orthogonal pretrainer (400, 800) loss: 4.23e-30
Orthogonal pretrainer (400, 1200) loss: 3.47e-30
Orthogonal pretrainer (400, 1200) loss: 3.40e-30
Orthogonal pretrainer (400, 1200) loss: 3.35e-30
Orthogonal pretrainer (400, 1200) loss: 3.49e-30
Orthogonal pretrainer (500, 800) loss: 4.70e-29
Orthogonal pretrainer (100, 800) loss: 1.39e-22
Orthogonal pretrainer (500, 800) loss: 4.52e-29
Orthogonal pretrainer (100, 800) loss: 6.68e-23
Orthogonal pretrainer (500, 500) loss: 5.05e-09
Orthogonal pretrainer (500, 500) loss: 7.33e-03
Orthogonal pretrainer (500, 500) loss: 7.58e-07
Orthogonal pretrainer (500, 500) loss: 7.82e-09
Orthogonal pretrainer (100, 100) loss: 3.01e-10
Orthogonal pretrainer (100, 100) loss: 1.55e-10
Orthogonal pretrainer (100, 100) loss: 6.97e-11
Orthogonal pretrainer (100, 100) loss: 4.70e-12
09-18 21:10 - INFO - Experiment name: experiment
09-18 21:10 - INFO - Git SHA: 3fcd14
Traceback (most recent call last):
  File "train.py", line 169, in <module>
    main()
  File "train.py", line 110, in main
    loss, part_loss = decoder(vectors, masks, truth, cnt_iter, True, True)
  File "/media/guest/WorkSpace/Depling/gnn-dep-parsing/Parser_src/src/models/graph_nn_decoder.py", line 70, in __call__
    head_arc = self.head_arc_MLP(X, is_train)
  File "/home/jivnesh/anaconda3/envs/GNN_Dep/lib/python3.6/site-packages/antu/nn/dynet/modules/perceptron.py", line 39, in __call__
    return self.W[-1]*h + (self.b[-1] if self.bias else 0)
  File "_dynet.pyx", line 1907, in _dynet.Expression.__mul__
NotImplementedError
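For context: on dynet==2.0.x, `Expression.__mul__` accepts only another `Expression` or a plain Python number, and raises `NotImplementedError` for any other operand, such as a raw `Parameter` that was never wrapped with `dy.parameter()`. A minimal sketch of the same failure mode (the shapes here are illustrative, not taken from the model):

```python
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((500, 800))   # a Parameter object, not an Expression
h = dy.inputVector([0.0] * 800)     # an Expression

ok = dy.parameter(W) * h   # explicit wrap: works on dynet 2.0.x
bad = W * h                # Parameter * Expression: NotImplementedError
                           # on 2.0.x, works on dynet >= 2.1
```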
Jivnesh commented 3 years ago

This error can be reproduced using the following steps. I have created a conda environment with the following specification. GPU: GeForce RTX 2080 Ti with CUDA 9.2 and python=3.6.13.

pip install antu==0.0.5a0
pip install bidict==0.19.0
pip install overrides==3.0.0
pip install cython==0.29.17
pip install numpy==1.18.2
mkdir dynet-base
cd dynet-base

Download dynet==2.0.3 from here and eigen from here
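For completeness, a typical GPU build of DyNet 2.0.x from source followed DyNet's manual-install instructions along these lines (a sketch; the Eigen path is a placeholder):

```sh
cd dynet
mkdir build
cd build
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen -DPYTHON=`which python` -DBACKEND=cuda
make -j4
cd python
python ../../setup.py build --build-dir=.. --skip-build install
```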

Jivnesh commented 3 years ago

I solved it. The solution is to use dynet==2.1 instead of dynet==2.0.3.
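A likely explanation for why the upgrade helps: DyNet 2.1 deprecated `dy.parameter()` and allowed `Parameter` objects to be used directly as `Expression`s, and antu's perceptron (the `self.W[-1]*h` line in the traceback above) appears to rely on that implicit conversion, which 2.0.3's `Expression.__mul__` rejects. A quick sanity check after installing (hypothetical shapes):

```python
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((2, 3))
x = dy.inputVector([1.0, 2.0, 3.0])

# Multiplying a Parameter by an Expression directly: raises
# NotImplementedError on dynet==2.0.3, succeeds on dynet==2.1.
print((W * x).npvalue())
```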