darrenyaoyao / ResCNN_RelationExtraction

Deep Residual Learning for Weakly-Supervised Relation Extraction: https://arxiv.org/abs/1707.08866
105 stars 40 forks

input_x value-passing error, please help #1

Open li10141110 opened 6 years ago

li10141110 commented 6 years ago

Hello, I ran train.py and got the error below. I googled a lot but could not solve it; I hope you can reply when you have time. The error is as follows:

/home/lijing/Envs/information_ex/bin/python /home/lijing/Desktop/resre/ResCNN_RelationExtraction-master/ResidualCNN9/train.py

Parameters:
ALLOW_SOFT_PLACEMENT=True
BATCH_SIZE=64
CHECKPOINT_EVERY=100
DROPOUT_KEEP_PROB=0.5
EMBEDDING_DIM=50
EVALUATE_EVERY=1000
FILTER_SIZES=3
L2_REG_LAMBDA=0.0
LOG_DEVICE_PLACEMENT=False
NUM_EPOCHS=200
NUM_FILTERS=128
SEQUENCE_LENGTH=100

WordTotal= 114044 Word dimension= 1 RelationTotal: 53
Start loading training data.

Start loading testing data.

570088 96678
2017-10-09 11:13:25.942678: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-09 11:13:25.943045: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-09 11:13:25.943053: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-09 11:13:25.943059: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-09 11:13:25.943065: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Finish randomize data
Start Training
Writing to /home/lijing/Desktop/resre/ResCNN_RelationExtraction-master/ResidualCNN9/runs/1507518810

Initialize variables. Batch data
Traceback (most recent call last):
  File "/home/lijing/Desktop/resre/ResCNN_RelationExtraction-master/ResidualCNN9/train.py", line 152, in
    loss = train_step(x_batch, y_batch, p1_batch, p2_batch)
  File "/home/lijing/Desktop/resre/ResCNN_RelationExtraction-master/ResidualCNN9/train.py", line 110, in train_step
    feed_dict)
  File "/home/lijing/Envs/information_ex/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/lijing/Envs/information_ex/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (64, 100, 1) for Tensor 'input_x:0', which has shape '(?, 100, 50)'

Process finished with exit code 1

My environment is as follows: bleach (1.5.0) html5lib (0.9999999) jieba (0.39) Markdown (2.6.9) network (0.1) nltk (3.2.5) numpy (1.13.3) pip (9.0.1) protobuf (3.4.0) pudb (2017.1.1) Pygments (1.3) scikit-learn (0.19.0) setuptools (36.5.0) six (1.11.0) tensorflow (1.3.0) tensorflow-tensorboard (0.1.7) Werkzeug (0.12.2) wheel (0.30.0)
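The ValueError in the traceback means the array fed for `input_x` has rank-3 shape (64, 100, 1) while the placeholder was declared as (?, 100, 50). As a quick way to localize which feed is wrong before `sess.run`, here is a minimal sketch (the helper name `feed_compatible` is illustrative, not from the repo):

```python
import numpy as np

def feed_compatible(batch, expected_shape):
    """Return True when `batch` could be fed to a placeholder of
    `expected_shape`, where None means "any size" (TF's `?` dimension)."""
    if len(batch.shape) != len(expected_shape):
        return False
    return all(want is None or got == want
               for got, want in zip(batch.shape, expected_shape))

# The situation from the traceback: a (64, 100, 1) batch is fed to
# 'input_x', which was declared with shape (?, 100, 50).
print(feed_compatible(np.zeros((64, 100, 1)), (None, 100, 50)))   # → False
print(feed_compatible(np.zeros((64, 100, 50)), (None, 100, 50)))  # → True
```

The last dimension being 1 instead of 50 suggests each word was mapped to a single number rather than a 50-dimensional embedding vector.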

hsm207 commented 6 years ago

I just forked this repo and made some edits. It runs without errors on tensorflow 1.3.0. You can try my repo and let me know if you run into any problems.

li10141110 commented 6 years ago

@hsm207 Hi, how can you run this code without vector1?

li10141110 commented 6 years ago

@hsm207 Got it! Thank you very much. And may I have your WeChat ID so I can learn coding from you, please?

li10141110 commented 6 years ago

@hsm207 My WeChat ID is lijingx-

pvthuy commented 6 years ago

@hsm207 Did you get the error "ValueError: Cannot feed value of shape (64, 100, 1) for Tensor 'input_x:0', which has shape '(?, 100, 50)'" with the original code?

darrenyaoyao commented 6 years ago

I am sorry, I have been so busy that I didn't see the issue. My code was implemented on TensorFlow 1.0. I think the bug comes from changes between TensorFlow versions. If you know the details of the changes, please comment here or open a Pull Request, thanks. Otherwise I will check tonight.

hsm207 commented 6 years ago

@li10141110 I downloaded a GloVe word embedding file and renamed it vector1.txt. I've documented this in my repo.

hsm207 commented 6 years ago

@pvthuy no. I never came across that error in the original code.

You got that error because of a problem with the dimensions of your word embedding. The code expects the word embedding to have 50 dimensions, but you are only giving it 1 dimension.

If you did not modify any part of the code, then I suggest you look at the content of your vector1.txt file and make sure each line is of the form:

word_i dim_1 dim_2 ... dim_50
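A small sketch to verify that every line of vector1.txt is a word followed by 50 floats (the file name comes from this thread; the 50-dimension expectation comes from the error message, and the function name is illustrative):

```python
def validate_embedding_file(path, expected_dim=50):
    """Check that each line of an embedding file is `word dim_1 ... dim_50`.

    Returns a list of (line_number, token_count) for every malformed line;
    an empty list means the file looks fine.
    """
    bad = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            tokens = line.split()
            # Expect one word token plus expected_dim vector components.
            if len(tokens) != expected_dim + 1:
                bad.append((i, len(tokens)))
                continue
            try:
                [float(t) for t in tokens[1:]]  # components must be numeric
            except ValueError:
                bad.append((i, len(tokens)))
    return bad
```

Running this on the file behind the (64, 100, 1) error should flag lines carrying a single value per word instead of a 50-dimensional vector.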

li10141110 commented 6 years ago

@hsm207 Hi, my friend, what are your results (precision and recall)? My result (with the default hyperparameters) is not very good (P@100 is 0.66 and recall is 0.05).

hsm207 commented 6 years ago

@li10141110 I don't know... I also ran the model with the default hyperparameters for 200 epochs, but I don't know how to interpret the results.

Here's the last evaluation file:

re-9-128_precision_recall_201.txt

How did you compute the precision and recall?

li10141110 commented 6 years ago

The precision and recall are printed on the command line after evaluation.

li10141110 commented 6 years ago

For example:

Correct: 1008 Total: 96678 Accuracy: 0.0104263638056 1761
Precision: 0.0 Recall: 0.0
Precision: 0.821782178218 Recall: 0.0471323111868
Precision: 0.691542288557 Recall: 0.0789324247587
Precision: 0.614617940199 Recall: 0.105053946621
Precision: 0.573566084788 Recall: 0.130607609313
Precision: 0.542914171657 Recall: 0.154457694492
Precision: 0.509151414309 Recall: 0.173764906303
Precision: 0.496433666191 Recall: 0.197614991482
Precision: 0.474406991261 Recall: 0.215786484952
Precision: 0.460599334073 Recall: 0.235661555934
Precision: 0.445554445554 Recall: 0.253265190233
Precision: 0.434150772025 Recall: 0.271436683702
Precision: 0.416319733555 Recall: 0.283929585463
Precision: 0.407378939277 Recall: 0.300965360591
Precision: 0.390435403283 Recall: 0.310618966496
Precision: 0.379080612925 Recall: 0.323111868257
Precision: 0.369768894441 Recall: 0.336172629188
Precision: 0.359788359788 Recall: 0.347529812606

Process finished with exit code 0

hsm207 commented 6 years ago

This is the last output:

Correct: 0 Total: 96678 Accuracy: 0.0 1761
Precision: 0.0 Recall: 0.0
Precision: 0.009900990099009901 Recall: 0.0005678591709256105
Precision: 0.004975124378109453 Recall: 0.0005678591709256105
Precision: 0.006644518272425249 Recall: 0.001135718341851221
Precision: 0.004987531172069825 Recall: 0.001135718341851221
Precision: 0.007984031936127744 Recall: 0.002271436683702442
Precision: 0.008319467554076539 Recall: 0.0028392958546280523
Precision: 0.007132667617689016 Recall: 0.0028392958546280523
Precision: 0.008739076154806492 Recall: 0.003975014196479273
Precision: 0.008879023307436182 Recall: 0.004542873367404884
Precision: 0.008991008991008992 Recall: 0.005110732538330494
Precision: 0.010899182561307902 Recall: 0.0068143100511073255
Precision: 0.009991673605328892 Recall: 0.0068143100511073255
Precision: 0.009992313604919293 Recall: 0.0073821692220329355
Precision: 0.009992862241256246 Recall: 0.007950028392958546
Precision: 0.009327115256495669 Recall: 0.007950028392958546
Precision: 0.009369144284821987 Recall: 0.008517887563884156
Precision: 0.008818342151675485 Recall: 0.008517887563884156

This is the output with the best performance that I could find:

Num_epoch: 17 step 151001, loss 0.198583, acc 0.909
Correct: 1196 Total: 96678 Accuracy: 0.012370963404290532 1761
Precision: 1.0 Recall: 0.0005678591709256105
Precision: 0.6831683168316832 Recall: 0.039182282793867124
Precision: 0.6019900497512438 Recall: 0.06871095968199886
Precision: 0.5714285714285714 Recall: 0.097671777399205
Precision: 0.571072319201995 Recall: 0.13003975014196478
Precision: 0.5349301397205589 Recall: 0.1521862578080636
Precision: 0.49584026622296173 Recall: 0.1692220329358319
Precision: 0.4950071326676177 Recall: 0.1970471323111868
Precision: 0.4893882646691635 Recall: 0.2226007950028393
Precision: 0.4728079911209767 Recall: 0.24190800681431004
Precision: 0.45454545454545453 Recall: 0.25837592277115273
Precision: 0.43869209809264303 Recall: 0.27427597955706984
Precision: 0.42880932556203166 Recall: 0.2924474730266894
Precision: 0.4158339738662567 Recall: 0.30721181147075527
Precision: 0.40756602426837973 Recall: 0.32424758659852354
Precision: 0.4003997335109927 Recall: 0.34128336172629187
Precision: 0.3910056214865709 Recall: 0.35547984099943214
Precision: 0.3838918283362728 Recall: 0.37081203861442363

Result looks really strange to me.

li10141110 commented 6 years ago

@hsm207 I have the same result as yours, and my best performance is as follows:

Correct: 1008 Total: 96678 Accuracy: 0.0104263638056 1761
Precision: 0.0 Recall: 0.0
Precision: 0.821782178218 Recall: 0.0471323111868
Precision: 0.691542288557 Recall: 0.0789324247587
Precision: 0.614617940199 Recall: 0.105053946621
Precision: 0.573566084788 Recall: 0.130607609313

It's not easy for me to understand his code.

pvthuy commented 6 years ago

I have the same question. How should the results be interpreted? Thanks in advance!

hsm207 commented 6 years ago

Maybe there are 18 relations in the dataset and the file is reporting the precision and recall for each relation?

Ideally, the precision after training the model for 100 epochs with the default parameters should be about the same as the ResCNN-9 model in the paper's Table 2, right?

pvthuy commented 6 years ago

You meant 18 relations in the testing data?

hsm207 commented 6 years ago

@pvthuy Yes, I meant testing data has 18 relations.

li10141110 commented 6 years ago

@hsm207 @pvthuy there are 53 relations in the dataset

pvthuy commented 6 years ago

@li10141110 Yes. Actually, I counted 56 relations in the training data. But alright, let's say we have 53 relations, including the "NA" label (negative examples). BTW, how do you interpret the output file?

li10141110 commented 6 years ago

@pvthuy I think everything is OK. For example:

Correct: 1008 Total: 96678 Accuracy: 0.0104263638056 1761
Precision: 0.0 Recall: 0.0
Precision: 0.821782178218 Recall: 0.0471323111868
Precision: 0.691542288557 Recall: 0.0789324247587
Precision: 0.614617940199 Recall: 0.105053946621
Precision: 0.573566084788 Recall: 0.130607609313

Here 'Precision: 0.0 Recall: 0.0' means the single top prediction did not match the ground truth, but among the next 100 predictions some are correct, so we get 'Precision: 0.821782178218 Recall: 0.0471323111868'.
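This reading matches the usual P@N evaluation for distantly supervised relation extraction: predictions are sorted by confidence, and cumulative precision/recall are reported at growing cutoffs (here, every 100 predictions after the first). A minimal sketch of that computation, assuming this interpretation of the log (the function name and toy data are illustrative, not from the repo):

```python
def precision_recall_at_cutoffs(predictions, total_positives, step=100):
    """predictions: (confidence, is_correct) pairs sorted by confidence,
    highest first. total_positives: number of true relation facts in the
    test set (1761 in the logs above). Reports cumulative precision and
    recall at cutoff 1 and then after every `step` further predictions,
    matching the spacing of the Precision/Recall pairs in the output."""
    results, correct = [], 0
    for n, (_, is_correct) in enumerate(predictions, start=1):
        correct += int(is_correct)
        if n % step == 1:  # cutoffs 1, 101, 201, ...
            results.append((correct / n, correct / total_positives))
    return results

# Toy run: the top-ranked prediction is wrong, most of the rest are right,
# which reproduces the "0.0 then high precision" pattern from the logs.
preds = [(1.0 - i / 1000, i % 5 != 0) for i in range(300)]
for p, r in precision_recall_at_cutoffs(preds, total_positives=1761):
    print("Precision: %.3f Recall: %.4f" % (p, r))
```

Under this scheme, 0.821782178218 at the second cutoff is 83 correct out of 101 predictions, which is consistent with the logged numbers.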

li10141110 commented 6 years ago

Do you guys have WeChat?

li10141110 commented 6 years ago

I have asked the author a question about my results on WeChat.

li10141110 commented 6 years ago

@pvthuy alright.

DarkyMay commented 6 years ago

@li10141110 I am also puzzled by the results. Could you send me the details of your results? My email is: daiyuan95@foxmail.com. Thanks!

charlesfufu commented 5 years ago

@pvthuy alright.

I cannot understand the result; can you explain it in more detail? Thank you very much!