Open lmlyzxiaop opened 6 years ago
Sorry, that was my mistake. I have (maybe) fixed it. I'm currently writing papers, including my PhD thesis, so I'm too busy to verify it right now, but I'll check it within this year at the latest. Thank you!
(Maybe I also need to adjust my code, because I use the neural network library Chainer, which has been updated recently.)
Thanks for your reply. There are also some problems in your code. I will try to learn Chainer and fix it.
I have a question: is the result written to the margin file while training the model?
And where is the classification result?
For the first question: we calculate scores (margins) only for the triplets contained in the dev and test sets.
For the second question: the evaluate function in main.py saves the scores (margins). To get the triplet classification result, you need to choose a threshold and use it to decide whether the dev/test triplets associated with those scores are correct or not.
I'll try to fix this code tonight. Sorry for my late response.
I have already run your code on my machine; there are two problems. First, in backend.py, the 'volatile' argument in array_int and array_float is not allowed to be used anymore, so I just deleted it. Second, in modelI.py, dropout in __call__ no longer supports the 'train' argument; I replaced it with "with chainer.using_config('train', is_train): x = F.dropout(x, ratio=self.dropout_rate)".
The fixes are right. In particular, removing "volatile" in array_float and "train" in dropout prevents the errors. And if you want to recover their effect (e.g., for reducing memory usage), add the following to the dev/test case: with chainer.using_config('train', False), chainer.no_backprop_mode():
Thanks for your suggestions. I will try to add the code.
Maybe I fixed it. If you download this repository and unzip each dataset, you can run the code with just "python main.py". (I hope so.)
If you want to use your own new model, first write the model in the "models" dir. Then add the model's name and import it in manager.py in the "models" dir, so that you can select your model with the option '-nn your_model_name'. For example, "python main.py -nn X" runs model X, if you have modified manager.py accordingly.
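As an illustration, the registration step described above could look roughly like this (the class names and the manager's internals here are hypothetical stand-ins, not the repository's actual code):

```python
# Hypothetical stand-ins for model classes defined in the "models" dir.
class ModelA:
    pass

class ModelX:
    pass

# A manager.py of this style maps each '-nn' option name to a model class;
# adding a new model means adding one import and one entry here.
MODELS = {'A': ModelA, 'X': ModelX}

def get_model(name):
    if name not in MODELS:
        raise KeyError('no such model %s' % name)
    return MODELS[name]
```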
Okay, thank you very much. That's awesome
It's a pleasure; feel free to ask! But this update was a rush job, so if you find bugs or errors you cannot solve, please report them. Thank you!
In the margin file, the format is <head, relation, tail, label, score>. Should I use the dev results to find the best threshold and then apply it to the test set to get the classification result?
Yes. In general, the dev dataset is used for finding the best parameters, including model parameters, architecture, and other hyper-parameters. This case is no exception.
Okay, I got it. I thought you had implemented the classification function in your code, but I did not find it. Thank you very much.
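For instance, selecting the threshold on the dev margins could be sketched like this. The rows below are made-up examples of the <head, relation, tail, label, score> format, and a triplet is predicted correct when its score is below the threshold (the rule described later in this thread):

```python
# Made-up dev rows in the (head, relation, tail, label, score) format.
dev = [
    ('a', 'r1', 'b',  1, 0.3),
    ('c', 'r1', 'd',  1, 0.8),
    ('e', 'r2', 'f', -1, 2.1),
    ('g', 'r2', 'h', -1, 1.5),
]

def best_threshold(rows):
    """Pick the threshold that maximizes accuracy on the given rows.

    A triplet is predicted correct when its score is below the threshold.
    """
    cands = sorted({score for *_, score in rows})
    cands.append(cands[-1] + 1e-6)  # also allow "everything is correct"
    best_t, best_acc = cands[0], -1.0
    for t in cands:
        acc = sum((s < t) == (lab == 1) for *_, lab, s in rows) / len(rows)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

The threshold found on dev is then applied unchanged to the test margins to produce the final classification result.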
I added a script that draws the score history, with related examples, in the draw-score-history dir. You can use it to check how well your models learn. (I can't explain it in detail now because I'm so sleepy.) I did binary classification using thresholds: if a score is lower than the threshold, I classify the corresponding triplet as correct, and vice versa. These procedures are needed because our model is based on previous score functions, which provide scores only. Our interest is not in modifying and improving the score functions, but in dealing with OOKB.
Thank you very much for taking the time to answer my questions and adjust your code. I will read the code tomorrow. Good night.
Me too. I'll go to bed soon after finishing my tasks... Good night!
I refined the drawing script and added an example script that obtains thresholds from scores. If you want to use other methods to get thresholds, add them to the script.
Then I will (maybe) focus on my papers. Thank you!
Thank you very much.
I have a problem using the GPU to run your model. I am sure my GPU is fine, and I run "python main.py -g", but the speed is no different from using the CPU, which is too slow.
To accelerate computation on a GPU, we need to use matrix-vector calculation as much as possible. In other words, we need to batch the calculations together inside the models. But that is difficult for GNNs, at least for me. (The complicated or dirty parts of my models are the results of my attempts; perhaps only using a larger batch size can help.) Although I don't know how slow it is in your setting, using a GPU does not accelerate this implementation very much.
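A minimal NumPy sketch of what "batching the calculations together" means (the shapes are illustrative): one matrix-matrix product over the whole batch replaces many small per-sample products, and only the batched form lets a GPU launch one large kernel instead of many tiny ones.

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(4, 4).astype(np.float32)   # a weight matrix
xs = rng.randn(8, 4).astype(np.float32)  # a batch of 8 input vectors

# Per-sample loop: many tiny operations (slow on a GPU).
loop_out = np.stack([W.dot(x) for x in xs])

# Batched form: a single matrix-matrix product over the whole batch.
batch_out = xs.dot(W.T)
```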
I got it. Thanks for everything you have done. Thank you!
I found a problem in your dataset: there is an entity that appears in the test set of WN11 but not in the train set.
Removing them is the easiest way. But if you want to use them, initialize the corresponding vectors. In fact, they do not affect evaluation much, because there are not many such entities (if I remember correctly).
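A small sketch of the "initialize corresponding vectors" option (the embedding dimension, the init range, and the dict-based lookup are arbitrary choices for illustration, not the repository's actual storage):

```python
import numpy as np

dim = 2
rng = np.random.RandomState(0)
# Embeddings learned during training; only train entities are present.
emb = {'train_entity': np.zeros(dim, dtype=np.float32)}

def get_vector(entity):
    # An entity appearing only in test (as in the WN11 case above)
    # gets a freshly initialized vector instead of crashing the lookup.
    if entity not in emb:
        emb[entity] = rng.uniform(-0.1, 0.1, dim).astype(np.float32)
    return emb[entity]
```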
OK, I'll try. Thank you!
In your code the default dataset is FB13. I replaced it with dataset/OOKB/both-1000 (train, dev, aux, test), and when I run the code I get "no such model H". Could you help me?