Hi @spartian, thanks! (I read your email too, answering here.)
In the Yahoo Answers dataset, there is only training and testing data; there is no validation data.
Therefore, I've now removed the validation function from train.py and added eval.py to evaluate the trained model (from train.py) on the testing data. So you will need to use eval.py for testing.
If your dataset has all three splits (training, validation, testing), then re-add the validation function in train.py for early stopping, and then evaluate the trained model on the testing data using eval.py.
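For reference, here is a minimal sketch of what such an early-stopping loop could look like. The `train_epoch`, `validate`, and `save_checkpoint` helpers below are hypothetical placeholders, not the actual functions in this repo:

```python
# A hypothetical early-stopping loop (train_epoch, validate, and
# save_checkpoint are placeholder names, not this repo's functions).
epochs = 30        # maximum number of epochs
patience = 3       # stop after this many epochs without improvement
best_val_acc = 0.0
epochs_since_improvement = 0

for epoch in range(epochs):
    train_epoch(model, train_loader, optimizer)  # one pass over the training split
    val_acc = validate(model, val_loader)        # accuracy on the validation split

    if val_acc > best_val_acc:
        best_val_acc = val_acc
        epochs_since_improvement = 0
        save_checkpoint(model, optimizer, epoch)  # the checkpoint eval.py should load
    else:
        epochs_since_improvement += 1
        if epochs_since_improvement >= patience:
            break  # validation performance has plateaued
```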
classify.py is simply for inference on single documents and for visualizing the attention. It is separate from the training and testing process; it uses the trained model for its ultimate purpose, which is classifying documents.
Note: I'm using PyTorch 0.4 in this repo, which is somewhat outdated. I think the latest versions make it easier to handle packed sequences, etc. For example, I don't think manual sorting by sequence length is necessary anymore. When I complete this tutorial, I will update the code.
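For instance, if I remember correctly, from PyTorch 1.1 onwards `pack_padded_sequence` accepts an `enforce_sorted=False` argument and handles the sorting internally, so something like this works without pre-sorting the batch:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# A padded batch of 3 sequences with unsorted true lengths.
batch = torch.randn(3, 5, 10)        # (batch_size, max_seq_len, embedding_dim)
lengths = torch.tensor([3, 5, 2])    # not sorted in decreasing order

# enforce_sorted=False (PyTorch >= 1.1) removes the need to manually sort
# the batch by decreasing length before packing.
packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=False)

rnn = torch.nn.GRU(input_size=10, hidden_size=20, batch_first=True)
packed_output, _ = rnn(packed)

# Unpack back to a padded tensor: (batch_size, max_seq_len, hidden_size).
output, output_lengths = pad_packed_sequence(packed_output, batch_first=True)
```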
Hey Sagar,
Thank you so much for replying and clearing up my confusion. So, basically, as I understand from your comment, train.py is now used for training and eval.py for testing. I have a question here. Suppose I am passing a live RSS news feed to your model in the hope of classifying it; would I pass it to eval.py or classify.py?
What I intend to do is train your model on specific categories of news like Sports, Politics, and Entertainment, then pass it live-feed news and categorize each item into one of those categories. How I would get only Sports, Politics, and Entertainment news from the live feed is another story. So which file should I run it on: classify.py or eval.py?
Note: I am asking about classify.py because in your comment you mentioned that classify.py is ultimately used for classifying documents with the trained model. But isn't eval.py also doing the same?
You are right that both files are classifying documents. But the context and the reason for the classification are different.
The objective of eval.py is only to evaluate the model's performance, i.e. to compute metrics/statistics of how well the model is performing on average. We're not interested in the actual classifications/categories of the individual documents; we're only interested in the average accuracy over the entire test dataset. Also, evaluation is run only once, at the end of the training process, just for the sake of reporting the model's performance.
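In code terms, eval.py boils down to something like this simplified sketch (not the exact code; `model` and `test_loader` stand in for the objects the script actually builds):

```python
import torch

# Simplified evaluation loop: only a running accuracy is accumulated;
# the individual predicted categories are discarded.
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for documents, labels in test_loader:   # batched for speed
        scores = model(documents)           # (batch_size, n_classes)
        predictions = scores.argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)

print(f'Test accuracy: {correct / total:.4f}')
```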
The objective of classify.py is inference. Here, we're actually using the model to perform its end goal of classifying documents because we want to know the category/classification label of each document. In addition, we can also visualize the attention of the model to the words/sentences in the document with the included visualization function.
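By contrast, inference in classify.py conceptually looks more like this. Again, just a sketch; `rev_label_map` and `visualize_attention` are illustrative names I'm using here, not necessarily what the script calls them:

```python
import torch

# Single-document inference: we want the actual category, in human-readable
# form, plus the attention weights for visualization.
model.eval()
with torch.no_grad():
    scores, word_alphas, sentence_alphas = model(document)  # one document
    prediction = scores.argmax(dim=1).item()

print('Category:', rev_label_map[prediction])                # e.g. "Sports"
visualize_attention(document, word_alphas, sentence_alphas)  # attention heatmap
```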
The conditions or requirements of evaluation and inference can be different. For instance, during evaluation, there's no need to convert predictions into their final human-interpretable form. Also, evaluation requires prediction in batches for quick evaluation on the test data, whereas in practice people might want to perform inference on individual samples, one at a time. During inference, the final predictions are also often presented in a special way, such as with a visualization.
In my repos, I usually include one file for evaluation and one file for inference. For example, in my image captioning tutorial, you will find eval.py and caption.py. In my object detection tutorial, you will find eval.py and detect.py.
You can use either of these two files to accomplish your task, adapting it to whatever form is most convenient for your particular use case. If you're using the model to classify one document at a time, you can use classify.py in its current form. If not, you can modify it to handle batches of documents, similar to how it's done in eval.py.
In a nutshell, although both files are performing classification, eval.py does it to measure the model's overall performance, while classify.py does it to actually obtain (and visualize) the category of each individual document.
Thank you for such a detailed answer, Sagar... really appreciate it! :)
@spartian No problem, glad to help, thanks!
Hello,
I am currently trying to improve the HAN model, and this is the best implementation I have found on GitHub. So first of all, thank you for that.
As I dove deep into the code, I could not figure out one thing. In machine learning generally, we train a model and then use that trained model for validation. However, around line 110 of train.py I found code along these lines (paraphrasing, not the exact snippet):
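```python
# Paraphrased pattern (not the exact code): the same in-memory `model`
# is passed to training and then straight to validation within each
# epoch, without being saved and reloaded in between.
for epoch in range(start_epoch, epochs):
    train(train_loader=train_loader, model=model,
          criterion=criterion, optimizer=optimizer, epoch=epoch)
    validate(val_loader=val_loader, model=model, criterion=criterion)
```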
Now, as I understand it, first the training happens and then the validation happens. My question is: shouldn't this model be saved somewhere first, and then validation occur using that saved model? Right now, the model is trained and then validated directly, without being saved and reloaded in between.
For example, in your classify.py code, you load the model from the best checkpoint. Shouldn't the training and testing happen like that?
Hoping for your reply