-
Experiment with and implement classification models:
1. Decision Tree
2. Random Forest
3. Logistic Regression
4. Naive Bayes
Evaluate and compare models on metrics (accuracy, precision, …
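The task above can be sketched with scikit-learn; the dataset and hyperparameters below are placeholders for illustration, not part of the original assignment:

```python
# Hedged sketch: fit and compare the four classifiers on a stand-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f}")
```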
-
Hello author, I would like to know how you implemented the true-zero rate and F1-score. Could you make the code for these two metrics public?
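For illustration only (this is not the author's code): assuming "true-zero rate" refers to the true negative rate (specificity), both metrics can be computed from a binary confusion matrix, e.g.:

```python
from sklearn.metrics import f1_score, confusion_matrix

# Toy labels, purely illustrative.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]

f1 = f1_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tnr = tn / (tn + fp)  # true negative rate (specificity)
print(f"F1={f1:.3f}  TNR={tnr:.3f}")
```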
-
Hi, when I change the language to Chinese, CAPTURE cannot handle it. The score is 0:
CAPTURE: 0.0
[0.0]
[{'sample_key': 'example_0', 'gt_parsed': [([], {}, set())], 'cand_parsed': ([], {}, …
-
Hi, I'm wondering how you calculated the F1 score. Wikipedia says:
![image](https://user-images.githubusercontent.com/35657106/92929396-23e19e00-f449-11ea-8b80-d0fadddb7002.png)
Here is how I would calc…
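For reference, a minimal sketch of the textbook definition in the Wikipedia formula above, computed from raw counts (the counts here are made up):

```python
# F1 as the harmonic mean of precision and recall.
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1(8, 2, 2))  # 0.8
```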
-
It looks like the F1 score and other metrics reported in the paper use the PA adjustment method.
I want to flag that this method has been shown to overestimate performance.
> Kim, S., Choi, K., …
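For readers unfamiliar with PA: it marks an entire ground-truth anomaly segment as detected if any single point inside it is flagged. A minimal sketch (not the paper's implementation) showing how one hit inflates a whole segment:

```python
import numpy as np

def point_adjust(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """PA adjustment: if any point in a ground-truth anomaly segment
    is flagged, mark the entire segment as detected."""
    pred = pred.copy()
    in_seg, start = False, 0
    for i in range(len(gt)):
        if gt[i] == 1 and not in_seg:
            in_seg, start = True, i
        if in_seg and (i == len(gt) - 1 or gt[i + 1] == 0):
            if pred[start:i + 1].any():
                pred[start:i + 1] = 1
            in_seg = False
    return pred

gt   = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0])
pred = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0])
# One detected point in the first segment turns into three "detections".
print(point_adjust(pred, gt))
```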
-
The bug is here https://github.com/autrainer/autrainer/blob/2cdd8bb64ca6550a69a58e22bf291664c6d2a9f0/autrainer/training/training.py#L814
and is caused by the use of `average='micro'` or `average='m…
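As a standalone illustration (not the autrainer code itself): for single-label multiclass data, `average='micro'` F1 collapses to plain accuracy, which can hide per-class failures that `average='macro'` exposes:

```python
from sklearn.metrics import f1_score, accuracy_score

# Toy multiclass labels; class 1 is never predicted correctly.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 0, 0]

print(f1_score(y_true, y_pred, average='micro'))  # equals accuracy
print(accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred, average='macro'))  # penalized by class 1
```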
-
I tried evaluating a single sentence against itself and I got a Smatch score greater than one (!!). Any idea why?
Thank you.
Details below:
python smatchnew/smatch/smatch.py -f q3.txt q3.txt
F…
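For context, a sketch of how the Smatch F-score is formed from triple counts (simplified; the real tool first searches for the best variable mapping). Scoring a file against itself should make all three counts equal, so F should be exactly 1.0; a score above 1 suggests the matched-triple count exceeded the totals:

```python
# Smatch-style F from matched (m), gold (g), and candidate (c) triple counts.
def smatch_f(m: int, g: int, c: int) -> float:
    precision = m / c
    recall = m / g
    return 2 * precision * recall / (precision + recall)

print(smatch_f(10, 10, 10))  # self-comparison: 1.0
```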
-
Hello!
I am trying to apply your method (a threshold classifier based on PHD or MLE) to the Ghostbusters dataset (https://github.com/vivek3141/ghostbuster). Here is my code, which is based on yours:
```
…
-
Dear Author,
We are very interested in your work, so we retrained the model following your steps (by the way, it seems that your documentation does not specify the version of seqeval, and the vers…
-
Attempting to retrain the model with MR=0.0 results in significantly lower performance (F1 0.55, compared to 0.85 in the paper).
Moreover, even loading the pretrained model with MR=0.0 yields about …