Open amritbhanu opened 7 years ago
amrit... is the paper all done? like do that before moving on
t
I am on it prof
@timm Here is the result of using LDA to automatically label the documents and then use a learner.
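A minimal sketch of that "LDA to label, then learn" pipeline, assuming scikit-learn; the toy documents, topic count, and choice of LinearSVC are illustrative assumptions, not the actual experimental setup:

```python
# Hypothetical sketch: extract LDA topic features, then train a learner on them.
# Docs, labels, and n_components are toy assumptions, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

docs = ["bug in parser", "crash on startup", "add dark theme", "feature request themes"]
labels = [0, 0, 1, 1]  # toy labels: 0 = bug report, 1 = feature request

# (1) feature extraction: term counts -> LDA topic distributions
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=42)
topics = lda.fit_transform(counts)  # shape: (n_docs, n_topics)

# (2) the learner: an SVM trained on the topic features
learner = LinearSVC().fit(topics, labels)
preds = learner.predict(topics)
```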
From the paper, we can't reproduce the results, due to:
I am now lost in the details.
please break the F-score out into precision and recall
this looks like no win with tuning... right?
please write this up as a 2-4 page PDF doc. define all your terms. don't worry about the start-up sections (motivation, background)
but what is your justification for "baseline"? what papers use "baseline"?
t
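Splitting an F-score back into its two components can be sketched like this (toy labels, assuming scikit-learn metrics):

```python
# Report precision and recall alongside F1, rather than F1 alone.
# y_true / y_pred are toy values for illustration.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
# F1 is the harmonic mean: 2 * prec * rec / (prec + rec),
# so the same F1 can hide very different precision/recall trade-offs.
```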
Yes, no win with tuning, but the result numbers shown to LN might change. The conclusion might or might not remain the same.
My baseline results are from our BIGDSE paper, where we just used the hashing trick with an SVM as the baseline.
I will compile all these terms and my thoughts into a white paper soon.
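For reference, a BIGDSE-style "hashing trick + SVM" baseline can be sketched as below; the documents, label scheme, and feature width are assumptions for illustration, not the paper's configuration:

```python
# Sketch of a hashing-trick baseline: fixed-width hashed term features into an SVM.
# Toy data; n_features=2**10 is an illustrative choice, not the paper's setting.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC

docs = ["null pointer exception", "segfault on exit", "improve docs", "update readme"]
labels = [0, 0, 1, 1]

# Hashing trick: terms are hashed into a fixed feature space,
# so no vocabulary needs to be stored or fit.
X = HashingVectorizer(n_features=2**10).transform(docs)
baseline = LinearSVC().fit(X, labels)
```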
fyi- you may need to tune (1) the feature extraction (of the topics) AND (2) the learner to get improved performance.
right now you're just tuning (1), right?
without doing (2), what you could do is show conclusion instability (a venn diagram of documents classified XYZ via untuned feature extraction, repeated 10 times on 10 different data orderings).
with (2) you might get the kinds of improvements wei reported
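The instability check suggested above can be sketched as follows, assuming scikit-learn LDA; the documents are toy data, and "classified XYZ" is approximated here by each document's dominant topic:

```python
# Sketch of the instability experiment: rerun untuned LDA feature extraction on
# 10 shuffled data orderings and check whether documents keep the same topic.
# Toy docs; dominant topic stands in for the downstream classification.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["bug in parser", "crash on startup", "add dark theme",
        "feature request themes", "login fails", "new export option"]
counts = CountVectorizer().fit_transform(docs).toarray()

assignments = []
for seed in range(10):
    order = np.random.RandomState(seed).permutation(len(docs))
    lda = LatentDirichletAllocation(n_components=2, random_state=seed)
    topics = lda.fit_transform(counts[order])
    # dominant topic per document, mapped back to the original order
    dom = np.empty(len(docs), dtype=int)
    dom[order] = topics.argmax(axis=1)
    assignments.append(tuple(dom))

# more than one distinct labeling across the 10 runs = conclusion instability
n_distinct = len(set(assignments))
```

Note that LDA topic indices can also swap between runs (label switching), which is itself one face of the instability being measured.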
Experiment Setup
We have baseline results for SVM without SMOTE and SVM with SMOTE.