ai-se / Pits_lda

IST journal 2017: Tuning LDA
https://github.com/amritbhanu/LDADE-package

Classification using LDA #26

Open amritbhanu opened 7 years ago

amritbhanu commented 7 years ago

Experiment Setup

We have the baseline results for SVM without SMOTE and for SVM with SMOTE.
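
For reference, a minimal sketch of what those two baselines could look like, assuming scikit-learn and a recent imbalanced-learn (older releases spell `fit_resample` as `fit_sample`); the synthetic data here is only a placeholder for our real features:

```python
# Sketch of the two baselines: SVM without SMOTE vs. SVM with SMOTE.
# Assumes scikit-learn + imbalanced-learn; the synthetic data is a placeholder.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Placeholder data: an imbalanced two-class problem.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Baseline 1: SVM, no SMOTE.
no_smote = LinearSVC().fit(X_train, y_train)
print(classification_report(y_test, no_smote.predict(X_test)))

# Baseline 2: oversample the training data with SMOTE, then SVM.
X_res, y_res = SMOTE(random_state=1).fit_resample(X_train, y_train)
with_smote = LinearSVC().fit(X_res, y_res)
print(classification_report(y_test, with_smote.predict(X_test)))
```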

amritbhanu commented 7 years ago

http://www.sciencedirect.com/science/article/pii/S0164121216300528

timm commented 7 years ago

amrit... is the paper all done? like do that before moving on

t

amritbhanu commented 7 years ago

I am on it prof

amritbhanu commented 7 years ago

@timm Here is the result of using LDA to automatically label the documents and then training a learner on those labels.
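
Roughly, the idea is the sketch below (scikit-learn names; the corpus, topic count, and learner are placeholders, not the exact settings behind these results):

```python
# Sketch: use LDA topic assignments as automatic labels, then train a learner.
# Assumes scikit-learn; corpus, topic count, and learner are illustrative only.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]

# Term counts -> per-document topic distributions.
counts = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=10, random_state=1)
doc_topics = lda.fit_transform(counts)

# "Automatic label" = the dominant topic of each document.
labels = np.argmax(doc_topics, axis=1)

# Then a learner is trained to predict those labels from the term counts.
print(cross_val_score(LinearSVC(), counts, labels, cv=5).mean())
```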

We can't reproduce the results from the paper, due to:

Experiment:

Conclusion

Results:


timm commented 7 years ago

am now lost in the details.

please bust fscore into precision and recall
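
fwiw the breakdown is just F1 = 2 * precision * recall / (precision + recall); e.g. with scikit-learn (the label arrays below are only placeholders):

```python
# Report precision, recall, and F-score separately instead of F-score alone.
# y_true / y_pred are placeholders for the actual and predicted labels.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # f1 = 2*p*r / (p + r)
```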

this looks like no win with tuning... right?

please write this up as a 2-4 page pdf doc. define all your terms. don't worry about the start up sections (motivation, background)

but what is your justification for "baseline"? what papers use "baseline"?

t

amritbhanu commented 7 years ago

Yes, no win with tuning, but our result numbers shown to LN might change; the conclusion might stay the same or not.

My baseline results are from our BIGDSE paper, where we just used the hashing trick with SVM as the baseline.
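
For concreteness, that baseline is roughly the sketch below (scikit-learn's HashingVectorizer feeding an SVM; the corpus and feature count here are placeholders, not the exact BIGDSE settings):

```python
# Sketch of the hashing-trick + SVM baseline.
# Assumes scikit-learn; corpus, labels, and n_features are illustrative only.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(remove=("headers", "footers", "quotes"))
docs, labels = data.data[:2000], data.target[:2000]

baseline = make_pipeline(
    HashingVectorizer(n_features=2**12),  # the hashing trick: fixed-size feature space
    LinearSVC(),
)
print(cross_val_score(baseline, docs, labels, cv=5, scoring="f1_macro").mean())
```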

I will compile all these terms and my thoughts into a white paper soon.

timm commented 7 years ago

fyi- you may need to tune (1) the feature extraction (of the topics) AND (2) the learner to get improved performance.

right now ur just tuning (1) right?

without doing (2), what you could do is show conclusion instability (a venn diagram of documents classified XYZ via untuned feature extraction, repeated 10 times on 10 different data orderings) -- see the sketch below

with (2) you might get the kinds of improvements wei reported
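
e.g. something like this sketch (scikit-learn names; the corpus, topic count, and repeat count are placeholders, and the topic-matching step needed for the actual venn diagram is left out):

```python
# Sketch of the instability check: repeat untuned LDA on shuffled data orderings
# and count how often documents keep the same dominant topic.
# Corpus, topic count, and number of repeats are placeholders.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:1000]
counts = CountVectorizer(max_features=2000, stop_words="english").fit_transform(docs)

labelings = []
rng = np.random.RandomState(1)
for run in range(10):
    order = rng.permutation(counts.shape[0])          # a different data ordering each run
    lda = LatentDirichletAllocation(n_components=10, learning_method="online",
                                    random_state=1)   # online updates are order-sensitive
    topics = lda.fit_transform(counts[order])
    labels = np.empty(counts.shape[0], dtype=int)
    labels[order] = topics.argmax(axis=1)             # map labels back to original order
    labelings.append(labels)

# Raw agreement understates stability because topic indices are not aligned
# across runs; a matching step (or the venn diagram) is needed for the real report.
print(np.mean([np.mean(labelings[0] == l) for l in labelings[1:]]))
```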

amritbhanu commented 7 years ago