Open TommyJones opened 8 years ago
Are you thinking about adding a correspondence analysis (CA) option as well? Arguably, CA could be tapping into underlying linguistic properties a bit better than PCA.
I hadn't thought about it. From this paper (http://www.aclweb.org/anthology/W08-2007), it seems that this would work on a term co-occurrence matrix rather than a document-term matrix, right? I have no problem implementing CA, but it depends on two things. First, I'll have to wait for text2vec version 0.3 to be released (coming soon) to get the term co-occurrence matrix. Second, I'd have to look into whether implementations of some of the intermediate methods exist for sparse matrices. (If not, I may be able to write them myself.)
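For what it's worth, the core CA computation needs little beyond an SVD and works on any non-negative contingency-style matrix, whether a TCM or a DTM. A minimal base-R sketch (not textmineR code; the function name `simple_ca` and the toy matrix are made up for illustration):

```r
# Correspondence analysis from scratch: SVD of the standardized residuals
# of the correspondence matrix. X is any non-negative contingency-style
# matrix (a small DTM or TCM both work); k is the number of dimensions kept.
simple_ca <- function(X, k = 2) {
  P  <- X / sum(X)                 # correspondence matrix (relative freqs)
  r  <- rowSums(P)                 # row masses
  cm <- colSums(P)                 # column masses
  # standardized residuals: D_r^{-1/2} (P - r c') D_c^{-1/2}
  S <- diag(1 / sqrt(r)) %*% (P - r %*% t(cm)) %*% diag(1 / sqrt(cm))
  s <- svd(S, nu = k, nv = k)
  list(
    row_coords = diag(1 / sqrt(r))  %*% s$u %*% diag(s$d[1:k], k, k),
    col_coords = diag(1 / sqrt(cm)) %*% s$v %*% diag(s$d[1:k], k, k),
    inertia    = s$d^2               # principal inertias per dimension
  )
}

# toy document-term matrix
dtm <- matrix(c(5, 1, 0,
                1, 4, 2,
                0, 2, 6), nrow = 3, byrow = TRUE)
res <- simple_ca(dtm, k = 2)
```

For large sparse matrices, the `svd()` call would be the piece to swap out for a truncated sparse SVD.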
If you want, you can open up an issue for me to look into this. I'll do my best.
On Mon, Mar 21, 2016 at 2:03 PM smikhaylov notifications@github.com wrote:
You can run it on a DFM directly. That's how it's implemented in quanteda's textmodel_ca function.
It calls the ca package under the hood. Another option is the vegan package; vegan is widely used in ecology and has more functionality.
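A hedged sketch of that route, assuming quanteda's documented `textmodel_ca()` interface (which in recent quanteda releases lives in the companion quanteda.textmodels package); the wrapper function name and toy texts are made up, and the snippet degrades gracefully when the packages are not installed:

```r
# Fit CA directly on a quanteda dfm via textmodel_ca().
# Returns NULL if quanteda / quanteda.textmodels are unavailable.
fit_ca_on_dfm <- function(texts) {
  if (!requireNamespace("quanteda", quietly = TRUE) ||
      !requireNamespace("quanteda.textmodels", quietly = TRUE)) {
    return(NULL)
  }
  dfmat <- quanteda::dfm(quanteda::tokens(texts))
  quanteda.textmodels::textmodel_ca(dfmat)  # ca-style object
}

fit <- fit_ca_on_dfm(c(d1 = "a b b c", d2 = "a a c d", d3 = "b d d e"))
if (!is.null(fit)) head(fit$rowcoord)  # document (row) coordinates
```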
Btw, quanteda is another higher-level framework implementation.
I will look at quanteda as well. I'm going to do benchmarks on SVD from irlba, RSpectra, and quanteda. I'll implement the version that seems fastest/most scalable. At the end of the day, all of LSA, PCA, and CA rely on SVD. So, it's just a matter of which one works best.
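The "all three rely on SVD" point can be checked directly in base R. This sketch (variable names are illustrative, not from any of the packages mentioned) recovers PCA scores from `svd()` on the centered matrix and compares them to `prcomp()`:

```r
# PCA is just the SVD of the column-centered data matrix:
# scores = U %*% diag(d). LSA is the same SVD without centering,
# and CA is the SVD of standardized residuals.
set.seed(1)
X  <- matrix(rnorm(60), nrow = 10)           # stand-in for a small DTM
Xc <- scale(X, center = TRUE, scale = FALSE)  # center columns
s  <- svd(Xc)
pca_scores_svd <- s$u %*% diag(s$d)           # PCA scores via SVD
pca_scores     <- prcomp(X, center = TRUE, scale. = FALSE)$x
# identical up to per-column sign flips
max(abs(abs(pca_scores_svd) - abs(pca_scores)))  # ~ 0
```

For a large sparse DTM you would replace the dense `svd()` with a truncated solver such as `irlba::irlba()` or `RSpectra::svds()`, which is exactly what the benchmarking above is meant to decide.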
It seems that all three of textmineR, text2vec, and quanteda use the same data type. I am in the process of reworking textmineR to be a higher-level package built on text2vec. @dselivanov has done an amazing job of creating a framework that is faster and more scalable than any other I've seen (in any language), at least on a single machine. Maybe the quanteda maintainers would want to do the same?
My current plan (not written anywhere on GitHub) is to create wrappers for...
The goal is to have a library that uses similar syntax and returns similar objects to get a wide range of topic models so users don't have to hunt them all down. My personal PhD research focuses on evaluation metrics for topic models. So, textmineR has that functionality as well.
I think that sounds really good. And the combination with text2vec is great. Looking forward to seeing the development.
Add option to get PCA from LSA model