TyJK / EchoBurst

A browser extension that uses sentiment analysis to find and highlight constructive comments on social media platforms that oppose the user's worldview, in order to encourage users to break out of the echo chambers the internet has allowed us to construct.
MIT License

NLP Models and Data Collection Discussion #8

Open TyJK opened 7 years ago

TyJK commented 7 years ago

A Discussion on the Best NLP and Data Collection Approaches

This is a place where we hope to generate discussion, with both experts and non-experts, on how we plan to move forward in the immediate future toward a classification model for topic modeling and sentiment analysis. We've included data collection in this, as none of it can proceed until we have some labelled data.

The scope of this discussion can include:

A Brief Overview of Our Current Plan

We welcome questions and suggestions with regards to these topics, so please feel free to drop a comment.

PSanni commented 7 years ago

Classification: which classification approach are you planning to use, document-based or sentence-based? Because, as you might know, if you take a sentence-based approach you will need a set of labeled sentences. :)

TyJK commented 7 years ago

We would be looking at document-based classification, with each website assigned labels for sentiment and topic, and each post, comment, or entry (however the site is organized) given a unique document ID, most likely assigned through simple enumeration. So when it comes to model construction, a given document would have a unique ID, but would also be part of larger groups based on the other tags.
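The scheme described above (a unique enumerated ID per document, plus shared group tags) could be sketched roughly as follows. The record layout, field names, and tag format here are all hypothetical, just to make the idea concrete; in practice these tags would presumably feed into something like gensim's `TaggedDocument` for Doc2Vec.

```python
# Sketch of the proposed tagging scheme: each document gets a unique
# enumerated ID, plus shared group tags for site, topic, and sentiment.
# All names here are illustrative, not the project's actual code.
from dataclasses import dataclass
from itertools import count

@dataclass
class TaggedDoc:
    doc_id: str        # unique per document: "DOC_0", "DOC_1", ...
    group_tags: list   # shared tags: "site:...", "topic:...", "sentiment:..."
    words: list        # tokenized text

def tag_documents(records):
    """records: iterable of (site, topic, sentiment, text) tuples."""
    counter = count()  # linear enumeration, as discussed
    for site, topic, sentiment, text in records:
        yield TaggedDoc(
            doc_id=f"DOC_{next(counter)}",
            group_tags=[f"site:{site}", f"topic:{topic}", f"sentiment:{sentiment}"],
            words=text.lower().split(),
        )

docs = list(tag_documents([
    ("example.com", "climate", "pro", "The evidence is clear"),
    ("example.com", "climate", "anti", "I remain unconvinced"),
]))
```

Because every `doc_id` is unique, Doc2Vec would treat each document separately, while the shared group tags still let related documents be analyzed together.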

PikioopSo commented 7 years ago

@TyJK, the document ID could be assigned via date metadata, so that you can do analysis across spans of time, but I wasn't quite sure what type of enumeration system you were going with.

TyJK commented 7 years ago

@PiReel I was going to use a simple count. Doc2Vec only requires that document tags be unique in order to keep documents separate (all documents sharing the same tag are treated as one document). It probably doesn't matter for the number of documents we'll get, but enumerating linearly saves memory. Luckily, in my experiments so far it naturally organizes by date, since that's usually how site archives are ordered. I'll have a few examples of test runs up later today.

PSanni commented 7 years ago

Great. If we are using documents, then we need to select websites and topic content carefully, because there is a higher chance of diverse information appearing on the same content or website, and that can easily throw off the model. We could include a subjectivity classification step, so that we can use subjectivity to remove unhelpful sentences/information.

I am not sure, but I think Word2Vec might be able to do this? I haven't tried it. Is anyone aware of whether it can?
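The subjectivity-filtering idea could be prototyped with something as simple as a cue-word lexicon before reaching for a trained classifier. The word list and threshold below are purely illustrative placeholders, not a vetted subjectivity lexicon.

```python
# Toy lexicon-based subjectivity scorer. The cue words and threshold
# are illustrative assumptions, not a real subjectivity resource.
SUBJECTIVE_CUES = {"think", "feel", "believe", "terrible", "amazing", "hate", "love"}

def subjectivity_score(sentence):
    """Fraction of tokens that are subjective cue words."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in SUBJECTIVE_CUES)
    return hits / len(tokens)

def filter_by_subjectivity(sentences, threshold=0.1):
    """Keep only sentences scoring at or above the subjectivity threshold."""
    return [s for s in sentences if subjectivity_score(s) >= threshold]
```

A real pipeline would more likely use a trained subjectivity classifier (or a resource like TextBlob's subjectivity score), but this shows where such a filter would sit in the data-cleaning stage.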

TyJK commented 7 years ago

Word2Vec probably could do this, but I think we might need to use it as a secondary filter rather than a primary one. That is, I think we could run it on each website/video transcript after it has been scraped and cleaned, to make sure nothing that wasn't part of the category got through, but I don't know how we could use it to help in the selection process itself. Hopefully people will be careful; we do mention it numerous times, but if you have any suggestions for making it clearer to people, I'm all ears.
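One way the secondary-filter idea might work: average each scraped document's word vectors and compare that against a category centroid, dropping documents that fall too far away. The sketch below uses hand-made toy vectors in place of real Word2Vec embeddings, and the threshold is an assumed placeholder.

```python
# Sketch of a post-scrape category filter: keep a document only if its
# mean word vector is close (cosine similarity) to the category centroid.
# Toy vectors stand in for real Word2Vec embeddings; names are illustrative.
import math

def mean_vector(words, embeddings):
    """Average the vectors of the words we have embeddings for."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_by_category(docs, centroid, embeddings, threshold=0.5):
    """Keep documents (token lists) close enough to the category centroid."""
    kept = []
    for doc in docs:
        vec = mean_vector(doc, embeddings)
        if vec is not None and cosine(vec, centroid) >= threshold:
            kept.append(doc)
    return kept
```

With real embeddings, the centroid could simply be the mean vector of a few hand-picked seed terms for the category, which fits the "secondary filter after scraping and cleaning" role described above.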