jpwahle / cs-insights-crawler

This repository implements the interaction with DBLP, the information extraction and pre-processing of papers, and a client that stores the data in the cs-insights-backend.
https://aclanthology.org/2022.lrec-1.283.pdf
Apache License 2.0

Step 2: First look into the data #3

Closed: trannel closed this issue 3 years ago

trannel commented 3 years ago

First, we should take a look at the data we have by analysing keywords and using tf-idf.

trannel commented 3 years ago

I checked what was wrong with the tf-idf function: I had simply overlooked that it generates a matrix, and by using `.idf_` I was already getting the weighting for each feature/token out of it. I changed the function so you can access the matrix more easily.

We can get the highest tf-idf scores using this, which gives us the following: Highest tf-idf scores in selection: [('+0.4', 1, 6.938854596835685), ('+0.6', 1, 6.938854596835685), ('+0.7', 1, 6.938854596835685), ('+1', 1, 6.938854596835685), ('-25.3', 1, 6.938854596835685), ('-50.5', 1, 6.938854596835685), ...], followed by some links. As you can see, removing numbers isn't trivial: I can only give sklearn a list of words to remove, and putting every number in every possible form into that list is not feasible. Numbers no longer appear in the visualizations done with scattertext or pyLDAvis, though.
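For illustration, a minimal sketch of how such a ranking can be produced with scikit-learn, assuming the tuples are (token, document frequency, idf weight); the toy corpus below is a stand-in, not the repo's actual code:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = [  # placeholder documents standing in for the paper texts
    "information extraction from papers",
    "keyword extraction and tf idf weighting",
    "preprocessing of papers before weighting",
]

count_vec = CountVectorizer()
counts = count_vec.fit_transform(corpus)
tfidf = TfidfTransformer().fit(counts)

# Pair each token with its document frequency and idf weight, then sort
# so the rarest (highest-idf) tokens come first.
doc_freq = np.asarray((counts > 0).sum(axis=0)).ravel()
ranking = sorted(
    zip(count_vec.get_feature_names_out(), doc_freq, tfidf.idf_),
    key=lambda item: item[2],
    reverse=True,
)
print("Highest tf-idf scores in selection:", ranking[:6])
```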

The issue with the counting during the demo was caused by a missing default value, which let the CLI overwrite the other default value.

truas commented 3 years ago

Thanks for the update, Lennart. I wonder if `.idf_` is just the inverse document frequency part of the equation. In any case, if we can access the entire matrix, it should be fine.
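For reference, scikit-learn documents `.idf_` as exactly the inverse document frequency term: with the default `smooth_idf=True` it is idf(t) = ln((1 + n) / (1 + df(t))) + 1, which a quick sanity check on a toy corpus confirms:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ["the cat sat", "the dog sat", "the cat ran"]  # toy corpus
counts = CountVectorizer().fit_transform(docs)
tfidf = TfidfTransformer(smooth_idf=True).fit(counts)

# Recompute idf(t) = ln((1 + n) / (1 + df(t))) + 1 by hand and compare.
n = counts.shape[0]
df = np.asarray((counts > 0).sum(axis=0)).ravel()
assert np.allclose(tfidf.idf_, np.log((1 + n) / (1 + df)) + 1)
```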

About the number issue: are you removing the numerical characters before running the tf-idf? I believe this would be easier, as we treat the input before using it in any processing, right before/after the stopword removal. I'm still wondering if we should use TfidfVectorizer instead of TfidfTransformer. The former is usually used when the input is the raw documents, the latter when you already have a count matrix. Also, with the former, several tasks can be automated with a parameter flag (e.g. stopword removal, n-grams, max features, min_df, token regex, etc.).
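For comparison, a minimal sketch of the two routes side by side (parameter names as in scikit-learn, toy documents made up); with matching parameters they yield the same matrix:

```python
import numpy as np
from sklearn.feature_extraction.text import (
    CountVectorizer,
    TfidfTransformer,
    TfidfVectorizer,
)

corpus = ["deep learning for nlp", "nlp for dependency parsing"]  # toy docs

shared = dict(stop_words="english", ngram_range=(1, 2), max_features=10_000, min_df=1)

# Route 1: TfidfVectorizer consumes raw documents directly and bundles the
# preprocessing flags (stopwords, n-grams, max features, min_df, token regex).
X_direct = TfidfVectorizer(**shared).fit_transform(corpus)

# Route 2: CountVectorizer + TfidfTransformer, for when a count matrix
# already exists.
counts = CountVectorizer(**shared).fit_transform(corpus)
X_two_step = TfidfTransformer().fit_transform(counts)

assert np.allclose(X_direct.toarray(), X_two_step.toarray())
```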

trannel commented 3 years ago

The CountVectorizer I use has the parameters you mentioned and creates the matrix the TfidfTransformer needs. I can check whether the results would be the same.

The parameters are also the issue for the stopwords: I can only pass a list of stopwords, which sklearn will then remove. I think I can smuggle the number check into the tokenization, so numbers will also be removed. Then we would also do the stopword removal ourselves, because we have to check numbers with a function and can't pass a list of all possible numbers to remove. Maybe I missed something and you can also pass a function.
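A sketch of what that could look like: CountVectorizer does accept a callable via its `tokenizer` parameter, so the number check and the stopword removal can live in one place (the stopword list and helper below are made up for illustration):

```python
import re
from sklearn.feature_extraction.text import CountVectorizer

STOPWORDS = {"the", "a", "of", "and"}  # placeholder list

def is_number(token):
    # True for tokens that parse as a single float, e.g. '+0.4' or '-25.3'.
    try:
        float(token)
        return True
    except ValueError:
        return False

def custom_tokenizer(doc):
    tokens = re.findall(r"\S+", doc.lower())
    return [t for t in tokens if t not in STOPWORDS and not is_number(t)]

# A callable tokenizer replaces the built-in token_pattern, so sklearn
# applies our filtering instead of its default regex.
vectorizer = CountVectorizer(
    tokenizer=custom_tokenizer,
    token_pattern=None,  # silence the "token_pattern unused" warning
    lowercase=False,     # lowercasing already happens in the tokenizer
)
```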

truas commented 3 years ago

Sorry for the delay, Lennart. Yes, don't overthink this. Just a regex to get rid of punctuation/numbers is enough. Essentially, the stopword removal is nothing more than a simple comprehension that checks whether a given word is in the list or not.
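In code, that suggestion amounts to something like this (placeholder stopword list):

```python
import re

STOPWORDS = {"the", "a", "of", "and"}  # placeholder list

text = "the model improved by +0.4 and -25.3 points"

# One regex pass strips punctuation and digits from the raw text ...
cleaned = re.sub(r"[^\w\s]|\d", " ", text)

# ... and stopword removal is just a comprehension over the remaining tokens.
tokens = [w for w in cleaned.split() if w not in STOPWORDS]
print(tokens)  # ['model', 'improved', 'by', 'points']
```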

trannel commented 3 years ago

For now, I removed some numbers by casting them to a float and checking whether that succeeds. Tokens like +1.23/-7.23 are not removed though, and there are also quite a few other tokens that contain just punctuation, which we might have to look at later on anyway.
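Concretely, the float-casting check behaves like this (a minimal sketch of the idea, not the repo's exact code):

```python
def is_number(token):
    # A token counts as a number iff float() can parse it in one piece.
    try:
        float(token)
        return True
    except ValueError:
        return False

print(is_number("+0.4"))         # True  -> removed
print(is_number("-25.3"))        # True  -> removed
print(is_number("+1.23/-7.23"))  # False -> slips through, as noted above
print(is_number("---"))          # False -> pure punctuation also survives
```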