MaartenGr / BERTopic

Leveraging BERT and c-TF-IDF to create easily interpretable topics.
https://maartengr.github.io/BERTopic/
MIT License

the 6 steps of BERTopic #2204

Open TalaN1993 opened 2 weeks ago

TalaN1993 commented 2 weeks ago

Have you searched existing issues? 🔎

Describe the bug

Hello,

I have a question. According to the documentation, I understand that BERTopic consists of six steps, with the representation tuning step being optional. I have read many articles in a specific field that use BERTopic, but not all of them describe the remaining five steps. For example, some articles only mention embedding, dimensionality reduction, clustering, and the weighting scheme (c-TF-IDF). I'd like to know whether individual steps can be omitted, or whether using all five remaining steps is strictly required.

Reproduction

from bertopic import BERTopic

BERTopic Version

0.16.3

MaartenGr commented 2 weeks ago

For example, some articles only mention embedding, dimensionality reduction, clustering, and the weighting scheme (c-TF-IDF).

These are actually five steps:

1. Embedding documents
2. Dimensionality reduction
3. Clustering
4. Tokenization
5. Weighting scheme (c-TF-IDF)

Although tokenization isn't mentioned, it is definitely used.

Typically, you would see those five steps together with the optional representation step. If you want to remove a step, the only one you could potentially remove is the dimensionality reduction step. All the others are needed.
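To make the modularity concrete, here is a minimal sketch that passes each of the five sub-models explicitly (the specific model choices below are just common defaults, not a recommendation):

from bertopic import BERTopic
from bertopic.vectorizers import ClassTfidfTransformer
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP
from hdbscan import HDBSCAN

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")  # 1. embed documents
umap_model = UMAP(n_components=5, random_state=42)         # 2. dimensionality reduction
hdbscan_model = HDBSCAN(min_cluster_size=15)               # 3. clustering
vectorizer_model = CountVectorizer(stop_words="english")   # 4. tokenization
ctfidf_model = ClassTfidfTransformer()                     # 5. weighting scheme (c-TF-IDF)

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    vectorizer_model=vectorizer_model,
    ctfidf_model=ctfidf_model,
)

# To drop step 2, an empty reducer can be passed instead:
# from bertopic.dimensionality import BaseDimensionalityReduction
# umap_model = BaseDimensionalityReduction()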

Many papers just implement the default BERTopic pipeline and compare against that, which is a shame considering that representation models often improve the output significantly. I can't speak to their reasoning, but I wish the representation step were included more often.
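Adding it is a small change on top of the defaults; a sketch with two of the built-in options:

from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired, MaximalMarginalRelevance

# 6. (optional) fine-tune the topic representations
representation_model = KeyBERTInspired()  # or: MaximalMarginalRelevance(diversity=0.3)
topic_model = BERTopic(representation_model=representation_model)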

TalaN1993 commented 2 weeks ago

Thank you so much for your help and guidance.

TalaN1993 commented 2 weeks ago

Hello MaartenGr,

In my case, I used all six steps with three different representation models (GPT-3.5, MMR, and KeyBERT), keeping the other five steps the same. I evaluated the results using OCTIS NPMI and topic diversity, but the results were somewhat different from what I expected. Do you think this makes sense?

with GPT-3.5: NPMI 0.1267, diversity 0.9851
with MMR: NPMI 0.2625, diversity 0.7263
with KeyBERT: NPMI 0.3027, diversity 0.6421

I had expected the NPMI value for GPT-3.5 to be higher.
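For reference, a simplified sketch of the OCTIS evaluation setup (assuming a fitted topic_model, the tokenized corpus tokenized_docs, and a top-10 keyword cutoff):

from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# Top-10 keywords per topic, excluding the -1 outlier topic
topic_words = [
    [word for word, _ in words[:10]]
    for topic, words in topic_model.get_topics().items()
    if topic != -1
]
model_output = {"topics": topic_words}

npmi = Coherence(texts=tokenized_docs, topk=10, measure="c_npmi")
diversity = TopicDiversity(topk=10)
print("NPMI:", npmi.score(model_output))
print("Diversity:", diversity.score(model_output))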

MaartenGr commented 2 weeks ago

It may be worthwhile to do a deep dive into how topic coherence (and diversity) metrics work. They assume we have a list of keywords as the main representation of a topic. This is true for MMR and KeyBERT, but not for GPT-3.5, since it only generates a single label rather than a mixture of words.
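If you want both, you can keep a keyword-based main representation and attach the GPT label as an extra aspect, so the metrics still have keywords to score. A rough sketch (the exact OpenAI constructor arguments depend on your BERTopic and openai versions):

import openai
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired, OpenAI

client = openai.OpenAI(api_key="sk-...")  # placeholder key
representation_model = {
    "Main": KeyBERTInspired(),                                 # keyword list -> what NPMI/diversity score
    "GPT": OpenAI(client, model="gpt-3.5-turbo", chat=True),   # label kept as an extra aspect
}
topic_model = BERTopic(representation_model=representation_model)

# get_topics() keeps the keyword-based "Main" representation,
# while get_topic_info() also lists the "GPT" aspect labels.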