Open aaron-imani opened 1 year ago
You do not seem to be loading the embedding backend the same way between saving and loading the topic model. Could you apply it in the same way and try it out again? Also, could you show an example of the issue you experience with `find_topics`? It would help in understanding the issue.
I tried loading it in the same way as well, but it still shows the same behavior:
```python
from transformers import pipeline
from bertopic import BERTopic

# Re-create the same feature-extraction backend used at save time
embedding_model = 'microsoft/codebert-base'
hf_pipeline = pipeline('feature-extraction', embedding_model)

# Pass the backend explicitly when loading
topic_model = BERTopic.load("models/codebert", embedding_model=hf_pipeline)
```
Here is an example of trying different keywords:
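For instance, querying with several unrelated terms like this (the terms below are just stand-ins for the ones I actually tried):

```python
# Every query returns essentially the same topic ids with nearly
# identical, very high similarity scores
for term in ["bug fix", "unit test", "documentation"]:
    topics, sims = topic_model.find_topics(term, top_n=5)
    print(term, topics, [round(s, 3) for s in sims])
```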
There are two problems with the output:
1. The similarity scores are very high across all topics (it shouldn't be like that based on my experiments with sentence transformers).
2. Although the topics within the topic model are coherent, the returned topics are irrelevant to the search query.
> 1. The similarity scores are very high across all topics (it shouldn't be like that based on my experiments with sentence transformers).
Ah, now I understand! It is actually expected behavior: the models within sentence-transformers are optimized for similarity tasks, while regular BERT-like models are not and will typically output very high similarity scores. That is also the reason why sentence-transformers models are the default in BERTopic; they outperform regular BERT models by a very large margin. An overview of very strong models to be used in BERTopic can be found here.
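As an illustration, a minimal sketch of switching to a sentence-transformers backend (`all-MiniLM-L6-v2` is just one common choice; any model from the overview can be substituted):

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# sentence-transformers models are optimized for similarity tasks,
# so find_topics returns far more discriminative scores
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
topic_model = BERTopic(embedding_model=embedding_model)
```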
I see. Thank you for your guidance! Should I look at the "Clustering" section of the provided link for the comparison? Which tab includes the appropriate comparison?
You would generally look at the clustering tab.
Hello, I have fit a BERTopic model using cuML's HDBSCAN and UMAP. I used `microsoft/codebert-base` from Hugging Face as the embedding model, roughly as in the sketch below. That code runs without any problems; the sketch also shows how I save and load the model.
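A minimal sketch of the setup (the hyperparameter values and the `docs` variable are placeholders, not the original values):

```python
from bertopic import BERTopic
from transformers import pipeline
from cuml.cluster import HDBSCAN
from cuml.manifold import UMAP

# Hugging Face feature-extraction pipeline as the embedding backend
embedding_model = 'microsoft/codebert-base'
hf_pipeline = pipeline('feature-extraction', embedding_model)

# GPU-accelerated dimensionality reduction and clustering via cuML;
# the parameter values below are placeholders
umap_model = UMAP(n_components=5, n_neighbors=15, min_dist=0.0)
hdbscan_model = HDBSCAN(min_samples=10, min_cluster_size=50, prediction_data=True)

topic_model = BERTopic(
    embedding_model=hf_pipeline,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
)
topics, probs = topic_model.fit_transform(docs)  # docs: list of documents
topic_model.save("models/codebert")

# Loading; per the first reply, the original load call may not have
# re-attached the embedding backend
topic_model = BERTopic.load("models/codebert")
```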
Although invoking the `get_topic_info` method on `topic_model` returns a list of meaningful topics, when I call `find_topics`, the same list of topics is returned with almost the same similarity scores no matter what the search term is. The same thing happened when I used another Hugging Face model. Are there any potential workarounds?