Closed Karol-G closed 3 years ago
Hi Karol,
How many topics have been created? If only a few topics were created, this results in poor topic representations. I think it is worthwhile to decrease `min_topic_size` to 10, as that is likely to create more topics. This will also allow better topic representations to be created.
Also, BERTopic v0.4 just came out and contains significant improvements. I think it would be worthwhile to upgrade if you haven't done so already! You can find more extensive tutorials for BERTopic in the updated documentation here.
Hi,
thanks for the quick reply! I will update to the new version and decrease `min_topic_size` to 10.
> I think it is worthwhile to decrease `min_topic_size` to 10, as that is likely to create more topics
But why would it create more topics if I decrease the minimum number of topics? Wouldn't this result in fewer topics?
Best Karol
Great, let me know if you get different results. With `min_topic_size` we do not decrease the minimum number of topics, but their minimum size. If you lower this value, smaller topics can be created, which allows more topics to be created. I had it set way too high in the previous version of BERTopic, which typically resulted in fewer than 10 topics, while in practice you often see more than 50.
Ah, I misread the parameter name the entire time >.< It makes sense now ;) Thanks again!
Hi again,
I finally had time to test your new version. The results are much better now, but this could also be because I forgot to sort them by frequency in my first post >.<
Here are the top 20 topics:
[('three', 0.013054170945547257), ('threedimensional', 0.006332323651834506), ('theory', 0.004608694671529709), ('threebody', 0.0038531703016460917), ('dimensions', 0.003651438206073678), ('scattering', 0.003253732764247434), ('quantum', 0.0032394830436024377), ('spin', 0.0031230529976747625), ('gravity', 0.003063954135957652), ('space', 0.0030414565055789785)]
[('graphene', 0.07436791552953284), ('electronic', 0.011277199258163029), ('bilayer', 0.010491573310948656), ('nanoribbons', 0.007713373553312529), ('graphite', 0.006762533556490666), ('layer', 0.006583893115193778), ('electron', 0.006568532342191205), ('carbon', 0.006369050510125981), ('magnetic', 0.0061219933246205605), ('layers', 0.005525049121293272)]
[('three', 0.013689297571895319), ('3manifold', 0.008894316411136385), ('3manifolds', 0.008129555355640364), ('hyperbolic', 0.007593168927445724), ('manifolds', 0.006493555788642359), ('manifold', 0.0063464379295909415), ('algebra', 0.00530003016046757), ('dimension', 0.00509895596685615), ('threefolds', 0.004905792710294163), ('algebras', 0.0048867687578040145)]
[('magnetic', 0.013151799166270458), ('superconducting', 0.011056026119834429), ('superconductivity', 0.010798289838756974), ('measurements', 0.006968743474354005), ('crystals', 0.006860577347932697), ('magnetization', 0.0068034465649972654), ('compounds', 0.006451027082081614), ('crystal', 0.006135389676944634), ('superconductors', 0.006120444728472715), ('structural', 0.0060716763341440326)]
[('graph', 0.05730184149686418), ('graphs', 0.045628145142319054), ('vertices', 0.026819255545078042), ('vertex', 0.018389062637356766), ('edges', 0.01502037677020415), ('subgraph', 0.007530116720545857), ('algorithm', 0.007406410476551838), ('coloring', 0.006973399728935682), ('connected', 0.006774455367355813), ('trees', 0.006569312817790809)]
[('string', 0.016074269579657928), ('gauge', 0.013872350267778905), ('four', 0.01172086323512176), ('theories', 0.009231287482594374), ('n4', 0.009124189000119019), ('dimensions', 0.008111317974220343), ('fourdimensional', 0.007788796978357288), ('supersymmetric', 0.0066570476967985565), ('4d', 0.005515783327101184), ('dimensional', 0.005240544660597792)]
[('regression', 0.015793994072434116), ('estimator', 0.012110553516710846), ('estimation', 0.010565802980851507), ('estimators', 0.009744969913795903), ('likelihood', 0.008671026207819644), ('distribution', 0.007834517888185543), ('inference', 0.007831399683834164), ('sampling', 0.00614168314645444), ('probability', 0.0056646979082951516), ('sample', 0.005618290983450382)]
[('condensate', 0.03563380736937094), ('boseeinstein', 0.03281346083163922), ('bose', 0.024604853303908294), ('condensates', 0.01590529015896842), ('condensation', 0.010063618540648899), ('atoms', 0.009020629389566267), ('solitons', 0.007574204763128568), ('gases', 0.00750277997557609), ('atomic', 0.005728053771570647), ('soliton', 0.005496361596549815)]
[('wireless', 0.043265644969453974), ('network', 0.026624172093601146), ('networks', 0.022602367756822363), ('nodes', 0.020314901727286453), ('routing', 0.012198552511664502), ('relay', 0.011443673627123568), ('transmission', 0.010778962688679476), ('protocols', 0.007792765016682918), ('coding', 0.007262368994436775), ('packet', 0.007088404364556387)]
[('algebra', 0.009508399679971644), ('spaces', 0.00874683359537929), ('algebras', 0.008621164547943608), ('finite', 0.00766028321891464), ('manifolds', 0.007345115657771375), ('finitely', 0.006817928828699886), ('manifold', 0.006794802606806686), ('metric', 0.005758919093004792), ('theorem', 0.005549398118729717), ('cohomology', 0.005329239199870784)]
[('financial', 0.02778448518234804), ('stock', 0.021005221083078732), ('asset', 0.014279006519891203), ('portfolio', 0.013997439694449176), ('pricing', 0.013211643593096296), ('trading', 0.013156962940594492), ('investment', 0.008495616180998393), ('stocks', 0.008361230495723947), ('assets', 0.008236565953627702), ('insurance', 0.005971728435419685)]
[('interference', 0.024713428079488675), ('receiver', 0.013857175281490372), ('transmitter', 0.01360526856123733), ('transmit', 0.012691929298860245), ('coding', 0.011635803821757558), ('transmission', 0.011566654683392375), ('antennas', 0.01123231845041445), ('broadcast', 0.008593780357944728), ('beamforming', 0.008552448819680583), ('receivers', 0.008459943014328328)]
[('hubbard', 0.053493199507069066), ('lattice', 0.010758592075718717), ('fermi', 0.0072053076967158935), ('antiferromagnetic', 0.007181871258321772), ('insulator', 0.007051933547436705), ('approximation', 0.006036515701808201), ('coupling', 0.00588917432860099), ('halffilling', 0.0057487305439479375), ('interactions', 0.005587492856648944), ('correlations', 0.0055763764430453375)]
[('function', 0.008651916818150886), ('denote', 0.007809245922609828), ('set', 0.007478684066622141), ('integer', 0.007374301037950408), ('integers', 0.0069542814232360205), ('omega', 0.006667642786285545), ('mathbb', 0.0060961646394778754), ('alpha', 0.006039583618754279), ('functions', 0.006017594883429485), ('bounded', 0.005814677932831883)]
[('inflation', 0.05931669348358714), ('inflationary', 0.02232483204321207), ('universe', 0.013689775008603551), ('perturbations', 0.012286552051758052), ('cosmological', 0.010828376490552719), ('slowroll', 0.009469809586834922), ('gravitational', 0.009136121565873076), ('gravity', 0.0071959489811766865), ('fluctuations', 0.006429827629092597), ('hubble', 0.005171651035312848)]
[('planets', 0.046814197426740575), ('planet', 0.03501459176392504), ('planetary', 0.018808237468975954), ('orbits', 0.011441766784746725), ('orbit', 0.008467096082013886), ('jupiter', 0.0077181769914923416), ('planetesimals', 0.0073672413745058335), ('asteroid', 0.0067335901530106036), ('exoplanets', 0.006022745742365477), ('asteroids', 0.00585370806168899)]
[('solar', 0.039421087365066554), ('sun', 0.011732411452529146), ('sunspot', 0.010430314535179447), ('photosphere', 0.009067243547239295), ('photospheric', 0.00865958912310833), ('convection', 0.008064198950269004), ('atmosphere', 0.0072483168826311725), ('observations', 0.006876844797981044), ('chromosphere', 0.006262490703920049), ('heating', 0.006055562210536541)]
[('observations', 0.008699188735815778), ('galaxies', 0.008407485953038762), ('galaxy', 0.008073430332696623), ('luminosity', 0.007162840940254354), ('stars', 0.006966175063807207), ('stellar', 0.005349763782737194), ('telescope', 0.005287464577458983), ('spectra', 0.005136408088761508), ('optical', 0.005035509490418128), ('ngc', 0.004829665052611631)]
[('neurons', 0.038221238895353046), ('brain', 0.02920640451360389), ('neural', 0.02446339583121359), ('neuronal', 0.01643279040940504), ('spike', 0.014812396510321582), ('neuron', 0.012965641652609569), ('cortex', 0.007331964185225165), ('spikes', 0.006943257568279404), ('cells', 0.00632549932973966), ('fmri', 0.005463485659527367)]
[('algebras', 0.029831387237084984), ('algebra', 0.024734690221066107), ('homotopy', 0.015433122770034542), ('cohomology', 0.015197981658463339), ('sheaves', 0.009816480887828655), ('algebraic', 0.009377566200273487), ('functors', 0.00822133504218289), ('complexes', 0.006714333078347921), ('theorem', 0.006619095606144724), ('groupoids', 0.006472502953402093)]
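The listing above is sorted by topic frequency. Given the per-document topic assignments that `fit_transform` returns, that ordering can be recovered with a plain `Counter` (a sketch with made-up topic IDs):

```python
from collections import Counter

# topics: one topic id per document, -1 = outliers (made-up example data)
topics = [0, 1, 0, -1, 2, 0, 1, -1, 0, 2, 1, 0]

# Most frequent topics first, ignoring the -1 outlier bucket
freq = Counter(t for t in topics if t != -1)
print(freq.most_common())  # → [(0, 5), (1, 3), (2, 2)]
```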
One thing I noticed is that HDBSCAN clustering with the abstracts takes about 1 day (which is OK), but when taking only the titles (which are much shorter than the abstracts) the clustering takes more than a week (I aborted it, so it could be even longer). I tried this twice, and both times it seemed to run forever. Do you have an idea why shorter texts take so much longer than longer ones?
Best Karol
Great! The topic representations definitely seem much better now. I am not entirely sure, but calculating the probabilities might actually be an inefficient step that increases the computation time. Have you tried setting `calculate_probabilities` to False? This could speed up the model significantly!
I will set `calculate_probabilities` to False and see what happens. What is `calculate_probabilities` used for? Isn't it used for calculating the top n words describing a topic, or am I mistaken?
No, it is used to find the soft-clustering output from HDBSCAN. Calculating the probabilities, especially when you have hundreds of topics and millions of documents, is likely to take quite a while to finish. Setting it to False skips this step entirely, so computation will be faster.
Sorry for the late reply. It is really fast now, thanks!
Hi,
thanks for your amazing work! However, I currently still have some problems getting good results. I want to use BERTopic on the Kaggle arXiv abstract dataset: https://www.kaggle.com/Cornell-University/arxiv It is a dataset that contains the abstract of each paper on arXiv, 1,796,908 abstracts in total, but due to hardware constraints I am using only a quarter of them, i.e. 449,227 abstracts. The raw data is a list of dicts, with each dict containing fields such as author, title, and abstract, but I am only using the abstracts themselves. My current results are sadly not what I expected. Here is the output of `model.get_topics()`:

As you can see, the extracted topics are kind of bad and not what I had hoped for. Can you give me some advice on why this is not working and what I should fine-tune?
Best Karol
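For context, the abstracts in question can be pulled out of the Kaggle metadata dump along these lines (a sketch: the JSON-lines layout and the `abstract` field follow the dataset's description, while the filename is an assumption and should match your local copy):

```python
import json

def load_abstracts(path, limit=None):
    """Read one JSON record per line and collect the abstract fields."""
    abstracts = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            record = json.loads(line)
            abstracts.append(record["abstract"].strip())
    return abstracts

# Filename is an assumption; limit caps the run at a quarter of the corpus.
# abstracts = load_abstracts("arxiv-metadata-oai-snapshot.json", limit=449227)
```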