The "UMLS full" model is certainly not the one reference in the paper. It has been trained relatively recently.
I wasn't part of the team back then, but the model they would have used would almost certainly have been a "v0.x" model. I don't think these are still publicly available anywhere. Though I could be wrong.
Thank you for the answer, @mart-r. Ah ok, I understand that those early models are no longer available. It was nice, though, to see some performance metrics for that model.
Are the currently available models validated on any annotated corpora? It would be nice to see their metrics.
I am running the models against ShARe/CLEF and MedMentions and get lower performance than the original model presented in the publication.
ShARe/CLEF:

| Model | R | P | F1 |
|---|---|---|---|
| Publication model | 0.688 | 0.796 | 0.74 |
| UMLS small | 0.47 | 0.76 | 0.58 |
| UMLS full | 0.24 | 0.67 | 0.47 |
MedMentions:

| Model | R | P | F1 |
|---|---|---|---|
| Publication model | 0.500 | 0.406 | 0.448 |
| UMLS small | 0.12 | 0.38 | 0.18 |
| UMLS full | 0.17 | 0.67 | 0.23 |
An extraction was counted as correct if the span overlapped and the CUI matched, as described in the publication: "For each manual annotation we check whether it was detected and linked to the correct Unified Medical Language System (UMLS) concept."
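Roughly, the matching I apply looks like this (a minimal sketch with a hypothetical `(start, end, cui)` layout for both gold annotations and predictions, not the exact evaluation script):

```python
def spans_overlap(a_start, a_end, b_start, b_end):
    """True if the two character spans share at least one character (half-open spans)."""
    return a_start < b_end and b_start < a_end

def score(gold_by_doc, pred_by_doc):
    """Compute recall, precision and F1.

    A prediction counts as a true positive if its span overlaps a gold span
    and the linked CUI is the same; each gold annotation can be matched once.
    """
    tp = fp = fn = 0
    for doc_id, gold in gold_by_doc.items():
        preds = pred_by_doc.get(doc_id, [])
        matched_gold = set()
        for p_start, p_end, p_cui in preds:
            hit = None
            for i, (g_start, g_end, g_cui) in enumerate(gold):
                if i not in matched_gold and p_cui == g_cui and spans_overlap(p_start, p_end, g_start, g_end):
                    hit = i
                    break
            if hit is None:
                fp += 1
            else:
                tp += 1
                matched_gold.add(hit)
        fn += len(gold) - len(matched_gold)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1
```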
What do you think is the reason for this drop in performance?
The model(s) used in the paper were trained in a self-supervised as well as a supervised capacity. However, no supervised training was performed for the models publicly available through the link in the README. This may not account for all of the performance drop, but probably most of it.
The MedMentions model that's used within the tutorials (e.g. Part 3.2) did receive some supervised training (on the MedMentions dataset). You can try that one as well, but it also won't be identical to the models within the paper.
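For reference, loading one of the public model packs and extracting entities looks roughly like this (a minimal sketch: the model pack path is a placeholder, and the output keys reflect MedCAT 1.x and may differ between versions):

```python
from medcat.cat import CAT

# Load a downloaded model pack (placeholder path) and extract entities from a text.
cat = CAT.load_model_pack("path/to/medmentions_model_pack.zip")

text = "The patient was diagnosed with type 2 diabetes mellitus."
result = cat.get_entities(text)

# 'entities' maps entity ids to dicts with keys such as 'cui', 'start',
# 'end' and 'source_value' (exact keys depend on the MedCAT version).
for ent in result["entities"].values():
    print(ent["cui"], ent["start"], ent["end"], ent["source_value"])
```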
As with most machine learning models, a model is best at the task it's produced to perform. These public models weren't produced to break any performance records. They are there for use case demonstration.
Though in general, for discussion, I recommend the Discourse forum: https://discourse.cogstack.org/ Not many people monitor the GitHub repo issues. There is also already some discussion there about the public models and their training.
Hi,
Very promising work; I have a question related to the NER+L validation of MedCAT.
Is the model used in your publication (https://arxiv.org/abs/2010.01165) for the NER+L validation on ShARe/CLEF and MedMentions the same as the "UMLS full" model that is available for download through the shared link? Or, was it fine-tuned further to attain the results presented in the publication?
The paper: "We train MedCAT self-supervised on MIMIC-III configured with the UMLS database." And the link in the readme: "UMLS Full. >4MM concepts trained self-supervsied on MIMIC-III. v2022AA of UMLS."