HamedBabaei / LLMs4OL

LLMs4OL: Large Language Models for Ontology Learning
MIT License

Experiments on BERT-Large for baseline model creation #3

Closed HamedBabaei closed 1 year ago

HamedBabaei commented 1 year ago

Initial tasks:

We are going to test several templates on each dataset to identify the best-performing one. The templates are:

Wordnet templates:

UMLS templates: For all three datasets: [MEDICIN, USMODE, SNOMEDCT_US]

Geonames templates

For the Geonames [sentence], we can use the more generic template that we designed: "[NAME] is a place in [COUNTRY]."
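As a minimal sketch of how such cloze-style templates are instantiated per entity (the function name and slot dictionary are illustrative assumptions, not code from this repository):

```python
def fill_template(template: str, slots: dict) -> str:
    """Replace each [SLOT] placeholder in the template with its value.

    Illustrative helper, not taken from the LLMs4OL codebase.
    """
    sentence = template
    for slot, value in slots.items():
        sentence = sentence.replace(f"[{slot}]", value)
    return sentence

# The generic Geonames template from the discussion above:
geonames_template = "[NAME] is a place in [COUNTRY]."

print(fill_template(geonames_template, {"NAME": "Jena", "COUNTRY": "Germany"}))
# -> Jena is a place in Germany.
```

The filled sentence would then be scored by the language model (e.g. with a masked slot) to probe what the model knows about the entity.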

We are interested in this kind of template because of the following reasons:

Tasks are categorized into the following categories:

HamedBabaei commented 1 year ago

For the record (based on our discussion):

HamedBabaei commented 1 year ago

Precision@k and Recall@k Evaluation Metrics:

Two quantities measure how well a search engine keeps this promise. Precision is the fraction of returned results that are relevant. Recall is the fraction of all relevant results that are returned. In other words, a recall of 1 means the results include the whole truth, while a precision of 1 means the results include nothing but the truth. The @k variants apply the same definitions after truncating the ranked result list to its top k entries.
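The definitions above can be sketched in a few lines (this is an illustrative implementation, not code from this repository):

```python
def precision_at_k(ranked: list, relevant: set, k: int) -> float:
    """Fraction of the top-k ranked results that are relevant."""
    top_k = ranked[:k]
    if not top_k:
        return 0.0
    return sum(1 for r in top_k if r in relevant) / len(top_k)

def recall_at_k(ranked: list, relevant: set, k: int) -> float:
    """Fraction of all relevant items that appear in the top-k results."""
    if not relevant:
        return 0.0
    top_k = ranked[:k]
    return sum(1 for r in top_k if r in relevant) / len(relevant)

# Toy example: 4 ranked type predictions, 2 of which are relevant.
ranked = ["noun", "verb", "adjective", "adverb"]
relevant = {"noun", "adverb"}
print(precision_at_k(ranked, relevant, 2))  # 0.5 (one of the top-2 is relevant)
print(recall_at_k(ranked, relevant, 2))     # 0.5 (one of two relevant items retrieved)
```

Note that for small k, recall@k is bounded by k divided by the number of relevant items, so the two metrics trade off as k varies.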

HamedBabaei commented 1 year ago

Hi @jd-coderepos, The first results on WordNet Dataset are available here:

https://github.com/HamedBabaei/LLMs4OL/blob/main/results/WN18RR/01_BERT_Large_without_finetuning.log.txt

HamedBabaei commented 1 year ago