eg-nlp-community / nlp-reading-group


[12/04/2020] 6pm GMT+2 - Language Models as Knowledge Bases? #7

Closed Omarito2412 closed 4 years ago

Omarito2412 commented 4 years ago

Join us for our discussion of "Language Models as Knowledge Bases?" on Sunday, the 12th of April. Paper link: https://arxiv.org/abs/1909.01066

Hangout: https://hangouts.google.com/group/kUxBAunjGittAkBUA

hadyelsahar commented 4 years ago

Doing my HW a bit early :)

I like the work; it seems they did the obvious thing to do. My only criticisms:

Follow-up work:

- Negated LAMA: Birds cannot fly (https://arxiv.org/abs/1911.03343)
- BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA (https://arxiv.org/abs/1911.03681)

ibrahimsharaf commented 4 years ago
Omarito2412 commented 4 years ago

I like this paper's analysis; I think it's straightforward. What they're trying to do is query pre-trained LMs with cloze-style statements built from fact triples (e.g., "Dante was born in [MASK].") and check whether the model predicts the correct object, effectively probing the LM as a knowledge base.

The authors do not mention this, but this task can be thought of from two directions:

  1. Whether pre-trained LMs are good at retrieving facts from unstructured text
  2. Or whether pre-trained LMs can perform inference over the facts they find in unstructured text
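
To make the probing setup concrete, here is a minimal sketch of a cloze-style query, assuming a recent version of the HuggingFace `transformers` library; the query sentence follows the paper's "Dante was born in [MASK]" example, but the code itself is my own illustration, not the authors' probe implementation.

```python
# Treat a masked LM as a knowledge base: mask the object of a fact
# triple and check whether the model recovers it among its top guesses.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

query = "Dante was born in [MASK]."
for prediction in fill_mask(query, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```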

I think the paper's analysis is more aligned with point 1. The authors state a few constraints they face in their analysis, such as the models being trained on different vocabularies and how parameters like beam size can affect model predictions; that's why they try to relax as many of these constraints as possible.
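
One of those constraints, evaluating every model over a shared candidate vocabulary, can be sketched roughly as below; the `model_vocabs` contents are made up for illustration and are not the paper's actual vocabularies.

```python
# Restrict predictions to tokens all compared models share, so that
# Precision@k is computed over the same candidate set for every model.
from functools import reduce

# Hypothetical per-model vocabularies (illustrative only).
model_vocabs = {
    "bert-base-cased": {"Paris", "Rome", "Florence", "London"},
    "gpt2": {"Paris", "Rome", "Florence", "Berlin"},
}

shared_vocab = reduce(set.intersection, model_vocabs.values())

def restrict_predictions(scored_tokens, vocab=shared_vocab):
    """Keep only (token, score) pairs whose token is in the shared vocabulary."""
    return [(tok, score) for tok, score in scored_tokens if tok in vocab]

print(restrict_predictions([("Florence", 0.61), ("Berlin", 0.20)]))
# -> [('Florence', 0.61)]
```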

They show that there's a correlation between subject and entity mention counts and Precision@1 for BERT, which can be interpreted as: BERT has already seen the given examples before, which makes this more of a retrieval assessment than an inference task.
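
A rough sketch of the kind of check behind that claim: compute Precision@1 over a set of facts and correlate it with how often each fact's subject is mentioned in the pre-training corpus. The numbers below are invented purely to show the shape of the computation.

```python
# Correlate per-fact top-1 correctness with subject mention frequency.
from scipy.stats import pearsonr

# (subject_mention_count, top-1 prediction was correct) -- illustrative data.
facts = [
    (1200, True),
    (35, False),
    (980, True),
    (12, False),
    (450, True),
]

counts = [c for c, _ in facts]
hits = [1.0 if correct else 0.0 for _, correct in facts]

precision_at_1 = sum(hits) / len(hits)
correlation, _ = pearsonr(counts, hits)
print(f"P@1 = {precision_at_1:.2f}, correlation with mention count = {correlation:.2f}")
```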

I'm interested in reading more work that builds upon this analysis, and I'm sticking to the assumption that the analysis is to assess the retrieval capabilities of pre-trained LMs from text corpora.