allenai / allennlp-semparse

A framework for building semantic parsers (including neural module networks) with AllenNLP, built by the authors of AllenNLP
Apache License 2.0

Why do you use type embedding instead of entity embedding in wikitable decoder? #12

Closed: entslscheia closed this issue 4 years ago

entslscheia commented 4 years ago

According to the paper Neural Semantic Parsing with Type Constraints for Semi-Structured Tables by Jayant Krishnamurthy et al., the embedding for each entity (i.e., table constant) is computed from its neighbors and its type. However, during decoding, if the parser outputs an action that selects some entity at a given step, the input to the next step is the embedding of the *type* of the selected entity. Why don't you directly use the embedding of the specific entity as the input? Using a separate type embedding here feels inconsistent.

Also, isn't a type embedding alone too coarse? It means that different entities of the same type lead to exactly the same input embedding at the next decoding step. Intuitively, it seems essential to know which exact entity was selected in the previous step, not just its type. For example, selecting column_a and selecting column_b should have different effects on the subsequent predictions.
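To make the two options concrete, here is a minimal PyTorch sketch of the next-step decoder input under each choice. All names here (`DecoderInputSelector`, `num_types`, `use_entity_embedding`, etc.) are hypothetical illustrations, not the actual allennlp-semparse API:

```python
import torch
from torch import nn


class DecoderInputSelector(nn.Module):
    """Sketch of the two possible inputs for the step after an entity is selected."""

    def __init__(self, num_types: int, embedding_dim: int, use_entity_embedding: bool):
        super().__init__()
        # One learned vector per entity *type* (e.g. string-column, number-column, cell).
        self.type_embeddings = nn.Embedding(num_types, embedding_dim)
        self.use_entity_embedding = use_entity_embedding

    def forward(
        self,
        entity_embeddings: torch.Tensor,  # (num_entities, embedding_dim), e.g. computed
                                          # from each entity's neighbors and its type
        entity_types: torch.Tensor,       # (num_entities,) long tensor of type ids
        selected_entity: int,             # index of the entity chosen at the last step
    ) -> torch.Tensor:
        if self.use_entity_embedding:
            # Alternative raised in this issue: feed the specific entity's
            # embedding, so selecting column_a vs. column_b changes the input.
            return entity_embeddings[selected_entity]
        # Behavior described in the paper: feed only the selected entity's type
        # embedding, so all entities of one type yield the same next-step input.
        return self.type_embeddings(entity_types[selected_entity])


# Toy usage: two columns of the same type get identical next-step inputs under
# the type-embedding scheme but distinct inputs under the entity-embedding scheme.
entity_embeddings = torch.randn(2, 8)  # column_a, column_b
entity_types = torch.tensor([0, 0])    # both are, say, string columns

type_based = DecoderInputSelector(num_types=3, embedding_dim=8, use_entity_embedding=False)
assert torch.equal(
    type_based(entity_embeddings, entity_types, 0),
    type_based(entity_embeddings, entity_types, 1),
)

entity_based = DecoderInputSelector(num_types=3, embedding_dim=8, use_entity_embedding=True)
assert not torch.equal(
    entity_based(entity_embeddings, entity_types, 0),
    entity_based(entity_embeddings, entity_types, 1),
)
```

Under the paper's choice, column_a and column_b collapse to the same input vector; the alternative keeps them distinguishable, at the cost of depending on the quality of the linking-based entity embeddings.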

matt-gardner commented 4 years ago

I don't remember why Jayant decided to design the model that way. There were a number of small decisions like this in the original parser that seem suboptimal in hindsight; feel free to try something different and see what happens!

entslscheia commented 4 years ago

> I don't remember why Jayant decided to design the model that way. There were a number of small decisions like this in the original parser that seem suboptimal in hindsight; feel free to try something different and see what happens!

I see. Thanks for the reply!