entslscheia opened this issue 5 years ago
I assume you're talking about the wikitables parser? You can see the optimization that is done here: https://github.com/allenai/allennlp-semparse/blob/937d5945488a33c61d0047bd74d8106e60340bbd/allennlp_semparse/domain_languages/wikitables_language.py#L68-L102
(there are also a few other places in the constructor that are relevant, but what I highlighted is the bulk of it)
The point is that if there are no columns in the table that are dates, we shouldn't ever consider date-based actions.
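A minimal sketch of that idea, using a toy stand-in for a DomainLanguage (the class and predicates here are illustrative, not the real allennlp API beyond the add_predicate pattern): predicates are only registered when the table can actually use them, so date-based actions never enter the grammar for tables without date columns.

```python
from typing import Callable, Dict, Set

class MiniLanguage:
    """Toy stand-in for a DomainLanguage: predicates are only
    registered when the table can actually use them."""

    def __init__(self, column_types: Set[str]):
        self._functions: Dict[str, Callable] = {}
        # Always-available predicate.
        self.add_predicate('same_as', lambda row, col: row)
        # Date-based actions are only added if some column is a date,
        # mirroring the wikitables optimization linked above.
        if 'date' in column_types:
            self.add_predicate('date', lambda y, m, d: (y, m, d))

    def add_predicate(self, name: str, func: Callable) -> None:
        self._functions[name] = func

no_dates = MiniLanguage(column_types={'string', 'number'})
with_dates = MiniLanguage(column_types={'string', 'date'})
assert 'date' not in no_dates._functions
assert 'date' in with_dates._functions
```

Since grammar productions are induced from the registered predicates, skipping registration is enough to prune the corresponding actions from the search space.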
Thanks! That makes sense.

Now I am trying to define my own DomainLanguage. In my case, I have some constants of type Xi or Tuple[Xi, Xj] (I simply define a class for each Xi, where i can range from 1 to N). In other words, each constant is either unary or binary: if it is unary, its type is one of the N Xi; if it is binary, its type is a 2-tuple whose two elements each take a type from the N Xi.

Based on this, I want to define a predicate (function) that takes a unary and a binary as input, where the types of the two parameters must satisfy the constraint that the first element of the binary has the same type as the unary. For example, (a: X1, b: Tuple[X1, Xk]), (a: X1, b: Tuple[X1, Xl]), (a: X2, b: Tuple[X2, Xj]) and (a: X3, b: Tuple[X3, Xm]) are all legal inputs, while (a: X1, b: Tuple[X2, Xk]) is illegal because X1 doesn't match X2. Do you think the DomainLanguage class can meet my needs? For now, it seems like I could use something like (a: Generic[X], b: Tuple[Generic[X], Generic[Y]]) to achieve this, but I am not sure about it.
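For reference, this is roughly how standard Python typing would express that constraint with a TypeVar (this is what a static checker like mypy enforces, not something DomainLanguage's grammar induction interprets); the runtime check below is just an illustrative stand-in for the same idea:

```python
from typing import Tuple, TypeVar

X = TypeVar('X')
Y = TypeVar('Y')

class X1: pass
class X2: pass

# mypy-style signature: the first tuple element must unify with `a`.
def my_pred(a: X, b: Tuple[X, Y]) -> None:
    ...

# A runtime stand-in for the same constraint (purely illustrative):
def types_compatible(a: object, b: tuple) -> bool:
    return type(a) is type(b[0])

assert types_compatible(X1(), (X1(), X2()))      # legal
assert not types_compatible(X1(), (X2(), X2()))  # illegal
```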
So, this kind of polymorphism is not something that we handle right now. We had some crazy logic for Placeholder types in the nltk grammar code, but I didn't implement anything for this in the DomainLanguage grammar induction code. Let's say I have a function defined like this:

def my_func(a: Generic[X], b: Generic[X]) -> X:
    ...

And a type hierarchy like this:

class X:
    pass

class X1(X):
    pass

class X2(X):
    pass

What rules should be added to the grammar for my_func? How do they interact with other functions that take X, or subclasses of X? If you can define this precisely, you could make a set of test cases that would make it relatively straightforward to implement the desired behavior when you encounter Generic during grammar induction. Deciding on what exactly should happen in cases like this seems like the hard part, though. I haven't thought about this too much, and I'm not sure what the right thing to do is. It's probably easier to try to design a type system without any class hierarchies.
So let's just consider the simplest case. Say I have two classes and assume no class hierarchy:

class X1:
    pass

class X2:
    pass

I want a function my_func that takes two parameters and is only defined for the following type constraints: (a: X1, b: Tuple[X1, X1]), (a: X1, b: Tuple[X1, X2]), (a: X2, b: Tuple[X2, X1]) and (a: X2, b: Tuple[X2, X2]). Is it possible to manage this using DomainLanguage?
In fact, my need is just to define a JOIN function for my language. The join function should take inputs of type <A, B> and type <B, C>, and then return an output of type <A, C>. I think this is to some extent a common need, but right now it seems kind of convoluted to define it.

Or maybe I can try to solve it in another way: I can define multiple JOIN functions for different input types (e.g., JOIN1, JOIN2, JOIN3, ...), and then the problem becomes whether it is possible for those functions with different names to all be represented as the predicate 'JOIN' in logical forms. In the current implementation, it looks like you are using each function name directly as a predicate in the production rules. But if I could unbind a predicate from the strict function name and define its representation more freely, that would solve my problem, even if I would need to define thousands or even millions of different JOIN functions.
So, you have a few options:

1. Define separate predicates for each specific type. They could possibly all call the same underlying function, but this way they enforce proper type constraints through their production rules. You would have a separate production (and separate parameters) for each possible type instantiation.

2. Define one join() function on the base type, and register constants (and predicates that return subclasses of the base type) with multiple types, as we do for numbers and dates in the wikitables language (they both are also "comparable"). (Caveat: I don't think this currently works for functions, only for constants.) If you stop here, you at least can use functions that operate on the base type (and only one set of parameters for that function), while also working with constants that have more specific types.

3. Do what was specified in 2, but then in the GrammarStatelet keep track of which specific type was generated after the first production, and constrain the second production to match. That is, say you want to enforce a generic function on X, where all the X subclasses in a given call to the function are the same type. You'd get a production rule like bool -> [<X,X:bool>, X, X]. <X,X:bool> would be your function production (maybe <X,X:bool> -> equals), and then you have two X -> some subtree production rules to generate. After generating a specific subtype of X for the first one, you could store that type in your GrammarStatelet and constrain the allowed actions when generating the second X to match the type. Does this make sense?

It's not exactly the same, but I know that @benbogin did something somewhat similar with runtime constraints on productions in the grammar when producing joins in his text2sql work. You might have a look at how he did it in his code. He was using a parsimonious-based grammar instead of a DomainLanguage, but the modifications to the state during decoding should be similar.
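A minimal sketch of option 3, using a toy stand-in for GrammarStatelet (the real allennlp class has a different interface; everything here is illustrative): after expanding the first X to a concrete subtype, the state remembers it and filters the candidate rules for the second X.

```python
from typing import Dict, List, Optional, Tuple

# A rule is (subtype, right-hand side); e.g. ('X1', 'constant1').
Rule = Tuple[str, str]

class ToyGrammarState:
    """Toy constrained-decoding state (NOT allennlp's GrammarStatelet)."""

    def __init__(self, rules: Dict[str, List[Rule]]):
        self.rules = rules          # nonterminal -> list of productions
        self.bound_subtype: Optional[str] = None  # subtype chosen for the first X

    def actions_for(self, nonterminal: str) -> List[Rule]:
        candidates = self.rules[nonterminal]
        if nonterminal == 'X' and self.bound_subtype is not None:
            # Constrain the second X to the already-chosen subtype.
            candidates = [r for r in candidates if r[0] == self.bound_subtype]
        return candidates

    def take(self, nonterminal: str, rule: Rule) -> None:
        if nonterminal == 'X' and self.bound_subtype is None:
            self.bound_subtype = rule[0]

rules = {'X': [('X1', 'constant1'), ('X2', 'constant2')]}
state = ToyGrammarState(rules)
first = state.actions_for('X')        # both subtypes still available
state.take('X', ('X1', 'constant1'))  # first X expanded as an X1
second = state.actions_for('X')       # now only X1 rules remain
```

The real implementation would thread this extra state through the decoder's transition function, but the filtering logic is the core of the idea.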
Thanks for your explanation! So the first option will introduce multiple different predicates into my logic forms (e.g., 'JOIN1', 'JOIN2', 'JOIN3',...), right? That's what I don't want. I only want one predicate 'JOIN' in my logic forms. So maybe I should go with the second option?
Yes, the first option introduces multiple predicates. When you have just a few, this is an easy way to accomplish what you're trying to do (we've done this a few times when we had two different possible types). When you have more than 10, this is probably not a good idea (though I'm not totally sure). It's also interesting to explore how this affects learnability - it pushes the type decision higher up in the syntax tree, which may or may not be helpful (again, I'm really not sure). There are all kinds of issues around how grammar design affects parsing performance and generalizability, and I don't think anyone has explored these at all. It would make for a really interesting research paper =).
Just found another way to address my problem:
class TestLanguage(DomainLanguage):
    def __init__(self, ...):
        ...
        # `types` is the set of all possible types we want to define JOIN over
        for a in types:
            for b in types:
                for c in types:
                    def JOIN(x: Tuple[a, b], y: Tuple[b, c]) -> Tuple[a, c]:
                        ...
                    self.add_predicate('JOIN', JOIN)
        ...
This works perfectly for me. Logical forms generated from this language are naturally guaranteed to be type compatible, and there is no need to modify GrammarStatelet to do any double-checking. I think maybe you could add something like this to the DomainLanguage tutorial in the future, if you think it can be a general solution for such cases.

Edited: This causes the following problem when calling logical_form_to_action_sequence:

allennlp.semparse.common.errors.ParsingError: 'JOIN had multiple types; this is not yet supported for functions'

but it works fine for generating production rules, executing logical forms, and getting a logical form from an action sequence. So I think the only thing I need to do is make logical_form_to_action_sequence support functions with multiple types. It doesn't look that hard.
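The triple-loop trick above works because Python evaluates annotations eagerly when each def runs, so every JOIN captures the current a, b, c rather than the final loop values. A runnable sketch of just that mechanism, without allennlp:

```python
# Sketch of the triple-loop JOIN registration, standalone: annotations
# are evaluated when `def` runs, so each JOIN correctly captures the
# current a, b, c (no late-binding closure problem for annotations).
from typing import Tuple, get_type_hints

class X1: pass
class X2: pass

registered = []
types = [X1, X2]
for a in types:
    for b in types:
        for c in types:
            def JOIN(x: Tuple[a, b], y: Tuple[b, c]) -> Tuple[a, c]:
                return (x[0], y[1])
            registered.append(get_type_hints(JOIN))

# 2^3 = 8 distinct type signatures for the single name JOIN
assert len(registered) == 8
assert registered[0]['x'] == Tuple[X1, X1]
```

Note that this relies on default annotation semantics; under `from __future__ import annotations` the hints would be stored as strings and this capture would not happen the same way.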
Yes, this is related to option 2 that I mentioned above (the caveat that I gave was talking about exactly this error).

It's actually challenging to implement this, because you need to do type inference on the arguments to the function to determine which type you actually want for a particular logical form. I only did very rudimentary type inference in logical_form_to_action_sequence; you need something much more complex to handle functions with multiple possible types. If you want to give it a shot, feel free.
I guess what I don't quite get about option 2 is how we generate subtype production rules. For instance, in your example with bool -> [<X,X:bool>, X, X] and <X,X:bool> -> equals, let's say equals is defined on the base type as you suggested; then X should be the base type. What happens next is that we have to pick one production rule from X -> constant1, X -> constant2, ... for the first X, since the types the constants are registered with include the base type. But what you suggested sounds like we would first pick some X -> Xi, where Xi is a subclass of X, and then pick some Xi -> constant_j, so that we can store Xi in the GrammarStatelet to make sure we pick a compatible production rule for the second X. But how do we introduce rules like X -> Xi? I only know how rules are induced from predicates and constants, but I have no idea how to induce rules from the type hierarchy. Do I understand that correctly?
If you look in the wikitables language, we register some constants with multiple types (this currently works; it just doesn't work for functions, which are more complicated to do type inference for). So we have both a rule X -> constant1 and a rule Xi -> constant1. What you could do in the GrammarStatelet is check the subtype of constant1 when you generate the rule X -> constant1, and then change the X that's on the top of the stack (for the second argument to equals) to Xi.
That's the basic idea; I can give more detail if you need it. I also only outlined how the very simplest case would work; this could get pretty complicated if you have a whole lot of other stuff going on in your type system.
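A toy sketch of that simplest case (all names and the stack representation are illustrative, not allennlp internals): constant1 is reachable from both the base type X and the specific subtype X1, and after generating "X -> constant1" we retype the remaining X on the stack.

```python
# Each constant is registered with both the base type X and its subtype.
constant_types = {'constant1': 'X1', 'constant2': 'X2'}

# Rules induced from the constants, under both base and specific types.
rules = {
    'X': ['X -> constant1', 'X -> constant2'],
    'X1': ['X1 -> constant1'],
    'X2': ['X2 -> constant2'],
}

stack = ['X', 'X']            # the two arguments to equals
chosen = 'X -> constant1'     # rule taken for the first argument
stack.pop()

# Look up the subtype of the constant we just generated, and retype
# the remaining X on the stack so the second argument must match.
subtype = constant_types[chosen.split(' -> ')[1]]
stack = [subtype if nt == 'X' else nt for nt in stack]

assert stack == ['X1']
assert rules[stack[-1]] == ['X1 -> constant1']
```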
Hi, I have implemented a new logical_form_to_action_sequence that can support type inference for both multi-typed constants and functions. The idea is that, instead of generating the transition actions top-down, my algorithm generates the actions in a bottom-up manner. And since both constants and functions can be multi-typed now, it's possible to have multiple action sequences for a single logical form. Maybe later I can make a PR on this.
Does your function result in the same action sequence as the existing function for our current DomainLanguages? Or does it change the order? If it changes the order, you need to also implement execution and action_sequence_to_logical_form, and it might make sense to subclass DomainLanguage, or have a separate class, or something (or a constructor parameter that selects between top down and bottom up, and we still keep the same method names...?). If it doesn't change the order, then yes, a PR would be welcome.
I didn't change the order of the actions. I compared my algorithm against yours on languages with single-type functions, which yours can handle, and my algorithm generates exactly the same actions. The only difference is that, since a logical form can have different action sequences in my scenario (which does make sense when you have both multi-typed constants and functions), my return type is List[List[str]] instead of the List[str] returned by the original implementation. I think maybe I can try more test cases and refine the code a little to make a PR after the ACL deadline.
There are two kinds of actions in the allennlp semantic parsing framework, i.e., global actions and linked actions, where linked actions are instance-specific. Since global actions do not depend on a specific instance, I expected the possible global actions to be the same for all instances; however, when I printed them, I was surprised to find that global actions also vary from instance to instance. Why is that? Have you done some pruning over global actions based on each instance to optimize the search procedure?