We have a "hidden demo" available at http://demo.allennlp.org/wikitables-parser/MzAyNzMw.
@matt-gardner do you have any ideas for roughly how we could provide a better visualization here?
@matt-gardner is what's showing up in the "answer" box actually what you want to see for the answer? I'm a bit confused by the output.
Yeah, something about the executor broke between when I originally wrote the demo and when it was put in here. The answers are currently all wrong.
But other than that, the content shown in the demo is roughly the right thing to show, I think; it could just be presented in a nicer way. I'm not really sure about designs for how to make it look better, though.
Oh, another important issue: one of the inputs is a table. We're currently hacking something together to get a table input to the model, using a big text field, but if there were some nicer way of doing this, that'd be much better.
And one important use case that should still work with any redesign of the input is copying and pasting a table from this site: https://nlp.stanford.edu/software/sempre/wikitable/viewer/#203-35.
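For what it's worth, a table copied out of a browser page like that usually lands as tab-separated cells with one row per line, so a first pass at parsing the pasted text could be as small as the sketch below (the function name and format assumption are mine, not what the demo currently does):

```python
# A minimal sketch of turning pasted table text into a header and rows.
# Assumption: cells are tab-separated and rows are newline-separated,
# which is what you typically get when copying a table out of a browser.
def parse_pasted_table(text: str):
    lines = [line for line in text.strip().split("\n") if line.strip()]
    rows = [line.split("\t") for line in lines]
    return rows[0], rows[1:]

header, body = parse_pasted_table("Season\tPosition\n2001\t1st\n2002\t2nd")
print(header)  # ['Season', 'Position']
print(body)    # [['2001', '1st'], ['2002', '2nd']]
```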
@matt-gardner thanks--that gives me enough to work on for now. Just wanted to know whether you had something particular and different in mind.
@matt-gardner @aaronsarnat and I are pretty unsure how to best visualize the logical forms coming out of the semantic parser. We would like to start by improving how they look and ignore the table input for now.
For reference, here's one output example that's also "formatted" by me (potentially incorrectly).
```
((reverse fb:cell.cell.date) ((reverse fb:row.row.season) (fb:row.row.position fb:cell.1st)))
```

```
(
  (reverse fb:cell.cell.date)
  (
    (reverse fb:row.row.season)
    (fb:row.row.position fb:cell.1st)
  )
)
```
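As an aside, getting consistent indentation programmatically is not much code. Here's a rough sketch (this isn't from the demo; it puts every token on its own line, and a real formatter would want smarter line-breaking than this):

```python
# A rough sketch of pretty-printing the parser's lisp-style logical forms:
# tokenize the s-expression, then indent one level per nesting depth.
def tokenize(logical_form: str):
    return logical_form.replace("(", " ( ").replace(")", " ) ").split()

def pretty_print(logical_form: str, indent: str = "  ") -> str:
    out, depth = [], 0
    for token in tokenize(logical_form):
        if token == "(":
            out.append("\n" + indent * depth + "(")
            depth += 1
        elif token == ")":
            depth -= 1
            out.append("\n" + indent * depth + ")")
        else:
            out.append("\n" + indent * depth + token)
    return "".join(out).strip()

lf = ("((reverse fb:cell.cell.date) ((reverse fb:row.row.season) "
      "(fb:row.row.position fb:cell.1st)))")
print(pretty_print(lf))
```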
One question we had was whether hierplane could visualize this. Its pitch is that it can visualize any tree structure, but I'm not sure this is the best fit. If it is, though, it would be a fast first pass.
Do you have some time to follow up with Aaron? I'm happy to join if it would be helpful.
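For reference, if hierplane did turn out to be a fit, my guess at a payload for the logical form above would look something like the sketch below. The field names ("text", "root", "word", "nodeType", "children") are assumptions based on how other AllenNLP demos feed hierplane, not something I've verified against its docs, and the nesting here is just one possible way to flatten the s-expression:

```python
# A guess at a hierplane-friendly tree for the logical form above.
# Field names and node types are assumptions, not a verified schema.
hierplane_tree = {
    "text": ("((reverse fb:cell.cell.date) ((reverse fb:row.row.season) "
             "(fb:row.row.position fb:cell.1st)))"),
    "root": {
        "word": "reverse fb:cell.cell.date",
        "nodeType": "function application",
        "children": [
            {
                "word": "reverse fb:row.row.season",
                "nodeType": "function application",
                "children": [
                    {
                        "word": "fb:row.row.position fb:cell.1st",
                        "nodeType": "function application",
                        "children": [],
                    }
                ],
            }
        ],
    },
}
```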
Honestly, I'm a lot more concerned about the action outputs and the attention visualizations than I am about the logical form. I'm not really sure there's anything that can be done to make the logical form better. It's basically just a block of code in some language. Maybe you could add some syntax highlighting for the language it's in? This one is a lisp (and two others that are coming are also lisps); we have some (and eventually more) that are SQL.
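(If syntax highlighting is worth prototyping, one quick way to experiment on the backend is Pygments' Lisp lexer. This is just a sketch of an option, not how the demo renders logical forms today; a SQL lexer would cover the SQL models the same way.)

```python
# A quick backend experiment: highlight a lisp-style logical form as HTML
# with Pygments. Only a sketch of one option, not what the demo does.
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import CommonLispLexer

lf = ("((reverse fb:cell.cell.date) ((reverse fb:row.row.season) "
      "(fb:row.row.position fb:cell.1st)))")
html = highlight(lf, CommonLispLexer(), HtmlFormatter())
print(html)  # <div class="highlight"><pre>...</pre></div>
```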
@matt-gardner I think syntax highlighting, formatting, and maybe mouseovers would go a long way. When I look at the overall output I fixate on how I simply do not understand what's there at all.
What do you mean by "action output"?
Aaron has ideas for how to improve the attention visualization already.
I meant these parts:
If you want to do code formatting on this, I'd be sure to do it in a Common Lisp style (e.g., http://lisp-lang.org/style-guide/#indentation).
Re: lisp style--of course.
What tooltips would you have over "reverse" and "fb"?
@matt-gardner what's an overall good reference for the output language--http://ai2-website.s3.amazonaws.com/publications/wikitables.pdf?
The language is going to be domain-specific and not worth the work to specialize for every demo individually. I'd probably instead have a caveat in the task description that the logical form takes some outside reading to understand.
@matt-gardner @schmmd please take a look at this and give me your thoughts.
Click the thumbnail to watch an interaction mockup video I created that hopefully demonstrates how I think an improved attention visualization would work:
Caveat: I haven't scoped out the time and effort for actually building this yet. It would be a substantial amount of work, but I think it could be chunked out into smaller pieces, and rolled out according to priority.
The main feedback I'm looking for now is: is this what we actually want? If so, this could be our aspirational design that perhaps we make incremental progress towards and course correct along the way if need be...
Yes, I think that looks great!
Thanks @matt-gardner! You mentioned via slack that there probably isn't much value in surfacing the normalization type to the user in the data grid tooltips, so I won't worry about that detail for now.
Let me know if you guys have any other critical feedback about the design. In the meantime, I'll be working on scoping this out and identifying logical chunks we could roll out in a series of successive PRs.
Closing, as this issue is now being tracked in the following issues:
We have a visualization of attention, which has helped researchers significantly in debugging their models and understanding how they are working. The current demo is reasonable but a bit rough, particularly with the doubly-nested folding menus that animate open and closed, and with longer words.
Aspirational design
Issues:
Features:
Design Update (10/1/18): see https://github.com/allenai/allennlp-demo/issues/44#issuecomment-426135733