google-research / tapas

End-to-end neural table-text understanding models.
Apache License 2.0

Showing cell selection / snapshot of reference answer #101

Open ajay01994 opened 3 years ago

ajay01994 commented 3 years ago

Hi Team ,

Thanks for releasing TAPAS; I really appreciate your efforts towards this new approach to semantic parsing with NLP. I ran your Colab demo and it showed quite decent results on both sample and custom data. Currently, the Colab demo mentioned in this repo gives direct answers to the query; however, I would like to show the cell selection / a snapshot of the reference cells from which the answer is predicted, as a 3x3 or 4x4 matrix. Could you please help me with how to do this, or shed some light on the matter? Looking forward to your response.

Regards

Ajay sahu

ajay01994 commented 3 years ago

Hi, any update on this?

SyrineKrichene commented 2 years ago

Hi Ajay sahu,

You can look at the colab: https://colab.sandbox.google.com/github/google-research/tapas/blob/master/notebooks/wtq_predictions.ipynb#scrollTo=9RlvgDAmCNtP. The final function reads the output predictions and extracts the answer coordinates:

```python
for row in reader:
    coordinates = sorted(prediction_utils.parse_coordinates(row["answer_coordinates"]))
    all_coordinates.append(coordinates)
    answers = ', '.join([table[row + 1][col] for row, col in coordinates])
```

Instead of outputting only the answers, you can highlight the selected cells or render them in a different color. Note that in this colab we already select the full cell, not just some tokens within each cell.
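As a minimal sketch of the idea above (not part of the TAPAS repo): given the `table` and the `coordinates` produced in the colab's prediction loop, one could mark the predicted cells before printing instead of only joining the answers. The `highlight_cells` helper and the sample `table` below are hypothetical illustrations; coordinates index the table body, so row 0 maps to `table[1]`, matching the colab's `table[row + 1][col]` lookup.

```python
def highlight_cells(table, coordinates, marker="**"):
    """Return a copy of `table` with the predicted cells wrapped in `marker`.

    `table` includes a header row; `coordinates` are (row, col) pairs into
    the body, so body row 0 corresponds to table[1].
    """
    highlighted = [list(row) for row in table]
    for row, col in coordinates:
        highlighted[row + 1][col] = f"{marker}{highlighted[row + 1][col]}{marker}"
    return highlighted


# Hypothetical example table and predicted coordinates.
table = [
    ["Name", "Age"],
    ["Ada", "36"],
    ["Grace", "45"],
]
coordinates = [(1, 1)]  # predicted cell: body row 1, column 1 ("45")

# Print the table with the predicted cell marked.
for row in highlight_cells(table, coordinates):
    print(" | ".join(row))
```

In a notebook one could instead feed the coordinates to a pandas `Styler` to color the cells, but the same row-offset logic applies.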

Thanks, Syrine