Table Transformer (TATR) is a deep learning model for extracting tables from unstructured documents (PDFs and images). This is also the official repository for the PubTables-1M dataset and GriTS evaluation metric.
Is there any way to get the coordinates as a polygon shape? #96
Hi, thank you for the recent update.
I believe this model follows DETR's output format, i.e. bounding boxes as (x, y, w, h).
Is there any way to get a polygon-shaped bbox instead? (before: x, y, w, h -> after: x1, y1, x2, y2, x3, y3, x4, y4).
My dataset contains tables that are not only rotated but also distorted, e.g. convex shapes.
I could also apply a segmentation task, but segmentation accuracy is very low on my data, so I can't extract the contour from the segmentation mask.
If you have any ideas, please help me :)
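For reference, here is a minimal sketch of how a DETR-style box can be expanded into the four-corner (x1, y1, ..., x4, y4) format asked about above. It assumes DETR's normalized center-format output (cx, cy, w, h); note that this only yields an axis-aligned polygon and cannot recover rotation or convex distortion, which the question is ultimately after.

```python
# Sketch: expand a DETR-style normalized (cx, cy, w, h) box into
# four corner points (x1, y1, x2, y2, x3, y3, x4, y4), clockwise
# from the top-left corner. The result is axis-aligned only.

def box_to_polygon(cx, cy, w, h, img_w=1.0, img_h=1.0):
    """Convert a center-format box to 4 corners, scaled to image size."""
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return (x_min, y_min,   # top-left
            x_max, y_min,   # top-right
            x_max, y_max,   # bottom-right
            x_min, y_max)   # bottom-left

# Example: a box centered in a 100x100 image, 20 wide and 10 tall
print(box_to_polygon(0.5, 0.5, 0.2, 0.1, img_w=100, img_h=100))
```

For genuinely rotated or distorted tables, a post-processing step such as fitting a rotated rectangle or quadrilateral to a mask (e.g. OpenCV's `cv2.minAreaRect`) would be needed, since the (x, y, w, h) head alone carries no orientation information.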