google-research-datasets / Objectron

Objectron is a dataset of short, object-centric video clips. The videos also contain AR session metadata, including camera poses, sparse point clouds, and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box, which describes the object's position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes.

Missing trained models and evaluation results promised in paper #23

Open plstcharles opened 3 years ago

plstcharles commented 3 years ago

Hello,

In the recently published arXiv paper describing the dataset, there was a mention that the trained models (and their evaluation reports) would be posted on objectron.dev, which redirects here.

[Screenshot of the paper excerpt stating that the trained models and evaluation results will be posted on objectron.dev]

Are these coming soon?

Thanks!

ahmadyan commented 3 years ago

Some of the models (shoe, chair, camera, and cup) can be downloaded from the MediaPipe website. The full set of models (along with Python solutions) will be released later this month.
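For reference, once the Python solution is available, usage should look roughly like the sketch below (based on the MediaPipe solutions API; the exact `model_name` strings and default thresholds here are assumptions):

```python
import cv2
import mediapipe as mp

# Minimal sketch: run the MediaPipe Objectron solution on a single image.
# model_name selects one of the released categories (assumed value here).
mp_objectron = mp.solutions.objectron

with mp_objectron.Objectron(static_image_mode=True,
                            max_num_objects=5,
                            min_detection_confidence=0.5,
                            model_name='Shoe') as objectron:
    image = cv2.imread('shoe.jpg')
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = objectron.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.detected_objects:
        for detected in results.detected_objects:
            # Each detection carries 2D/3D box landmarks plus pose parameters.
            print(detected.landmarks_2d)
```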

plstcharles commented 3 years ago

Thanks for the info, looking forward to it!

nv-nguyen commented 3 years ago

Hello, are these models coming soon? I'm also looking forward to it!

ahmadyan commented 3 years ago

The models (both EfficientNet and MobilePose) have been uploaded to gs://objectron/models. You can list and download them with gsutil, e.g. `gsutil ls gs://objectron/models`.
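If gsutil isn't handy, the public bucket can also be browsed from Python with the google-cloud-storage client; a minimal sketch, assuming the bucket allows anonymous reads:

```python
from google.cloud import storage

# Minimal sketch: list model files in the public Objectron bucket
# without credentials (assumes anonymous read access).
client = storage.Client.create_anonymous_client()
bucket = client.bucket('objectron')

for blob in bucket.list_blobs(prefix='models/'):
    print(blob.name)

# To download a single file, pick a name from the listing above;
# the object path below is illustrative only.
# bucket.blob('models/<model_file>').download_to_filename('<local_file>')
```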

andreazuna89 commented 2 years ago

Hi! Thanks for the great work. I am also interested in the models trained on the other categories (e.g. bottle). Is there a way to use these models through the Python API? Are you planning to update it?

Thanks