geetu040 opened 1 month ago
If someone from the core-team is not already working on this, I would really love to contribute this model to huggingface with some help. Thanks.
cc: @pcuenca
Very cool! 🙌
I'll let someone from the transformers Vision team answer about how best to proceed.
Hi @geetu040 thanks for your interest, that might be a great addition to current depth estimation models in transformers! We would greatly appreciate your contribution!
Here is the "how to add a model" guideline, along with some models that might be useful for understanding the general patterns of the code in transformers:
The conversion script should follow the format of the mllama conversion script.
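For context, conversion scripts of this kind typically load the original checkpoint and rename its state-dict keys into the layout the transformers implementation expects. A minimal sketch of that renaming pattern, with purely hypothetical key names (not the real Depth Pro or mllama keys):

```python
import re

# Hypothetical rename rules mapping original checkpoint key prefixes to
# transformers-style key prefixes; real scripts list one rule per module group.
ORIGINAL_TO_CONVERTED = {
    r"^encoder\.blocks\.(\d+)\.attn\.": r"model.encoder.layer.\1.attention.",
    r"^encoder\.patch_embed\.proj\.": r"model.encoder.embeddings.projection.",
}

def convert_key(old_key: str) -> str:
    """Apply each rename rule in turn to a single state-dict key."""
    for pattern, replacement in ORIGINAL_TO_CONVERTED.items():
        old_key = re.sub(pattern, replacement, old_key)
    return old_key

# Usage: rebuild the state dict under the new key names.
state_dict = {"encoder.blocks.3.attn.qkv.weight": "...tensor..."}
converted = {convert_key(k): v for k, v in state_dict.items()}
```

The regex-based table keeps the mapping declarative, so reviewers can audit each module rename on its own line.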
Feel free to open a PR and ping me if you encounter any issues. I'd be happy to help!
Thank you, I have already started working on this!
Model description
Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.
Depth Pro synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
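One practical consequence of metric depth plus estimated focal length is that pixels can be unprojected directly into 3D points in meters, with no extra camera metadata. A minimal sketch of the standard pinhole-camera unprojection (generic geometry, not code from the Depth Pro repository; the function name is made up for illustration):

```python
def unproject(u, v, depth_m, fx, fy, cx, cy):
    """Unproject pixel (u, v) with metric depth into a 3D point in meters.

    fx, fy: focal lengths in pixels (Depth Pro estimates these from the image);
    cx, cy: principal point, here assumed to be the image center.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example on a 1536x1536 output (the paper's 2.25-megapixel depth map):
# a pixel 100 px right of center, at 2 m metric depth, focal length 1000 px.
x, y, z = unproject(868.0, 768.0, 2.0, 1000.0, 1000.0, 768.0, 768.0)
# x = (868 - 768) * 2.0 / 1000 = 0.2 m right of the optical axis
```

Without absolute scale or a focal length, this reconstruction would only be defined up to an unknown scale factor, which is why the metric and intrinsics-free properties matter together.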
Open source status
Provide useful links for the implementation