-
Hi @txie-93, I'm enjoying digging into the manuscript, and congratulations on its acceptance to ICLR! It is really nice to see the comparison with FTCP and other methods, and CDVAE certainly has some …
-
Hello package developers, I'd like to request the [rotated MNIST dataset](https://github.com/ChaitanyaBaweja/RotNIST). It's a canonical benchmark for testing computer vision algorithms w.r.t. rotated imag…
-
Thanks for sharing this great work and dataset.
I am currently looking into the HumanReconstruction benchmark.
https://github.com/eth-ait/4d-dress/blob/baf3e8f0857f7b22996512ba82a55c9530f268ce/datas…
-
Add a toy dataset on which we can benchmark methods. Ideally, a working detector should have (almost) perfect prediction accuracy on it.
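  A minimal sketch of what such a toy dataset could look like, assuming a binary classification setting (the issue does not specify the task or API, so every name below is hypothetical): two well-separated Gaussian clusters, on which even a trivial nearest-centroid detector should score (almost) perfectly.

  ```python
  import random

  def make_toy_dataset(n_per_class=100, separation=10.0, seed=0):
      """Generate two well-separated 2-D Gaussian clusters.

      Any working detector should be (almost) perfect here, so the
      dataset serves as a sanity check rather than a real benchmark.
      """
      rng = random.Random(seed)
      X, y = [], []
      for label, (cx, cy) in enumerate([(0.0, 0.0), (separation, separation)]):
          for _ in range(n_per_class):
              X.append((rng.gauss(cx, 1.0), rng.gauss(cy, 1.0)))
              y.append(label)
      return X, y

  def nearest_centroid_predict(X_train, y_train, X_test):
      """Trivial detector: assign each point to the nearest class centroid."""
      centroids = {}
      for label in set(y_train):
          pts = [x for x, yl in zip(X_train, y_train) if yl == label]
          centroids[label] = (sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts))
      preds = []
      for px, py in X_test:
          preds.append(min(
              centroids,
              key=lambda l: (px - centroids[l][0]) ** 2 + (py - centroids[l][1]) ** 2,
          ))
      return preds

  if __name__ == "__main__":
      X, y = make_toy_dataset()
      preds = nearest_centroid_predict(X, y, X)
      accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
      print(f"accuracy: {accuracy:.3f}")
  ```

  With the clusters ten standard deviations apart, the baseline reaches perfect accuracy, which is exactly the property a sanity-check dataset should have.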
-
Hello, thank you for sharing this incredible work. I am new to depth completion, and I would like to utilize the depth completion results for KITTI 3D object detection images. I'm currently facing som…
-
Hello, I have some questions regarding using the pre-trained models for image quality assessment from your Github repository. Specifically, I have two questions:
How can I use the pre-trained model…
-
According to https://github.com/mlcommons/inference/blob/master/Submission_Guidelines.md#expected-time-to-do-benchmark-runs
There is no constraint on the model used, except that the model must…
-
> Hi Sid, thank you for pointing us to these sources!
>
> I have summarized your points, could you confirm if I understood them correctly?
>
> - What is less important: **Planning (can …
-
Hey Felix,
There are a few papers that were published without an arXiv entry or DOI. Here is one example:
{
"id": "Argoverse 2",
"href": "https://www.argoverse.org/av2.html",
"relate…
-
### Description of feature
@jpfeuffer and @ypriverol discussed that it would be great to have two full datasets, LFQ and TMT, for the AWS tests. We need to find from the benchmark datasets we have used …