Closed DragonBoyL closed 1 year ago
Hi @DragonBoyL,
Thanks for your interest in our work.
You're aiming to use our models for estimating the depth of your own images, right? Just to clarify, our models, as mentioned in issue #2, were trained on a single specific dataset and might not be ideal for out-of-distribution data.
The images you shared are quite different from the nuScenes or RobotCar datasets we used. For instance, the CCTV image is not something our models are familiar with. We worked with driving scenes taken from car-mounted cameras. Your images appear to be taken from different vantage points, which our models haven't been trained on. For optimal results, you might want to train the models using images similar to yours.
Regarding the top left artifact, it's hard to pinpoint the reason without more details. Could it be related to the way you loaded the model or processed the images? Have you tried checking with standard dataset images, like this one, to see if the issue persists? Additionally, our models were optimized for specific image resolutions (e.g., 576x320 for nuScenes), so using different sizes might lead to inconsistencies.
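Since the checkpoints expect a fixed input resolution, a common source of artifacts like this is feeding the network images at a different size. A minimal preprocessing sketch (assuming Pillow for image handling; the exact loading and normalization pipeline used by md4all may differ) would resize every input to the training resolution before inference:

```python
from PIL import Image

# Resolution the nuScenes checkpoints were trained on, as (width, height).
TARGET_SIZE = (576, 320)

def preprocess(img: Image.Image) -> Image.Image:
    """Resize an input image to the resolution the model expects.

    Note: a plain resize changes the aspect ratio; predictions on images
    from a very different camera/viewpoint may still be unreliable even
    after resizing, since the model was trained on driving scenes only.
    """
    return img.convert("RGB").resize(TARGET_SIZE)
```

If the top-left artifact disappears once inputs match the training resolution, the issue was the size mismatch rather than the model weights.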
We hope this helps clear up your doubts. If you have other questions, please let us know.
Closing for inactivity. Feel free to reopen the issue if you have follow-up questions.
Hello, can this model be used on other pictures? Why can't I get good results when using my own pictures, even when they are similar to the test pictures? In addition, we found that the final output appears in the top-left corner of the whole picture. May I ask why this is? Is it a configuration problem? md4all_question.docx