-
Thanks for your great work. The NYU dataset contains 2,000 training images. Are only the first 1,000 images used for training in your work?
-
Hi @MarkMoHR ,
Thanks for your repo. I found some works on edge detection for RGB-D that might interest you:
- Ren, X., Bo, L.: Discriminatively trained sparse code gradients for contour detection. …
-
Hello,
I am currently working on **deploying an nnUNetv2 model** and would appreciate some guidance on the best practices for doing so. Below is the inference code I am using. Could you please prov…
-
RGB images with alpha channel haven't been tested yet.
See #1.
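Until RGBA input is supported, one workaround is to drop the alpha channel before inference. A minimal pure-Python sketch (the function name `drop_alpha` and the tuple-of-pixels representation are illustrative, not part of this repo; in practice something like Pillow's `Image.convert("RGB")` would be used):

```python
def drop_alpha(pixels):
    """Strip the alpha channel from a list of (r, g, b, a) tuples.

    Note: this discards transparency outright; a real pipeline might
    instead composite the image over a background (e.g. white) first.
    """
    return [(r, g, b) for (r, g, b, a) in pixels]

rgba = [(255, 0, 0, 128), (0, 255, 0, 255)]
print(drop_alpha(rgba))  # → [(255, 0, 0), (0, 255, 0)]
```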
-
I want to train the model. My input images are in RGB format and the corresponding corrected images are in sRGB format.
I tried to train the model, but I'm only able to achieve a PSNR of 16.47.
Ho…
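For reference, PSNR for 8-bit images is just 10·log10(255² / MSE). A minimal pure-Python sketch of that formula (the function name `psnr` and the flat-list image representation are illustrative, not from this repo):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel values in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Two toy 4-pixel "images" differing by 1-2 levels per pixel:
print(round(psnr([50, 100, 150, 200], [52, 98, 149, 201]), 2))  # → 44.15
```

Note that ~16 dB corresponds to an RMSE of roughly 38 out of 255, i.e. very large per-pixel errors; errors of that magnitude are often a sign of a color-space mismatch (such as a missing or doubled gamma step) rather than a model capacity problem.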
-
*Note: according to CONTRIBUTING, there should be issue templates, but I don't see any... just FYI*
This seems to happen to me at random while reading different images. The error is as follows:
``…
-
Hi, I'm trying to do some offline meshing with my own RGB and depth images. I converted the raw RGB and depth streams from a Kinect v2 to PNG images. I saw that InfiniTAM works with pgm and ppm files, s…
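The binary Netpbm formats are simple enough to write by hand once the PNGs are decoded. A hedged sketch (not InfiniTAM code; function names are illustrative, and it assumes 8-bit RGB color plus 16-bit depth in millimetres, with 16-bit PGM samples stored big-endian per the Netpbm convention):

```python
import struct

def write_ppm(path, width, height, rgb_bytes):
    """Write 8-bit RGB pixels (row-major, 3 bytes per pixel) as binary PPM (P6)."""
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(bytes(rgb_bytes))

def write_pgm16(path, width, height, depth_mm):
    """Write 16-bit depth samples (e.g. Kinect depth in mm) as binary PGM (P5).

    Netpbm stores samples above 255 as big-endian 16-bit values.
    """
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n65535\n" % (width, height))
        f.write(struct.pack(">%dH" % len(depth_mm), *depth_mm))

# Toy 2x1 color and depth frames:
write_ppm("color.ppm", 2, 1, [255, 0, 0, 0, 255, 0])
write_pgm16("depth.pgm", 2, 1, [1200, 1350])
```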
-
### 🐛 Describe the bug
Using this code with the latest version
```
for image in tqdm(images):
    images_predictions = model.predict(image, iou=0.5, conf=0.4, class_agnostic_nms=True)
class…
-
## Describe the current behavior in detail
Checked on [wormhole-connect-mainnet](https://wormhole-connect-mainnet.netlify.app/?config=N4KABGCmB2BuBcYDkBbAhgS2tSAXJANOGAK4DOkASpACaRkYDm0iuATiZERLgPYDW…
-
Dear @yijingru
According to the [docs](https://pytorch.org/vision/stable/models.html), all the ResNet models were trained on normalized RGB images.
> All pre-trained models expect input images no…
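Concretely, the docs mean per-channel normalization with the ImageNet statistics (mean `[0.485, 0.456, 0.406]`, std `[0.229, 0.224, 0.225]`). A minimal pure-Python sketch of that step for a single pixel (the function name is illustrative; a real pipeline would use `torchvision.transforms.Normalize` on whole tensors):

```python
# Per-channel ImageNet statistics used by torchvision's pre-trained models.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channels are already scaled to [0, 1]."""
    return tuple((v - m) / s
                 for v, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))

# A mid-gray pixel:
print(normalize_pixel((0.5, 0.5, 0.5)))
```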