-
I'm trying to convert this model:
`model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)`
This is the conversion code:
```
model = torch.load('/path/to/model/tr…
```
-
How can I implement a siamese network in mmsegmentation? I want to do remote-sensing change detection, and I plan to feed the two images into the network through separate branches rather than concatenating them into a single input first. Thank you.
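mmsegmentation does not ship a siamese segmentor out of the box, so this would need a custom model. Purely as an illustration of the weight-sharing idea (the module names, layer sizes, and difference-based fusion below are my own choices, not mmsegmentation API), a two-branch network that shares one encoder might look like:

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Toy sketch: ONE shared encoder applied to both time-points,
    features fused by absolute difference, then a change head."""
    def __init__(self):
        super().__init__()
        # Shared backbone: the same module (same weights) processes both inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 2, 1)  # 2 classes: change / no-change

    def forward(self, img_t1, img_t2):
        f1 = self.encoder(img_t1)  # branch 1
        f2 = self.encoder(img_t2)  # branch 2, identical weights
        return self.head(torch.abs(f1 - f2))

net = SiameseChangeNet()
t1 = torch.randn(1, 3, 64, 64)  # image at time 1
t2 = torch.randn(1, 3, 64, 64)  # image at time 2
out = net(t1, t2)
print(out.shape)  # torch.Size([1, 2, 64, 64])
```

In mmsegmentation terms this would mean registering a custom segmentor whose `forward` accepts two images; how the branch features are combined (difference, concatenation, attention) is a design choice, not fixed by the framework.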
-
Is there a workaround for the `ResourceExhaustedError`?
That's what happens when I run `main.py` with a custom env:
```
Traceback (most recent call last):
  File "main.py", line 125, in
    m…
```
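For reference, `ResourceExhaustedError` usually means the GPU ran out of memory. Two common workarounds, assuming TensorFlow 1.x (which the traceback style suggests, though that is an assumption), are lowering the batch size and letting TF allocate GPU memory on demand instead of grabbing the whole card up front:

```python
import tensorflow as tf

# Let TensorFlow grow its GPU memory footprint on demand (TF 1.x API)
# instead of pre-allocating all available memory at session creation.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)
```

If the error persists even with small batches, the model itself may simply not fit on the available GPU.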
-
The output of the network is a `BoxList` for each image. Using `BoxList.bbox` I can get the output boxes, but I can't get the class of each box. Can you help me?
```
print(predictions[0].bbox)
tensor([[110.…
```
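If this is maskrcnn-benchmark, the `BoxList` stores per-box annotations in named fields alongside `.bbox`, and labels/scores are read with `get_field`. A minimal stand-in class to illustrate the pattern (the real implementation lives in `maskrcnn_benchmark.structures.bounding_box`; the values below are made up):

```python
class BoxList:
    """Toy stand-in: box coordinates plus named per-box fields,
    mirroring the field mechanism in maskrcnn-benchmark."""
    def __init__(self, bbox):
        self.bbox = bbox      # N x 4 box coordinates
        self._fields = {}     # named per-box data, e.g. "labels", "scores"

    def add_field(self, name, data):
        self._fields[name] = data

    def get_field(self, name):
        return self._fields[name]

pred = BoxList([[110.0, 64.0, 305.0, 240.0]])
pred.add_field("labels", [17])    # class index per box (hypothetical value)
pred.add_field("scores", [0.98])  # confidence per box (hypothetical value)

print(pred.get_field("labels"))   # [17]
print(pred.get_field("scores"))   # [0.98]
```

With a real prediction, `predictions[0].get_field("labels")` would return one class index per row of `predictions[0].bbox`.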
-
According to the paper, the cropped faces from each subject's video are taken as input to the video network. My question is about the preprocessing pipeline: which method or tool do you use to crop the faces? Ho…
-
In the second stage of the FGT network training, I found that batch_size was only set to 1 and only 5 frames per video were selected for training. Thus, the size of the input tensor is (b, t, c, h, w)…
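For concreteness, with `batch_size = 1` and 5 sampled frames the input described above stacks into a 5-D array. A numpy sketch of the shape bookkeeping (the 240×432 frame size is my own placeholder, not FGT's actual resolution):

```python
import numpy as np

b, t, c, h, w = 1, 5, 3, 240, 432  # h/w are illustrative only

# 5 frames sampled from one video, each (c, h, w)
frames = [np.zeros((c, h, w), dtype=np.float32) for _ in range(t)]

video = np.stack(frames)  # stack along time -> (t, c, h, w)
batch = video[None]       # add batch dim   -> (b, t, c, h, w)
print(batch.shape)        # (1, 5, 3, 240, 432)
```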
-
Dear scTenifoldNet dev,
Thanks very much for the interesting method. This is the first time I've learned about tensor low-rank approximation,
and I would like to try it on my own work.
As I am ne…
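Since the question touches on tensor low-rank approximation: the matrix analogue, a truncated SVD, shows the core idea of keeping only the largest singular components. scTenifoldNet's tensor decomposition generalizes this to higher-order arrays; the sketch below is generic numpy, not the package's API:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a matrix that is exactly rank 2, then add a little noise.
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
A_noisy = A + 0.01 * rng.standard_normal(A.shape)

# Truncated SVD: keep only the top-k singular triplets.
U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
k = 2
A_hat = U[:, :k] * s[:k] @ Vt[:k]  # rank-k approximation of A_noisy

# Relative error is small because almost all signal sits in the top 2 components.
err = np.linalg.norm(A_noisy - A_hat) / np.linalg.norm(A_noisy)
print(err < 0.1)  # True
```

The denoising intuition is the same in the tensor case: the low-rank reconstruction keeps the dominant structure and discards component directions dominated by noise.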
-
I would love to be able to incorporate your denoiser into a Deepstream/Gstreamer pipeline. In order to do this, I'd need to know how to get from raw audio data -> pre-processed network input tensor(s…
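The denoiser's exact preprocessing is precisely what is being asked here, so this is only a generic sketch of the raw-bytes-to-tensor step any such pipeline needs: interpret 16-bit PCM pulled from a GStreamer buffer as int16, scale to float32 in [-1, 1], and frame it for the network (the frame length and function name are placeholders, not the denoiser's real parameters):

```python
import numpy as np

def pcm16_to_frames(raw: bytes, frame_len: int = 480) -> np.ndarray:
    """Convert raw little-endian 16-bit PCM bytes into normalized
    float32 frames of shape (n_frames, frame_len)."""
    # int16 samples -> float32 in [-1, 1]
    samples = np.frombuffer(raw, dtype="<i2").astype(np.float32) / 32768.0
    n = len(samples) // frame_len          # drop any trailing partial frame
    return samples[: n * frame_len].reshape(n, frame_len)

# 1 second of silence at 48 kHz, 16-bit mono
raw = bytes(2 * 48000)
frames = pcm16_to_frames(raw)
print(frames.shape)  # (100, 480)
```

The model-specific parts (sample rate, frame/hop size, any STFT or normalization) would have to come from the denoiser's own preprocessing code.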
-
I'm using Python 3.5 and TF 1.2 and I get the following error:
```
Tensor("Placeholder_2:0", shape=(?, 5), dtype=float32)
Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
(, , , , )
[, ]
…
```
-
Hi,
As described in the detectron2 deployment documentation, the detectron2 TorchScript scripting model supports a dynamic batch_size. I am currently working on modifying the official example "[torchscri…
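Outside of detectron2, the dynamic-batch behavior of scripted models is easy to see on a toy module: `torch.jit.script` compiles code rather than recording fixed tensor shapes, so one artifact serves any batch size (the module below is my own example, not the detectron2 one):

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)

# Scripting is shape-agnostic, unlike tracing with a fixed example input.
scripted = torch.jit.script(Toy())
print(scripted(torch.randn(1, 8)).shape)   # torch.Size([1, 4])
print(scripted(torch.randn(16, 8)).shape)  # torch.Size([16, 4])
```

The same scripted object could be saved with `scripted.save(...)` and reloaded for inference at whatever batch size the pipeline produces.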