-
# Reference
- [ ] [paper - 2018 - Guided Upsampling Network for Real-Time Semantic Segmentation](https://arxiv.org/pdf/1807.07466v1)
-
Would love to get a solid TAA (temporal anti-aliasing) implementation as part of core heaps.io.
Ideally integrated with ML upsampling, but that's a bonus.
-
## 🚀 Feature
This is a request to add pixel shuffle support to PyTorch Mobile's NNAPI backend, and to prioritize its implementation because of its importance for many computationally constrained mobile mo…
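For reference, a minimal sketch of what the op does in eager-mode PyTorch (`torch.nn.PixelShuffle`); the request above is about exposing this same operator through the NNAPI backend, and the shapes here are only illustrative:

```python
import torch
import torch.nn as nn

# Pixel shuffle rearranges a (N, C*r^2, H, W) tensor into (N, C, H*r, W*r),
# the core op of sub-pixel (ESPCN-style) upsampling.
upscale = 2
ps = nn.PixelShuffle(upscale)

x = torch.randn(1, 3 * upscale ** 2, 32, 32)  # (N, C*r^2, H, W)
y = ps(x)                                     # (N, C, H*r, W*r)
print(y.shape)                                # torch.Size([1, 3, 64, 64])
```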
-
For 3D ConvNets, the tensor shape is 5D (batch_size, channel, depth, height, width).
If we want to do something such as 3D segmentation, bilinear upsampling of a 5D tensor is needed.
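As a side note, PyTorch exposes the 3D analogue of bilinear interpolation for 5D tensors as `mode='trilinear'` in `F.interpolate`; a minimal sketch with placeholder shapes:

```python
import torch
import torch.nn.functional as F

# 5D input: (batch_size, channels, depth, height, width)
vol = torch.randn(2, 4, 16, 64, 64)

# For 5D tensors PyTorch uses 'trilinear' (the 3D analogue of bilinear);
# here the volume is upsampled 2x along depth, height and width.
up = F.interpolate(vol, scale_factor=2, mode='trilinear', align_corners=False)
print(up.shape)  # torch.Size([2, 4, 32, 128, 128])
```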
-
Hi, in your HRNet, the output segmentation map is predicted at 1/4 of the raw image size, and bilinear upsampling is then used to generate the final segmentation map. I am wondering why not genera…
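A minimal sketch of the upsampling step described above, assuming the logits come out at 1/4 resolution and are resized back to the raw image size before the per-pixel argmax; the class count and image size below are placeholders:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: logits come out at 1/4 of the input resolution.
num_classes, h, w = 19, 512, 1024
logits = torch.randn(1, num_classes, h // 4, w // 4)

# Bilinear upsampling back to the raw image size, then per-pixel argmax.
full_res = F.interpolate(logits, size=(h, w), mode='bilinear', align_corners=False)
seg_map = full_res.argmax(dim=1)
print(seg_map.shape)  # torch.Size([1, 512, 1024])
```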
-
**With Clothes**
1. Learning to Reconstruct People in Clothing from a Single RGB Camera (2019)
code: https://github.com/thmoa (no training code) (same link for 1, 2, 3)
2. Multi-Garment Net: Learning to…
-
I'm a novice at coding, so I've done everything, but... I get this.
What does this mean?
Background upsampling: False, Face upsampling: False
[1/1] Processing: 8.jpg
[ WARN:0@9.075] global lo…
-
In the LearnTransform.Translation class of the newest starfish version, there's a segment of code where I think the parameters for 'reference_image' and 'moving_image' might have been swapped:
shif…
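A toy illustration of why the argument order matters, assuming the underlying call is skimage's `phase_cross_correlation` (which expects the reference image first, then the moving image). This is an assumption, since the quoted segment is cut off, and it is not the actual starfish code:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Toy images with a known integer offset between them.
yy, xx = np.mgrid[0:64, 0:64]
reference_image = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
moving_image = np.roll(reference_image, shift=(3, -2), axis=(0, 1))

# skimage's convention is phase_cross_correlation(reference_image, moving_image);
# swapping the two arguments flips the sign of the recovered shift.
shift_ab = phase_cross_correlation(reference_image, moving_image)[0]
shift_ba = phase_cross_correlation(moving_image, reference_image)[0]
print(shift_ab, shift_ba)  # the two results differ only in sign
```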
-
The current implementation of DS1 uses a hardcoded TAMANHO_DO_BUFFER of size 2048 and a total of 8x upsampling (2x in the first stage and 4x in the second).
This causes a segfault on sample_coun…
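A quick sanity check of the numbers above, assuming `TAMANHO_DO_BUFFER` has to hold the fully upsampled output of a block (an assumption, since the snippet is cut off):

```python
# Rough check of the buffer math described above.
TAMANHO_DO_BUFFER = 2048
total_upsampling = 2 * 4  # 2x first stage, 4x second stage -> 8x overall

max_input_samples = TAMANHO_DO_BUFFER // total_upsampling
print(max_input_samples)  # 256: larger input blocks would overflow the buffer
```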
-
Hi,
Can you please tell me why we need to upsample to 224 in the preprocessing stage?
torch_img = F.upsample(torch_img, size=(224, 224), mode='bilinear', align_corners=False)
I used my own mode…
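As a side note, `F.upsample` is a deprecated alias for `F.interpolate`; the sketch below shows the equivalent modern call, and 224×224 is presumably used because it is the standard input size for ImageNet-pretrained backbones. The input tensor here is a placeholder:

```python
import torch
import torch.nn.functional as F

torch_img = torch.randn(1, 3, 300, 400)  # placeholder image tensor

# F.upsample is deprecated; F.interpolate with the same arguments produces
# the same bilinear resize to the 224x224 size expected by most
# ImageNet-pretrained backbones.
torch_img = F.interpolate(torch_img, size=(224, 224), mode='bilinear',
                          align_corners=False)
print(torch_img.shape)  # torch.Size([1, 3, 224, 224])
```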