-
The reference model is from: https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py
-
### 🚀 The feature
The original paper describes a few more configurations of the Swin Transformer.
1. Swin Large (Swin-L): simply a larger Swin Transformer variant; it needs a few config tweaks and we can po…
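For reference, the four published Swin variants differ only in embedding width, depths, and head counts; Swin-L keeps Swin-B's depths but doubles its width and heads. A sketch of those values as plain dicts (the numbers are from the paper; how they map onto torchvision's builder arguments is an assumption to verify against `swin_transformer.py`):

```python
# Swin Transformer variants (Liu et al., 2021, Table 1).
# Swin-L doubles Swin-B's embed_dim and num_heads at the same depths.
SWIN_CONFIGS = {
    "swin_t": dict(embed_dim=96,  depths=[2, 2, 6, 2],  num_heads=[3, 6, 12, 24]),
    "swin_s": dict(embed_dim=96,  depths=[2, 2, 18, 2], num_heads=[3, 6, 12, 24]),
    "swin_b": dict(embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32]),
    "swin_l": dict(embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48]),
}

# Sanity check: every stage uses 32-dim attention heads, so the head count
# at stage i is (embed_dim * 2**i) / 32 for all variants.
for cfg in SWIN_CONFIGS.values():
    assert all(cfg["embed_dim"] * 2**i == 32 * h
               for i, h in enumerate(cfg["num_heads"]))
```

So supporting Swin-L should mostly be a matter of passing `embed_dim=192` and the doubled head counts through the existing torchvision builder.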
-
Hi, all.
I know this isn't your obligation, but I just wanted to post and see whether any of you have tried something similar before.
I'm trying to use the nnUNet framework with Swin-Unet, which is transfor…
-
### 🐛 Describe the bug
```python
def forward(self, x, H, W):
    """Forward function.

    Args:
        x: Input feature, tensor size (B, H*W, C).
        H, W: Spatial res…
```
-
How can I use a Swin Transformer to extract features? The face SDK does not support Swin Transformer.
-
Hi,
Thanks a lot for your amazing work.
I have a question about the implementation of P3AFormer on KITTI. As shown in your paper, Swin-B is used as the backbone. How did you handle the resoluti…
-
My scripts:
```python
import torch
from mmpretrain import get_model
import torch_pruning as tp
import torch.nn as nn
from typing import Sequence
from mmpretrain.models.utils import PatchMerging, Wind…
```
-
Hi. Can I use a Swin Transformer as the backbone instead of ResNet-50? If so, what changes should be made to the Swin Transformer (pretrained on ImageNet-22K)?
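The answer depends on the detection framework, but in mmdetection-style configs the swap is mostly declarative: replace the ResNet-50 backbone block with a Swin one and point the neck at Swin's per-stage channel widths. A sketch with assumed key names (`embed_dims`, `out_indices`, `in_channels` are conventions to verify against your framework's backbone docs, and the checkpoint filename is only an example):

```python
# Hypothetical mmdetection-style config sketch: swap ResNet-50 for Swin-T.
backbone = dict(
    type='SwinTransformer',
    embed_dims=96,                  # Swin-T width
    depths=[2, 2, 6, 2],
    num_heads=[3, 6, 12, 24],
    out_indices=(0, 1, 2, 3),       # expose all four stages to the neck
    # Example checkpoint name; substitute your actual ImageNet-22K weights.
    init_cfg=dict(type='Pretrained',
                  checkpoint='swin_tiny_patch4_window7_224_22k.pth'),
)

# The neck must match Swin's per-stage channels (96, 192, 384, 768)
# instead of ResNet-50's (256, 512, 1024, 2048).
neck = dict(type='FPN', in_channels=[96, 192, 384, 768], out_channels=256)
```

The main changes beyond the config are therefore the neck input channels and loading the 22K-pretrained weights; the detection head itself usually stays untouched.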
-
Thanks for the great work.
I want to fine-tune the Swin Transformer at a different resolution, such as 512 × 512.
If I only change ```IMG_SIZE``` in the config from 224 to 512, I get an error:
```RuntimeEr…```
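A size mismatch like this is expected: several Swin buffers are resolution-dependent, so changing only the input size leaves shapes inconsistent. In particular, the per-stage feature-map sizes must tile into the window size (7 by default), and at 512 they do not. A dependency-free sketch of the arithmetic (assuming patch size 4 and four stages, as in the official configs):

```python
def stage_resolutions(img_size, patch_size=4, num_stages=4):
    """Side length of the feature map at each Swin stage.

    Patch embedding divides the input by patch_size, and each
    subsequent patch-merging layer halves the resolution again.
    """
    side = img_size // patch_size
    sides = []
    for _ in range(num_stages):
        sides.append(side)
        side //= 2
    return sides

print(stage_resolutions(224))  # [56, 28, 14, 7]   -> every stage tiles into 7x7 windows
print(stage_resolutions(512))  # [128, 64, 32, 16] -> no stage is divisible by 7
```

At 224 every stage partitions exactly into 7×7 windows, so the cached attention masks match; at 512 none does. The usual fixes are padding feature maps to a multiple of the window size, or picking an input size whose stage resolutions divide evenly (e.g. 448 gives 112, 56, 28, 14).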
-
### Feature request
I'm trying to export a pretrained OneFormer to ONNX. I know that Optimum does not yet officially support exporting OneFormer to ONNX, which is why I wrote my own export script. H…