-
```python
import torch
import torch.fx
import torchvision.models as models

rn18 = models.resnet18()
# comment this out to run in train mode
rn18.eval()
# dummy input: a batch of 5 RGB images at 224x224
inp = torch.randn(5, 3, 224, 224)
bn_input = r…
-
Hi,
First of all, thanks for sharing your work; the results and the idea are very good, and I am trying to use your work on my own dataset. In contrast to your paper, my dataset has only 2 classes (0 an…
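To make the 2-class setup concrete, here is a generic PyTorch sketch of a 2-class head (the resnet18 / `fc` names are placeholders, not taken from this project):
```python
import torch.nn as nn
import torchvision.models as models

# hypothetical example: swap the final layer of a backbone for a 2-class head
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()  # integer labels 0/1
```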
-
I'm trying to explore the usage of AutoAlbument for a semantic segmentation task with the default generated search.yaml.
The custom dataset has around 29000 RGB images and corresponding masks (height x wi…
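For reference, a minimal sketch of a segmentation dataset along the lines of the AutoAlbument `SearchDataset` template (the class name, paths, and loading logic here are assumptions; the exact mask format AutoAlbument expects should be checked against its docs):
```python
import cv2
import torch.utils.data


class SearchDataset(torch.utils.data.Dataset):
    def __init__(self, image_paths, mask_paths, transform=None):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, index):
        # load an RGB image and its single-channel mask
        image = cv2.cvtColor(cv2.imread(self.image_paths[index]), cv2.COLOR_BGR2RGB)
        mask = cv2.imread(self.mask_paths[index], cv2.IMREAD_GRAYSCALE)
        if self.transform is not None:
            # Albumentations-style call: the transform receives image and mask together
            transformed = self.transform(image=image, mask=mask)
            image, mask = transformed["image"], transformed["mask"]
        return image, mask
```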
-
Something like...
```python
@chika.config
class Config:
    optim: SGDConfig | AdamConfig
```
```commandline
python main.py --optim sgd
# lr=1e-1
python main.py --optim adam
# lr=1e-3
```
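For context, the two sub-configs I have in mind would look roughly like this, with the defaults matching the comments above (hypothetical names, assuming `@chika.config` behaves like a plain dataclass):
```python
import chika


@chika.config
class SGDConfig:
    lr: float = 1e-1


@chika.config
class AdamConfig:
    lr: float = 1e-3
```
The idea is that `--optim sgd` would populate `Config.optim` with `SGDConfig` and its defaults, while `--optim adam` would use `AdamConfig`.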
-
I changed `phi` in the code to the value that corresponds to 101, and I also put the resnet101 model into model_data. What could be the reason for this, and is there anything else I need to change?
-
Thank you for the great job. I want to train on a single GPU. Can I do that?
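In many PyTorch repos a single-GPU run only needs the visible devices masked, assuming the training script (hypothetically `train.py` here) also supports a non-distributed launch:
```commandline
CUDA_VISIBLE_DEVICES=0 python train.py
```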
-
Excellent work!
I have noticed that you adopted FrozenBatchNorm2d in your code. You mention in the code that this method is adopted to prevent models other than resnets from producing NaNs. But you app…
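For reference, this is roughly what `FrozenBatchNorm2d` does; a minimal sketch based on the common implementation (e.g., the torchvision one), which may differ in details from the code in this repo:
```python
import torch
import torch.nn as nn


class FrozenBatchNorm2d(nn.Module):
    """BatchNorm2d whose statistics and affine parameters never update."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        # buffers, not parameters: nothing here receives gradients or running updates
        self.register_buffer("weight", torch.ones(num_features))
        self.register_buffer("bias", torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        # y = (x - mean) / sqrt(var + eps) * weight + bias, folded into scale/shift
        scale = self.weight * (self.running_var + self.eps).rsqrt()
        shift = self.bias - self.running_mean * scale
        return x * scale.reshape(1, -1, 1, 1) + shift.reshape(1, -1, 1, 1)
```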
-
Is there an easy way to select the sizes of the filters in the Unet-resnet?
It looks like it starts at 64 and then goes down through the U, reaching 512 at the lowest level. Can I change this?
It is ok if the d…
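If the U-Net here wraps segmentation_models_pytorch (an assumption on my part), the decoder filter sizes are a constructor argument, while the 64 to 512 progression on the way down comes from the ResNet encoder itself:
```python
import segmentation_models_pytorch as smp

# decoder_channels sets the filter counts on the upsampling path, deepest block first
model = smp.Unet(
    encoder_name="resnet34",
    decoder_channels=(256, 128, 64, 32, 16),
    classes=1,
)
```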
-
Your paper is pretty good!!
1. In your paper, I note that some components are fixed and some are trainable. However, I haven't found any code in your project that represents training …
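As an illustration of the kind of code I was looking for, freezing the fixed components in PyTorch usually looks like this (the model below is a hypothetical stand-in):
```python
import torch
import torch.nn as nn

# hypothetical stand-in: a fixed "backbone" followed by a trainable "head"
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# freeze the fixed component
for p in model[0].parameters():
    p.requires_grad = False

# optimize only the parameters that remain trainable
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```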
-
Hi,
Thanks for the great work!!
The paper states that during part-1 training (i.e., the CLIP-based Contrastive Latent Representation Learning step) you consider the image, text, and audio modalities. But th…
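For concreteness, this is the generic CLIP-style symmetric contrastive loss I am referring to, applied between any two batches of paired modality embeddings (standard InfoNCE, not code from this repo):
```python
import torch
import torch.nn.functional as F


def clip_contrastive_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings from two modalities."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```
With three modalities, such a loss would typically be summed over the image-text, image-audio, and possibly text-audio pairs.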