-
Hello, could you please provide ViT-B-16's weights pretrained on the CIFAR-100 dataset? Many thanks!
-
Hi,
I am using your existing code and settings to run ssm_2d on CIFAR-10,
but the top-1 accuracy is low.
`best top-1: 57.20, final top-1: 56.30`
I'm running `python main.py --model vit --data…
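For reference, the "top-1" numbers above are the standard argmax-match metric. A minimal sketch of how it is typically computed (the names `logits` and `labels` here are illustrative, not taken from the repo's `main.py`):

```python
# Hedged sketch: standard top-1 accuracy over a batch of per-class scores.
# Assumes `logits` is a list of per-class score lists and `labels` the
# ground-truth class indices; not the repo's actual evaluation code.

def top1_accuracy(logits, labels):
    """Percentage of samples whose argmax prediction matches the label."""
    correct = sum(
        1 for row, y in zip(logits, labels)
        if max(range(len(row)), key=row.__getitem__) == y
    )
    return 100.0 * correct / len(labels)

# Toy example: 2 of 3 predictions are correct -> 66.67% top-1.
scores = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
labels = [1, 0, 0]
print(round(top1_accuracy(scores, labels), 2))  # -> 66.67
```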
-
I read your neural network diffusion paper; it was great and very inspiring to us. I would like to try to reproduce the results, and so far I have reproduced the ResNet-18 case. I see that your paper …
-
![image](https://user-images.githubusercontent.com/54800294/181519458-f727d88e-031b-4ba2-a761-f596b0667542.png)
Hello! Your work is very nice, but when I train the model using your code, the validation ac…
-
## Evaluating Smooth model
I am trying to reproduce the results mentioned in Table 6 of the paper. The ablation size is fixed to b=4. The accuracies are evaluated on adversarial patch sizes 2x2 and …
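As a sanity check for this setting, the certification arithmetic in column-ablation (derandomized) smoothing is small enough to compute by hand. A hedged sketch, assuming the standard scheme from Levine & Feizi's "(De)Randomized Smoothing" (the function names and the margin rule below are that paper's convention, not necessarily this repo's exact code):

```python
# Hedged sketch, assuming column-ablation derandomized smoothing: a
# width-b column ablation overlaps a p-wide patch for at most p + b - 1
# start positions, and a prediction is certified when the top-class vote
# margin exceeds twice that count.

def affected_ablations(patch_size: int, ablation_size: int) -> int:
    """Upper bound on how many column ablations a p-wide patch can touch."""
    return patch_size + ablation_size - 1

def is_certified(top_count: int, runner_up_count: int,
                 patch_size: int, ablation_size: int) -> bool:
    """Certified iff the vote margin beats twice the affected-ablation count."""
    delta = affected_ablations(patch_size, ablation_size)
    return top_count > runner_up_count + 2 * delta

# Example: with b=4 and a 2x2 patch, at most 2 + 4 - 1 = 5 ablations change.
print(affected_ablations(2, 4))  # -> 5
```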
-
## 🐛 Bug
We're trying to privately fine-tune a ViT B/16 model ([link](https://github.com/mlfoundations/open_clip/tree/main)) with CIFAR-10 data. The non-private version uses `MultiHeadAttention` wh…
-
/kind bug
**What steps did you take and what happened:**
I've created an InferenceService from a PyTorch model with torchserve-kfs runtime version 0.7.0 (see the .yaml file below). Then I made a re…
-
Hello, I am working on transferring MAE to CIFAR, and I find that MAE is no better than a supervised pre-trained ViT on small datasets. Could you give me some advice?
-
Hi,
First off, thank you for the interesting work!
I'm conducting experiments with CIFAR-10 and noticed references to both a "source VIDA model" and a "source model." Could you kindly explain th…
-
Thanks for your great work!
I am now working hard to reproduce good results on small datasets such as CIFAR-10, CIFAR-20, and a subset of ImageNet, and also with some small backbones, e.g. ViT-Small.
A…