NVlabs / MambaVision

Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
https://arxiv.org/abs/2407.08083

Only got one output from the model. #28

Closed vstar37 closed 4 weeks ago

vstar37 commented 1 month ago

I want to use MambaVision as the backbone for a segmentation task:

backbone = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-B-1K", trust_remote_code=True)

I thought len(depths) = 4 by default, so why do I only get one tensor back?

output = self.backbone(x)
print("length from backbone:", len(output))
x1, x2, x3, x4 = self.backbone(x)

I don't know how AutoModelForImageClassification.from_pretrained really works, but I found this in the MambaVision class and made a small tweak, since I only want the features:

def forward_features(self, x):
    x = self.patch_embed(x)
    outs = []
    for level in self.levels:
        x, xo = level(x)
        outs.append(xo)
    x = self.norm(x)
    x = self.avgpool(x)
    x = torch.flatten(x, 1)
    return x, outs

def forward(self, x):
    # x, outs = self.forward_features(x)
    # x = self.head(x)
    _, outs = self.forward_features(x)
    return outs

It still doesn't work :(

ahatamiz commented 1 month ago

Hi @vstar37

Please use AutoModel.from_pretrained for downstream tasks; AutoModelForImageClassification.from_pretrained is only for classification.

Here's the full snippet:

from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

model = AutoModel.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

# eval mode for inference
model.cuda().eval()

# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224)  # MambaVision supports any input resolution

transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size())  # torch.Size([1, 640])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])
vstar37 commented 4 weeks ago

Thanks for your help!