hardlipay opened this issue 1 year ago
I would like to change the EVA `forward` method in [timm/models/eva.py#L579-L582](https://github.com/huggingface/pytorch-image-models/blob/0bc9a63ee5e432c85c8f87d7b116a0430ea3764a/timm/models/eva.py#L579C3-L582C17) from:

```
def forward(self, x):
    x = self.forward_features(x)
    x = self.forward_head(x)
    return x
```

to something like:
```
def forward(self, x):
    x_h = self.forward_features(x)  # right?
    x = self.forward_head(x_h)
    return x, x_h
```
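For reference, with timm alone this seems straightforward, since `forward_features` and `forward_head` are the public methods that `forward` composes; a minimal sketch (the model name is just an example EVA02 variant, not necessarily the one I need):

```
import torch
import timm

# Minimal sketch: call the two stages that forward() composes,
# instead of patching eva.py. The model name is only an example.
model = timm.create_model('eva02_base_patch14_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    x_h = model.forward_features(x)  # token features, e.g. [bs, num_tokens, dim]
    out = model.forward_head(x_h)    # pooled / classified output

print(x_h.shape, out.shape)
```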
Here is the output DINOv2 returns in Hugging Face `transformers`: just use `encoder_outputs.output_hidden_states` to get the [bs, 257, 768] feature tensor.

I am now using the openclip library with the EVA02 model and I can't find a similar return value. Please let me know what code I should use, thanks!
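To show the kind of access I'm looking for, here is a sketch of the open_clip side; the model/pretrained tags and the `.visual` / `.trunk` attribute path are my assumptions and may need adjusting for the installed open_clip version and config:

```
import torch
import open_clip

# Sketch only: the model/pretrained tags and the attribute path below are
# assumptions that depend on the open_clip version and model config.
model, _, preprocess = open_clip.create_model_and_transforms(
    'EVA02-B-16', pretrained='merged2b_s8b_b131k')
model.eval()

x = torch.randn(1, 3, 224, 224)

# If the vision tower wraps a timm EVA model, the trunk may expose
# forward_features just like timm/models/eva.py does.
tower = getattr(model.visual, 'trunk', model.visual)

with torch.no_grad():
    if hasattr(tower, 'forward_features'):
        x_h = tower.forward_features(x)  # token features, e.g. [bs, num_tokens, dim]
        print(x_h.shape)
    else:
        print('No forward_features found on', type(tower))
```

Is this roughly the right direction, or is there an official way to get the per-token features back from the openclip EVA02 model?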