Closed Cestlaviez closed 2 years ago
Hey @Cestlaviez ! Yes, I know about this problem, but I do not know how to solve it. I am convinced that it is due to the following lines:
from clip_onnx import clip_onnx, attention
clip.model.ResidualAttentionBlock.attention = attention
The problem is that ONNX export fails on the multi-head attention layer. However, in most cases the class with the highest probability from the original model still matches the ONNX model's top prediction.
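For readers unfamiliar with the two lines above: they monkey-patch the `attention` method on CLIP's `ResidualAttentionBlock` class so every block uses an ONNX-exportable implementation. Below is a generic, self-contained sketch of that patching pattern; `Block` and `exportable_attention` are placeholder names, not CLIP's real API:

```python
# Minimal illustration of class-level monkey-patching: reassigning a
# method on the class makes every instance pick up the replacement.

class Block:
    def attention(self, x):
        # Stand-in for the original nn.MultiheadAttention-based method,
        # which the ONNX exporter struggles with.
        return "original"

def exportable_attention(self, x):
    # Stand-in for an ONNX-traceable rewrite of the attention computation.
    return "exportable"

# The patch: all existing and future Block instances now use the rewrite.
Block.attention = exportable_attention

print(Block().attention(None))  # -> exportable
```

The same mechanism is what `clip.model.ResidualAttentionBlock.attention = attention` does; if the replacement computes attention slightly differently from the original, every transformer layer inherits that difference, which can compound into the large output gap reported in this issue.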
If the line `clip.model.ResidualAttentionBlock.attention = attention` is commented out, it seems the ONNX file can still be exported successfully @Lednik7
@zhangnju can you give an example code with the model?
@Cestlaviez I updated the information in the README; it should help
With CLIP-ONNX version 1.2, the results are the same
Hi, thanks for providing this useful tool! However, I found that the result produced by the generated ONNX model is inconsistent with the original CLIP model. Here is the code I used to test the original model:
The result is:
Label probs: [[0.9927937 0.00421069 0.00299573]]
However, when using the ONNX model, the result is:
Label probs: [[0.41456965 0.29270944 0.29272085]]
Could you help me with this? Thanks!
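The size of the mismatch can be quantified directly from the two printed probability vectors. A near-uniform ONNX output like `[0.41, 0.29, 0.29]` against a confident original output suggests the exported attention is computing wrong values, not ordinary floating-point drift. A minimal pure-Python check (probability values copied from the outputs above):

```python
# Quantify the gap between the two reported probability vectors.
pytorch_probs = [0.9927937, 0.00421069, 0.00299573]  # original CLIP output
onnx_probs = [0.41456965, 0.29270944, 0.29272085]    # ONNX model output

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

print(max_abs_diff(pytorch_probs, onnx_probs))  # -> ~0.578
```

A difference of roughly 0.578 is orders of magnitude beyond what export-time precision loss (typically < 1e-3) would produce, which supports treating this as a correctness bug in the patched attention rather than numerical noise.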