Closed ElsebaiyMohamed closed 9 months ago
Hi @ElsebaiyMohamed, thanks for raising this issue and providing details on the error + a snippet. Could you also provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?
Hi @amyeroberts , Apologies for the delayed response! 🙏 Life threw a curveball, but I'm back on track. Thanks for your patience!
Regarding your request, here's the output of `transformers-cli env`:
```
- transformers version: 4.36.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
Let me know if there's anything else I can help you with.
@ElsebaiyMohamed Great - thanks for providing this info!
cc @sanchit-gandhi @ylacombe
@sanchit-gandhi I'm using the `WhisperForAudioClassification` task and want to set `use_weighted_layer_sum=True`, but there is a problem in the forward call: the encoder can return either a tuple or a dict (when `return_dict=True`), yet the `use_weighted_layer_sum=True` code path assumes the return value is always a tuple, so this line raises an error: `hidden_states = torch.stack(encoder_outputs, dim=1)`. If the encoder returns a dict, there is a workaround: pass `return_dict=False`. But when the model is later used with `pipeline`, that raises an error instead, because the pipeline assumes the model returns a dict, not a tuple. Link to code with the problem: Reproduce error
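To illustrate the failure mode, here is a minimal sketch of a weighted layer sum that handles both encoder output shapes. All names (`weighted_layer_sum`, `hidden_states_index`) are illustrative and are not the actual transformers API; the assumption is only that the encoder exposes its per-layer hidden states either as a tuple element (`return_dict=False`) or under a `"hidden_states"` key (`return_dict=True`):

```python
import torch

def weighted_layer_sum(encoder_outputs, layer_weights, hidden_states_index=1):
    # Hypothetical helper: extract per-layer hidden states whether the
    # encoder returned a plain tuple or a dict-like output.
    if isinstance(encoder_outputs, tuple):
        all_hidden_states = encoder_outputs[hidden_states_index]
    else:
        all_hidden_states = encoder_outputs["hidden_states"]
    # Stack per-layer tensors into (batch, num_layers, seq_len, hidden_size);
    # stacking the raw dict (as the reported line does) is what fails.
    stacked = torch.stack(all_hidden_states, dim=1)
    norm_weights = torch.nn.functional.softmax(layer_weights, dim=-1)
    return (norm_weights.view(1, -1, 1, 1) * stacked).sum(dim=1)

# Toy encoder outputs: 3 layers of (batch=2, seq=4, hidden=8) states.
layers = tuple(torch.randn(2, 4, 8) for _ in range(3))
weights = torch.zeros(3)  # uniform weighting after softmax

as_tuple = (layers[-1], layers)  # (last_hidden_state, hidden_states)
as_dict = {"last_hidden_state": layers[-1], "hidden_states": layers}

out1 = weighted_layer_sum(as_tuple, weights)
out2 = weighted_layer_sum(as_dict, weights)
assert out1.shape == (2, 4, 8)
assert torch.allclose(out1, out2)
```

The point is only that the extraction step needs a branch on the output type before `torch.stack`; the actual fix in the library may look different.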