Open yihp opened 3 weeks ago
I found the problem. The `forward()` method of the `CvtWithProjectionHead` class in modelling_single.py should add the parameter `output_attentions: Optional[bool] = None`, or the caller should not pass this parameter.
```python
from typing import Optional, Tuple, Union

import torch
import transformers
from transformers.utils import ModelOutput


class CvtWithProjectionHead(transformers.CvtPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.cvt = transformers.CvtModel(config, add_pooling_layer=False)
        self.projection_head = CvtProjectionHead(config)

        # Initialize weights and apply final processing:
        self.post_init()

    def forward(
        self,
        pixel_values: Optional[torch.Tensor] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        output_attentions: Optional[bool] = None,  # accepted so newer transformers versions can pass it
    ) -> Union[Tuple, ModelOutput]:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.cvt(
            pixel_values,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        projection = self.projection_head(
            torch.permute(torch.flatten(outputs.last_hidden_state, 2), [0, 2, 1]),
        )

        if not return_dict:
            return projection

        return ModelOutput(last_hidden_state=projection)
```
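The underlying failure mode can be reproduced without transformers at all: if a caller passes `output_attentions` to a `forward()` that does not declare it, Python raises a `TypeError`. A minimal sketch with hypothetical stand-in functions (not the actual model code):

```python
# forward_old mimics the original signature, which lacks output_attentions;
# forward_fixed mimics the patched signature, which accepts (and ignores) it.

def forward_old(pixel_values=None, output_hidden_states=None, return_dict=None):
    return pixel_values

def forward_fixed(pixel_values=None, output_hidden_states=None,
                  return_dict=None, output_attentions=None):
    return pixel_values

try:
    forward_old(pixel_values=1, output_attentions=False)
except TypeError as e:
    print("old signature:", e)  # unexpected keyword argument 'output_attentions'

print("fixed signature returns:", forward_fixed(pixel_values=1, output_attentions=False))
```

This is why the error only appears with later transformers releases: the library started passing the extra keyword, and any wrapper `forward()` that does not declare it fails at call time.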
Thanks for that; this started occurring with later versions of the transformers package. I will fix this up. Did I fix this on the Hugging Face Hub with aehrc/cxrmate-single-tf? I might have just missed it in this repo...
Hi! Thanks for your contribution.
When I trained with config/train/single_tf.yaml, the following error occurred:
But I trained other models (config/train/multi_tf.yaml) without any problem, and I could not find the reason. I tried switching the transformers version, but it didn't help.