Seun-Ajayi opened 4 months ago
Proposed changes

Implementing CLIP. See `CLIPTextTransformer`: lines 449 and 455 require `_create_4d_causal_attention_mask` and `_prepare_4d_attention_mask`, respectively. You can find the implementation of these functions, and the classes they depend on, in transformers: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_attn_mask_utils.py
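For context, a minimal sketch of what those two helpers do, assuming the usual transformers shape conventions (`(bsz, 1, tgt_len, src_len)` additive masks). The function names mirror the originals, but this is an illustration, not the library code:

```python
import torch

def create_4d_causal_attention_mask(input_shape, dtype, device):
    """Build a (bsz, 1, tgt_len, tgt_len) additive causal mask: -inf above the
    diagonal so each position attends only to itself and earlier positions."""
    bsz, tgt_len = input_shape
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min,
                      dtype=dtype, device=device)
    mask = torch.triu(mask, diagonal=1)  # keep -inf strictly above the diagonal
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len)

def prepare_4d_attention_mask(mask, dtype, tgt_len=None):
    """Expand a (bsz, src_len) padding mask of 1s/0s to (bsz, 1, tgt_len, src_len):
    0.0 where attention is allowed, -inf where the key position is padding."""
    bsz, src_len = mask.shape
    tgt_len = tgt_len if tgt_len is not None else src_len
    expanded = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
    inverted = 1.0 - expanded
    return inverted.masked_fill(inverted.bool(), torch.finfo(dtype).min)
```

In `CLIPTextTransformer`, the causal mask enforces autoregressive text attention, and the padding mask (built from `attention_mask`) keeps padded tokens from being attended to; the two are simply added to the attention scores.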
Types of changes

What types of changes does your code introduce? Put an `x` in the boxes that apply.