Closed RdoubleA closed 1 week ago
## Context
Two-dimensional rotary positional embeddings have been added to vision transformers to improve performance. This was explored in papers such as Eva-02-CLIP, which found that 2D RoPE improved performance and yielded more stable training than 1D RoPE for images. Another novel architecture, FiT (Flexible Vision Transformer for Diffusion Model), similarly employs 2D RoPE for image-resolution generalization. Pixtral, a multimodal LLM, also uses a similar 2D RoPE mechanism, as seen in Hugging Face. A full survey of 2D RoPE can be found in this paper.
Here, we add `VisionRotaryPositionalEmbeddings` as a general component. The forward is identical to `RotaryPositionalEmbeddings`; the major difference is in `build_rope_cache`, where we need to take into account the x and y positions of the patches in the image grid, as defined by `patch_size` and `tile_size`. It simply applies 1D RoPE with half the usual dim to x and y independently and concatenates the results. This is exposed as `use_rope` in the `clip_vision_encoder` builder, which enables 2D RoPE. I also include various fixes for CLIP and additional configurability.

## Changelog
What are the changes made in this PR?

- Added `VisionRotaryPositionalEmbeddings` with tests
- `intermediate_act` is not used in the `clip_vision_encoder`. This is superseded by `activation`, so I just removed it
- `apply_lora_to_output` is not used in `lora_clip_vision_encoder`. There is a TODO to add this to the `CLSProjection`. I removed it so users don't accidentally configure it and get unexpected behavior
- `CLSEmbedding` adds the CLS token to the beginning of the input sequence. I expose this as a configurable parameter to allow the option of adding it to the end of the sequence instead, which some variations of CLIP may do
- `attn_bias` was not being propagated in `lora_clip_vision_encoder`, so I fixed this and changed the default to False to match current behavior

## Test plan
- Built `clip_vision_encoder` with `append_cls_token=True` and manually verified that the CLS token embedding is in the correct position; added a test
- Added tests for `VisionRotaryPositionalEmbeddings` and a parity check against an internal reference implementation
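The 2D cache construction described in the Context section, where 1D RoPE with half the usual dim is applied to the x and y patch coordinates independently and then concatenated, can be sketched roughly as follows. This is an illustrative sketch, not torchtune's actual `build_rope_cache`; the function name, signature, and returned layout are assumptions:

```python
import torch


def build_2d_rope_cache(num_patches_per_side: int, head_dim: int, base: int = 10_000) -> torch.Tensor:
    """Hypothetical sketch of a 2D RoPE cache for a square patch grid."""
    # Each spatial axis gets half of the head dimension.
    dim = head_dim // 2
    # Standard 1D RoPE inverse frequencies for one axis.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

    # x and y coordinates of each patch in the image grid.
    coords = torch.arange(num_patches_per_side).float()
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    y, x = y.flatten(), x.flatten()  # each: [num_patches]

    # Angle per (position, frequency) pair, computed per axis.
    freqs_x = torch.outer(x, inv_freq)  # [num_patches, head_dim // 4]
    freqs_y = torch.outer(y, inv_freq)  # [num_patches, head_dim // 4]

    # Concatenate the two axes so the full head_dim is covered.
    freqs = torch.cat([freqs_x, freqs_y], dim=-1)  # [num_patches, head_dim // 2]
    return torch.stack([freqs.cos(), freqs.sin()], dim=-1)  # [num_patches, head_dim // 2, 2]


cache = build_2d_rope_cache(num_patches_per_side=4, head_dim=8)
print(cache.shape)  # torch.Size([16, 4, 2])
```

The forward then rotates query/key pairs with this cache exactly as in 1D RoPE; only the cache construction differs, which matches the PR's note that `forward` is shared with `RotaryPositionalEmbeddings`.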