mlfoundations / open_clip

An open source implementation of CLIP.

lr setting #787

Closed mactavish91 closed 6 months ago

mactavish91 commented 6 months ago

Hello, may I ask whether the learning rates of the different layers of the ViT and text encoders are the same during pre-training?

rwightman commented 6 months ago

@mactavish91 In default training they are the same. EVA CLIP, for example, was a fork of this repo that varied the LR across layers during phases of their training.
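
For reference, here is a minimal sketch of how per-layer (layer-wise decayed) learning rates can be wired up with PyTorch optimizer parameter groups. This is not what open_clip's default training does; the `resblocks.{i}` pattern follows open_clip's transformer parameter naming, but the `get_layer_id` helper, the decay factor, and the specific LR values are illustrative assumptions.

```python
# Sketch: layer-wise LR decay via optimizer parameter groups (illustrative only;
# open_clip's default training uses a single LR for every parameter).
import re
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms("ViT-B-32")

base_lr = 5e-4
decay = 0.9          # assumed per-layer decay factor (hypothetical)
num_layers = 12      # transformer depth of ViT-B-32 (both towers)

def get_layer_id(name: str) -> int:
    """Map a parameter name to a depth index; deeper blocks get larger ids.
    Assumes open_clip-style names such as 'visual.transformer.resblocks.3.attn...'."""
    m = re.search(r"resblocks\.(\d+)\.", name)
    if m:
        return int(m.group(1)) + 1
    return 0  # embeddings / everything before the first block

# Group parameters by the LR they should receive.
param_groups = {}
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    layer_id = get_layer_id(name)
    # Shallower layers get a smaller LR: base_lr * decay ** (num_layers - layer_id)
    lr = base_lr * decay ** (num_layers - layer_id)
    param_groups.setdefault(lr, []).append(param)

optimizer = torch.optim.AdamW(
    [{"params": params, "lr": lr} for lr, params in param_groups.items()],
    weight_decay=0.2,
)
```

The same depth rule is applied to both the image and text towers here, since both use `resblocks` naming and have the same depth in ViT-B-32; a real setup might treat the two towers separately.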