Closed jweihe closed 2 years ago
Hi,
We use a modified CLIP model, provided in vision_model/clip_model.py,
that returns intermediate features. Installing the CLIP library by following the steps in docs/setup.sh
should prevent the above error:
git clone https://github.com/openai/CLIP.git
cp vision-aided-gan/vision_model/clip_model.py CLIP/clip/model.py
cd CLIP
python setup.py install
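The gist of the modified model.py is that the forward pass can collect each level's activations instead of only the final output. A minimal stdlib-only sketch of that idea (the real file operates on torch tensors; the names Layer, TinyModel, and the layer scales here are illustrative, not the repo's actual API — only the `'conv_multi_level' in self.cv_type` check mirrors the snippet below):

```python
# Sketch of the "return intermediate features" pattern, assuming a
# cv_type flag like the one checked in vision_model/clip_model.py.
# Layer/TinyModel are hypothetical stand-ins for the torch modules.

class Layer:
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, x):
        # stand-in for a conv block: scale every element
        return [v * self.scale for v in x]


class TinyModel:
    def __init__(self, cv_type="conv_multi_level"):
        self.cv_type = cv_type
        self.layers = [Layer(2), Layer(3), Layer(10)]

    def forward(self, x):
        feats = []
        for layer in self.layers:
            x = layer(x)
            if "conv_multi_level" in self.cv_type:
                feats.append(x)  # keep this level's output
        # multi-level mode returns all intermediate features,
        # otherwise only the final output
        return feats if "conv_multi_level" in self.cv_type else x


m = TinyModel()
print(m.forward([1.0]))  # one feature list per level
```

The unmodified CLIP package lacks this multi-level path, which is why the repo's copy of model.py has to overwrite CLIP/clip/model.py before `setup.py install`.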
Let me know if this doesn't work.
I got this error when running train.py, in this part of the model:

class CLIP(torch.nn.Module):
    ...
    if 'conv_multi_level' in self.cv_type:
        image_features = self.model(image.type(self.model.conv1.weight.dtype))