wusize / CLIPSelf

[ICLR2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction
https://arxiv.org/abs/2310.01403

ValueError: assignment destination is read-only #17

Open SuleBai opened 3 months ago

SuleBai commented 3 months ago

Hi, thanks for your great work.

When I try to reproduce the results using the command below,

bash scripts/train_clipself_coco_image_patches_eva_vitl14.sh

I ran into the error ValueError: assignment destination is read-only, which refers to this line: https://github.com/wusize/CLIPSelf/blob/1c7fe9c5c38d800903d4754a89d7a8fcc7977101/src/training/data.py#L370

Is there a bug in this code, or what should I do to avoid this error?

Thanks.

wusize commented 3 months ago

Hi! Please check your version of Pillow. I am using Pillow==9.1.0.
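
For example, you can pin that version with pip:

pip install Pillow==9.1.0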

SuleBai commented 3 months ago

Thanks for your quick reply; this has solved my problem.

I have another question: what should I do if I want to train the OpenAI CLIP model, or the CLIP variants provided by OpenCLIP, using this script?

bash scripts/train_clipself_coco_image_patches_eva_vitl14.sh

I think I should modify the --model, --pretrained, and --embed-path options, am I right? And how should I produce the --embed-path npy file for the model I use?
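
For reference, here is my rough guess at producing the npy file, assuming --embed-path stores normalized class-name text embeddings encoded with open_clip; the prompt template, class list, and output filename below are just placeholders, and the exact format CLIPSelf expects may differ:

import numpy as np
import open_clip
import torch

model_name = "ViT-L-14"       # placeholder model choice
pretrained = "openai"         # placeholder pretrained tag
class_names = ["person", "bicycle", "car"]  # placeholder COCO categories

# Build the text encoder and tokenizer from open_clip.
model, _, _ = open_clip.create_model_and_transforms(model_name, pretrained=pretrained)
tokenizer = open_clip.get_tokenizer(model_name)
model.eval()

with torch.no_grad():
    # Encode one prompt per category and L2-normalize the embeddings.
    tokens = tokenizer([f"a photo of a {c}" for c in class_names])
    text_embeds = model.encode_text(tokens)
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

# Save the (num_classes, embed_dim) matrix for --embed-path.
np.save("coco_text_embeddings.npy", text_embeds.cpu().numpy())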

Thanks again.

JuanJia commented 2 months ago

You can try this: np_old_image = np.asarray(old_image.copy()).copy()
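
A minimal sketch of why the extra copy helps (the exact behaviour depends on your Pillow/NumPy versions):

import numpy as np
from PIL import Image

old_image = Image.new("RGB", (4, 4))

# With some Pillow/NumPy version combinations, np.asarray() on a PIL image
# returns a read-only view of the image buffer, so in-place writes such as
# arr[0, 0] = 255 raise "ValueError: assignment destination is read-only".
read_only = np.asarray(old_image)
print(read_only.flags.writeable)  # may be False

# Copying the image (and the resulting array) yields a writable buffer,
# so the in-place assignments in data.py succeed.
writable = np.asarray(old_image.copy()).copy()
writable[0, 0] = 255

Strictly speaking, the final .copy() on the array should already be enough; copying the image first is just extra safety.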