Closed: PlekhanovaElena closed this issue 2 months ago
Are you using the `default.yaml` from here: https://github.com/microsoft/satclip/blob/main/satclip/configs/default.yaml? You'd need to change `in_channels` to 13; then it should run. If you want to use a pretrained vision encoder, you also need to change `vision_layer` to e.g. `moco_resnet50`.

For more details on the vision encoders and how they are used, see https://github.com/microsoft/satclip/blob/main/satclip/model.py
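A sketch of the two config fields discussed above, as they might look in `default.yaml` after the change (field names are taken from this thread; the comments are my own reading of them):

```yaml
# satclip/configs/default.yaml (sketch; only the fields discussed here)
in_channels: 13               # Sentinel-2 imagery has 13 spectral bands
vision_layer: moco_resnet50   # e.g. a pretrained MoCo ResNet-50 vision encoder
```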
Thanks for your reply! Changing the parameters in `default.yaml` didn't help, but changing them in the `SatCLIPLightningModule` class in `main.py` did. It's running now, thank you!
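A minimal, self-contained illustration of why editing the YAML alone may have no effect: if the training script falls back to a default hardcoded in the module class instead of passing the parsed config through, the class default wins. The class below is a hypothetical stand-in, not the real `SatCLIPLightningModule`:

```python
class DummyLightningModule:
    """Hypothetical stand-in for SatCLIPLightningModule (not the real class)."""

    def __init__(self, in_channels: int = 4):
        # A default hardcoded here silently overrides any edited YAML
        # unless the parsed config value is actually passed in.
        self.in_channels = in_channels


# Config as it would be parsed from an edited default.yaml (sketch):
yaml_config = {"in_channels": 13}

# If the script ignores the config and instantiates with defaults:
model_a = DummyLightningModule()
print(model_a.in_channels)  # -> 4, the mismatch reported in this issue

# The fix: make the value reach the constructor
# (or change the default in the class itself, as done above):
model_b = DummyLightningModule(in_channels=yaml_config["in_channels"])
print(model_b.in_channels)  # -> 13
```

This also explains why the auto-generated `default-latest.yaml` kept showing `in_channels: 4`: it records the values the module actually ran with, not the ones edited on disk.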
Great!
Hi there,
I'm trying to reproduce the pre-training of SatCLIP on the S100 dataset. I downloaded S100 and changed the paths in the config file `default.yaml` and in `s2geo_dataset.py`. Here is the error output I'm trying to resolve:
It seems the images are found, but somehow the CNN expects 4 channels and gets 13, and I'm not sure why. I tried changing the line `in_channels: 4` to `in_channels: 13` in `./satclip/configs/default.yaml`, but this did not help. Also, the file `/data/eplekh/code/satclip/lightning_logs/version_14077673/./configs/default-latest.yaml` that is created while running the script still contains `in_channels: 4`, even though I changed `./satclip/configs/default.yaml`. This might be the reason, but I don't know how to fix it.

A small additional question: is `LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]` okay as output, or does it mean the script does not see the GPU?
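For reference, `CUDA_VISIBLE_DEVICES: [0]` normally means exactly one GPU (device index 0) is visible, which is the expected output on a single-GPU machine. A pure-Python sketch of how such a device list is interpreted (the helper function is my own illustration, not a Lightning API):

```python
def visible_gpu_indices(env_value: str) -> list:
    """Parse a CUDA_VISIBLE_DEVICES-style string into a list of device ids."""
    return [tok for tok in env_value.split(",") if tok.strip() != ""]


# "0" -> one visible device, index 0: the GPU *is* seen.
print(visible_gpu_indices("0"))   # ['0']
# An empty value would mean no GPUs are visible at all.
print(visible_gpu_indices(""))    # []
```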
I would very much appreciate any help. Kind regards, Elena