Closed: YanhaoWu closed this issue 2 years ago.
I also noticed the same behavior on my end; it is caused by the PyTorch Lightning accelerator used for parallel processing. You could try using a different accelerator here.
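For example, a minimal sketch (a toy module, not this repo's actual training script; the exact Trainer arguments depend on your Lightning version): the "ddp" strategy runs one process per GPU, which usually balances memory much better than "dp", which gathers everything on GPU 0.

import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class TinyModule(pl.LightningModule):
    # Toy stand-in for the repo's contrastive module.
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

if __name__ == "__main__":
    loader = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
    # "ddp" spawns one process per GPU instead of replicating from GPU 0 each step.
    trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)
    trainer.fit(TinyModule(), loader)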
Hello, I found that if I change the code
self.model_q = model(in_channels=4 if args.use_intensity else 3, out_channels=latent_features[args.sparse_model]).type(dtype)
self.head_q = model_head(in_channels=latent_features[args.sparse_model], out_channels=args.feature_size).type(dtype)
self.model_k = model(in_channels=4 if args.use_intensity else 3, out_channels=latent_features[args.sparse_model]).type(dtype)
self.head_k = model_head(in_channels=latent_features[args.sparse_model], out_channels=args.feature_size).type(dtype)
to
self.model_q = model(in_channels=4 if args.use_intensity else 3, out_channels=latent_features[args.sparse_model])
self.head_q = model_head(in_channels=latent_features[args.sparse_model], out_channels=args.feature_size)
self.model_k = model(in_channels=4 if args.use_intensity else 3, out_channels=latent_features[args.sparse_model])
self.head_k = model_head(in_channels=latent_features[args.sparse_model], out_channels=args.feature_size)
then the GPU load is balanced, as shown in the attached screenshot.
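My guess at why this helps (not verified against the repo internals): when dtype is torch.cuda.FloatTensor, .type(dtype) materializes the weights on the current default CUDA device, usually cuda:0, so every replica adds extra allocations to GPU 0. Placing each module explicitly per rank avoids that; roughly:

import os
import torch
import torch.nn as nn

# Assumption: LOCAL_RANK is set by the launcher (e.g. torchrun); falls back to 0.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

# Toy stand-in for model_head: .to(device) puts the weights on this process's
# own GPU rather than implicitly on cuda:0.
head_q = nn.Sequential(nn.Linear(96, 128), nn.ReLU(), nn.Linear(128, 128)).to(device)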
I am trying to train the model with the code:
but I found that the GPUs have different loads, as shown in the attached screenshot.
This prevents me from setting a bigger batch_size to speed up my training. Is there any way to solve this?
I am looking forward to your reply! Thank you!
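For reference, the imbalance can also be seen numerically from Python (a small snippet, nothing repo-specific):

import torch

# Print allocated memory per visible GPU; in the unbalanced case,
# cuda:0 sits far above the others.
for i in range(torch.cuda.device_count()):
    mib = torch.cuda.memory_allocated(i) / 1024 ** 2
    print(f"cuda:{i}: {mib:.0f} MiB allocated")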