/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:617: UserWarning: Checkpoint directory logs/anomaly-checkpoints/checkpoints exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
Validation sanity check: 0it [00:00, ?it/s]/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:440: UserWarning: Your `val_dataloader` has `shuffle=True`, it is strongly recommended that you turn this off for val/test/predict dataloaders.
rank_zero_warn(
/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:110: UserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
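Both dataloader warnings above can usually be silenced where the validation `DataLoader` is constructed. A minimal sketch (the dataset, batch size, and worker count here are placeholders, not the project's actual values):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the project's validation set.
val_ds = TensorDataset(torch.zeros(8, 3))

# shuffle=False avoids the "val_dataloader has shuffle=True" warning;
# num_workers > 0 avoids the "does not have many workers" bottleneck warning.
val_loader = DataLoader(val_ds, batch_size=4, shuffle=False, num_workers=2)

for (batch,) in val_loader:
    print(batch.shape)  # torch.Size([4, 3]), printed twice (8 samples / batch of 4)
```

These are only warnings, not the cause of the crash below.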
Validation sanity check: 0%| | 0/2 [00:00<?, ?it/s]huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
  - Avoid using `tokenizers` before the fork if possible
  - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
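The tokenizers warning can be suppressed exactly as the message suggests, by setting the environment variable before any tokenizer is used in the process. A minimal sketch:

```python
import os

# Must run before the tokenizers library does any work in this process
# (e.g. at the very top of main.py).
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```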
/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the batch_size from an ambiguous collection. The batch size we found is 28. To avoid any miscalculations, use self.log(..., batch_size=batch_size).
warning_cache.warn(
Summoning checkpoint.
[rank0]: Traceback (most recent call last):
[rank0]:   File "main.py", line 874, in <module>
[rank0]:     trainer.fit(model, data)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in fit
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
[rank0]:     return trainer_fn(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in _fit_impl
[rank0]:     self._run(model, ckpt_path=ckpt_path)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1193, in _run
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1272, in _dispatch
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
[rank0]:     self._results = trainer.run_stage()
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1282, in run_stage
[rank0]:     return self._run_train()
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1304, in _run_train
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1368, in _run_sanity_check
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
[rank0]:     self.advance(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 109, in advance
[rank0]:     dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_batches, self.num_dataloaders)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
[rank0]:     self.advance(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 123, in advance
[rank0]:     output = self._evaluation_step(batch, batch_idx, dataloader_idx)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 215, in _evaluation_step
[rank0]:     output = self.trainer.accelerator.validation_step(step_kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 236, in validation_step
[rank0]:     return self.training_type_plugin.validation_step(*step_kwargs.values())
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 446, in validation_step
[rank0]:     return self.model(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1593, in forward
[rank0]:     else self._run_ddp_forward(*inputs, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1411, in _run_ddp_forward
[rank0]:     return self.module(*inputs, **kwargs)  # type: ignore[index]
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 92, in forward
[rank0]:     output = self.module.validation_step(*inputs, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/models/diffusion/ddpm.py", line 368, in validation_step
[rank0]:     _, loss_dict_no_ema = self.shared_step(batch)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/models/diffusion/ddpm.py", line 1018, in shared_step
[rank0]:     loss = self(x, c, **total_dict)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/models/diffusion/ddpm.py", line 1026, in forward
[rank0]:     c, _ = self.get_learned_conditioning(c, x=mask_cond, name=name)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/models/diffusion/ddpm.py", line 611, in get_learned_conditioning
[rank0]:     c, position = self.cond_stage_model.encode(c, cond_img=x, embedding_manager=self.embedding_manager, name=name)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/modules/encoders/modules.py", line 130, in encode
[rank0]:     return self(text, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/modules/encoders/modules.py", line 122, in forward
[rank0]:     z, position = self.transformer(tokens, return_embeddings=True, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/modules/x_transformer.py", line 615, in forward
[rank0]:     x, position = embedding_manager(x, embedded_x, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/modules/embedding_manager2.py", line 138, in forward
[rank0]:     placeholder_embedding2 = self.spatial_encoder_model(img)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/Anomalydiffusion/anomalydiffusion/ldm/models/psp_encoder/encoders/psp_encoders.py", line 81, in forward
[rank0]:     x = self.input_layer(x)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
[rank0]:     input = module(input)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 460, in forward
[rank0]:     return self._conv_forward(input, self.weight, self.bias)
[rank0]:   File "/home/dxy/anaconda3/envs/Anomalydiffusion/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
[rank0]:     return F.conv2d(input, weight, bias, self.stride,
[rank0]: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 256, 256, 256] to have 3 channels, but got 256 channels instead
Hello, what is the cause of this error, and how should I fix it?
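For context, the final RuntimeError can be reproduced in isolation. The layer shapes below are taken directly from the error message; everything else is a hypothetical stand-in for the tensor that reaches `self.input_layer` in psp_encoders.py:

```python
import torch
import torch.nn as nn

# "weight of size [64, 3, 3, 3]" means: 64 output channels, 3 input channels, 3x3 kernel.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

# The failing input from the log: dim 1 (the channel dim in NCHW) is 256, not 3.
bad = torch.randn(1, 256, 256, 256)
try:
    conv(bad)
except RuntimeError as e:
    print(e)  # "... expected input[1, 256, 256, 256] to have 3 channels, but got 256 ..."

# What the layer expects: a 3-channel NCHW tensor.
ok = torch.randn(1, 3, 256, 256)
print(conv(ok).shape)  # torch.Size([1, 64, 254, 254])
```

If the tensor were channels-last, e.g. [1, 256, 256, 3], a `x.permute(0, 3, 1, 2)` before the encoder would fix it; here every non-batch dim is 256, so the `img` passed to `spatial_encoder_model` in embedding_manager2.py may not be an RGB image at all, which is worth checking first.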