Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

attribute 'model' removed from hparams because it cannot be pickled #19573

Closed · VisionH222 closed this 7 months ago

VisionH222 commented 7 months ago

Bug description

```
D:\Anaconda\envs\VH\lib\site-packages\pytorch_lightning\utilities\parsing.py:104: UserWarning: attribute 'model' removed from hparams because it cannot be pickled
  rank_zero_warn(f"attribute '{k}' removed from hparams because it cannot be pickled")
D:\Anaconda\envs\VH\lib\site-packages\pytorch_lightning\utilities\parsing.py:104: UserWarning: attribute 'loss_fn' removed from hparams because it cannot be pickled
  rank_zero_warn(f"attribute '{k}' removed from hparams because it cannot be pickled")
```

What version are you seeing the problem on?

v1.8

How to reproduce the bug

No response

Error messages and logs

# Error messages and logs here please

Environment

Current environment

```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```

More info

No response

awaelchli commented 7 months ago

@VisionH222 I cannot tell exactly what is happening because you didn't post the code, but this is just a warning: it means you passed an unpicklable argument to your LightningModule, which simply can't be saved by save_hyperparameters(). If you want to avoid the warning, set the ignore parameter:

```python
class MyLightningModule(LightningModule):
    def __init__(self, loss_fn):
        super().__init__()
        self.save_hyperparameters(ignore=["loss_fn"])
```

I sent a PR to improve the warning with this hint.
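For context, the warning boils down to whether `pickle.dumps` succeeds on each saved hyperparameter. A minimal stdlib illustration of that behavior (a sketch of the general mechanism, not Lightning's exact internal check):

```python
import pickle

def picklable(obj):
    """Return True if the object survives pickle.dumps, False otherwise."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# Plain scalar/dict hyperparameters pickle fine and are kept in hparams.
print(picklable({"learning_rate": 1e-4, "num_classes": 2}))  # True
# A lambda (e.g. used as a loss_fn) cannot be pickled, so it would be
# dropped from hparams with exactly the warning shown above.
print(picklable(lambda x: x))  # False
```

Anything in the second category either needs to be excluded via `ignore=[...]` or replaced with a picklable equivalent.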

VisionH222 commented 7 months ago

Sorry, I didn't provide specific information when asking the question. The specific code is as follows:

```python
class LitLungTumorSegModel(pl.LightningModule):
    def __init__(self, model, loss_fn, num_classes=2, learning_rate=1e-4, lr_scheduler_patience=5,
                 lr_scheduler_threshold=1e-5):
        super().__init__()
        self.save_hyperparameters()
        self.model = model
        self.loss_fn = loss_fn
        self.train_iou = torchmetrics.IoU(num_classes)
        self.validation_iou = torchmetrics.IoU(num_classes)
```

```python
segnet = SegNet(encoder_channels, decoder_channels, input_args.num_classes, input_args.warm_start)
loss_fn = torch.nn.CrossEntropyLoss(weight=torch.Tensor([neg_weight, pos_weight]))
model = LitLungTumorSegModel(segnet, loss_fn, input_args.num_classes, input_args.learning_rate,
                             input_args.lr_scheduler_patience, input_args.lr_scheduler_threshold)
```

But if I ignore the parameters model and loss_fn, I can't load from the checkpoint when reusing my trained model.


awaelchli commented 7 months ago

Yes, if you want to load from the checkpoint, you will have to pass the missing argument:

```python
LitLungTumorSegModel.load_from_checkpoint(path, loss_fn=...)
```

If you want to load it for inference where you don't need the loss function, maybe make your loss_fn argument optional.
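The "optional argument" idea above can be sketched as follows. This is a hedged, self-contained sketch: a plain class stands in for `pl.LightningModule` so the snippet runs on its own, and the string placeholders stand in for the real `SegNet` and loss objects.

```python
class LitLungTumorSegModelSketch:
    """Stand-in for the LightningModule; shows only the constructor pattern."""

    def __init__(self, model, loss_fn=None, num_classes=2):
        # loss_fn defaults to None so the class can be re-instantiated
        # (e.g. by load_from_checkpoint) without the unpicklable loss object.
        self.model = model
        self.loss_fn = loss_fn  # None is fine for inference-only use

# Training: pass a real loss. Inference: omit it entirely.
train_module = LitLungTumorSegModelSketch(model="segnet", loss_fn="cross_entropy")
infer_module = LitLungTumorSegModelSketch(model="segnet")
print(infer_module.loss_fn)  # None
```

With this pattern, `load_from_checkpoint` no longer needs a `loss_fn=` argument when the module is loaded purely for inference.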

VisionH222 commented 7 months ago

Oh, I got it, thanks! I want to load the trained model for inference, so I can make the loss_fn argument optional. But what about the 'model' hparam? I have tried to pass the missing argument, but it doesn't work. Can you give me some advice? Thank you very much! The specific code and error information are as follows:

```python
encoder_channels = (3, 64, 128, 256, 512, 512)
decoder_channels = (512, 512, 256, 128, 64, 64)
segnet = SegNet(encoder_channels, decoder_channels, 2, True)
loss_fn = torch.nn.CrossEntropyLoss(weight=torch.Tensor([0.1046, 0.8954]))
model = LitLungTumorSegModel(segnet, loss_fn, 2)
model_t = LitLungTumorSegModel.load_from_checkpoint(path_to_ckpt, model=model, loss_fn=loss_fn)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_t.eval().to(device)
```

The error:

  File "D:\Anaconda\envs\VH\lib\site-packages\pytorch_lightning\core\saving.py", line 156, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
  File "D:\Anaconda\envs\VH\lib\site-packages\pytorch_lightning\core\saving.py", line 204, in _load_model_state
    keys = model.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "D:\Anaconda\envs\VH\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LitLungTumorSegModel:
Missing key(s) in state_dict: "model.model.encoder.encoder_blocks.0.first_block.block.0.weight", "model.model.encoder.encoder_blocks.0.first_block.block.0.bias", "model.model.encoder.encoder_blocks.0.first_block.block.1.weight", "model.model.encoder.encoder_blocks.0.first_block.block.1.bias", "model.model.encoder.encoder_blocks.0.first_block.block.1.running_mean", "model.model.encoder.encoder_blocks.0.first_block.block.1.running_var", 
"model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.0.weight", "model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.0.bias", "model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.weight", "model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.bias", "model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.running_mean", "model.model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.running_var", "model.model.encoder.encoder_blocks.1.first_block.block.0.weight", "model.model.encoder.encoder_blocks.1.first_block.block.0.bias", "model.model.encoder.encoder_blocks.1.first_block.block.1.weight", "model.model.encoder.encoder_blocks.1.first_block.block.1.bias", "model.model.encoder.encoder_blocks.1.first_block.block.1.running_mean", "model.model.encoder.encoder_blocks.1.first_block.block.1.running_var", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.0.weight", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.0.bias", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.weight", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.bias", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.running_mean", "model.model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.running_var", "model.model.encoder.encoder_blocks.2.first_block.block.0.weight", "model.model.encoder.encoder_blocks.2.first_block.block.0.bias", "model.model.encoder.encoder_blocks.2.first_block.block.1.weight", "model.model.encoder.encoder_blocks.2.first_block.block.1.bias", "model.model.encoder.encoder_blocks.2.first_block.block.1.running_mean", "model.model.encoder.encoder_blocks.2.first_block.block.1.running_var", "model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.0.weight", "model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.0.bias", "model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.weight", 
"model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.bias", "model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.running_mean", "model.model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.running_var", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.0.weight", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.0.bias", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.weight", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.bias", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.running_mean", "model.model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.running_var", "model.model.encoder.encoder_blocks.3.first_block.block.0.weight", "model.model.encoder.encoder_blocks.3.first_block.block.0.bias", "model.model.encoder.encoder_blocks.3.first_block.block.1.weight", "model.model.encoder.encoder_blocks.3.first_block.block.1.bias", "model.model.encoder.encoder_blocks.3.first_block.block.1.running_mean", "model.model.encoder.encoder_blocks.3.first_block.block.1.running_var", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.0.weight", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.0.bias", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.weight", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.bias", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.running_mean", "model.model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.running_var", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.0.weight", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.0.bias", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.weight", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.bias", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.running_mean", "model.model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.running_var", 
"model.model.encoder.encoder_blocks.4.first_block.block.0.weight", "model.model.encoder.encoder_blocks.4.first_block.block.0.bias", "model.model.encoder.encoder_blocks.4.first_block.block.1.weight", "model.model.encoder.encoder_blocks.4.first_block.block.1.bias", "model.model.encoder.encoder_blocks.4.first_block.block.1.running_mean", "model.model.encoder.encoder_blocks.4.first_block.block.1.running_var", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.0.weight", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.0.bias", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.weight", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.bias", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.running_mean", "model.model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.running_var", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.0.weight", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.0.bias", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.weight", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.bias", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.running_mean", "model.model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.running_var", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.0.weight", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.0.bias", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.1.weight", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.1.bias", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.1.running_mean", "model.model.decoder.decoder_blocks.0.first_blocks.0.block.1.running_var", "model.model.decoder.decoder_blocks.0.first_blocks.1.block.0.weight", "model.model.decoder.decoder_blocks.0.first_blocks.1.block.0.bias", "model.model.decoder.decoder_blocks.0.first_blocks.1.block.1.weight", 
"model.model.decoder.decoder_blocks.0.first_blocks.1.block.1.bias", "model.model.decoder.decoder_blocks.0.first_blocks.1.block.1.running_mean", "model.model.decoder.decoder_blocks.0.first_blocks.1.block.1.running_var", "model.model.decoder.decoder_blocks.0.last_block.block.0.weight", "model.model.decoder.decoder_blocks.0.last_block.block.0.bias", "model.model.decoder.decoder_blocks.0.last_block.block.1.weight", "model.model.decoder.decoder_blocks.0.last_block.block.1.bias", "model.model.decoder.decoder_blocks.0.last_block.block.1.running_mean", "model.model.decoder.decoder_blocks.0.last_block.block.1.running_var", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.0.weight", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.0.bias", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.1.weight", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.1.bias", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.1.running_mean", "model.model.decoder.decoder_blocks.1.first_blocks.0.block.1.running_var", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.0.weight", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.0.bias", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.1.weight", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.1.bias", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.1.running_mean", "model.model.decoder.decoder_blocks.1.first_blocks.1.block.1.running_var", "model.model.decoder.decoder_blocks.1.last_block.block.0.weight", "model.model.decoder.decoder_blocks.1.last_block.block.0.bias", "model.model.decoder.decoder_blocks.1.last_block.block.1.weight", "model.model.decoder.decoder_blocks.1.last_block.block.1.bias", "model.model.decoder.decoder_blocks.1.last_block.block.1.running_mean", "model.model.decoder.decoder_blocks.1.last_block.block.1.running_var", "model.model.decoder.decoder_blocks.2.first_blocks.0.block.0.weight", 
"model.model.decoder.decoder_blocks.2.first_blocks.0.block.0.bias", "model.model.decoder.decoder_blocks.2.first_blocks.0.block.1.weight", "model.model.decoder.decoder_blocks.2.first_blocks.0.block.1.bias", "model.model.decoder.decoder_blocks.2.first_blocks.0.block.1.running_mean", "model.model.decoder.decoder_blocks.2.first_blocks.0.block.1.running_var", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.0.weight", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.0.bias", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.1.weight", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.1.bias", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.1.running_mean", "model.model.decoder.decoder_blocks.2.first_blocks.1.block.1.running_var", "model.model.decoder.decoder_blocks.2.last_block.block.0.weight", "model.model.decoder.decoder_blocks.2.last_block.block.0.bias", "model.model.decoder.decoder_blocks.2.last_block.block.1.weight", "model.model.decoder.decoder_blocks.2.last_block.block.1.bias", "model.model.decoder.decoder_blocks.2.last_block.block.1.running_mean", "model.model.decoder.decoder_blocks.2.last_block.block.1.running_var", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.0.weight", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.0.bias", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.1.weight", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.1.bias", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.1.running_mean", "model.model.decoder.decoder_blocks.3.first_blocks.0.block.1.running_var", "model.model.decoder.decoder_blocks.3.last_block.block.0.weight", "model.model.decoder.decoder_blocks.3.last_block.block.0.bias", "model.model.decoder.decoder_blocks.3.last_block.block.1.weight", "model.model.decoder.decoder_blocks.3.last_block.block.1.bias", "model.model.decoder.decoder_blocks.3.last_block.block.1.running_mean", 
"model.model.decoder.decoder_blocks.3.last_block.block.1.running_var", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.0.weight", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.0.bias", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.1.weight", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.1.bias", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.1.running_mean", "model.model.decoder.decoder_blocks.4.first_blocks.0.block.1.running_var", "model.model.decoder.decoder_blocks.4.last_block.block.0.weight", "model.model.decoder.decoder_blocks.4.last_block.block.0.bias", "model.model.decoder.decoder_blocks.4.last_block.block.1.weight", "model.model.decoder.decoder_blocks.4.last_block.block.1.bias", "model.model.decoder.decoder_blocks.4.last_block.block.1.running_mean", "model.model.decoder.decoder_blocks.4.last_block.block.1.running_var", "model.model.last.block.0.weight", "model.model.last.block.0.bias", "model.model.last.block.1.weight", "model.model.last.block.1.bias", "model.model.last.block.1.running_mean", "model.model.last.block.1.running_var", "model.model.output.0.weight", "model.model.output.0.bias", "model.model.output.1.weight", "model.model.output.1.bias", "model.model.output.1.running_mean", "model.model.output.1.running_var", "model.loss_fn.weight". 
Unexpected key(s) in state_dict: "model.encoder.encoder_blocks.0.first_block.block.0.weight", "model.encoder.encoder_blocks.0.first_block.block.0.bias", "model.encoder.encoder_blocks.0.first_block.block.1.weight", "model.encoder.encoder_blocks.0.first_block.block.1.bias", "model.encoder.encoder_blocks.0.first_block.block.1.running_mean", "model.encoder.encoder_blocks.0.first_block.block.1.running_var", "model.encoder.encoder_blocks.0.first_block.block.1.num_batches_tracked", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.0.weight", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.0.bias", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.weight", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.bias", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.running_mean", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.running_var", "model.encoder.encoder_blocks.0.remaining_blocks.0.block.1.num_batches_tracked", "model.encoder.encoder_blocks.1.first_block.block.0.weight", "model.encoder.encoder_blocks.1.first_block.block.0.bias", "model.encoder.encoder_blocks.1.first_block.block.1.weight", "model.encoder.encoder_blocks.1.first_block.block.1.bias", "model.encoder.encoder_blocks.1.first_block.block.1.running_mean", "model.encoder.encoder_blocks.1.first_block.block.1.running_var", "model.encoder.encoder_blocks.1.first_block.block.1.num_batches_tracked", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.0.weight", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.0.bias", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.weight", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.bias", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.running_mean", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.running_var", "model.encoder.encoder_blocks.1.remaining_blocks.0.block.1.num_batches_tracked", "model.encoder.encoder_blocks.2.first_block.block.0.weight", 
"model.encoder.encoder_blocks.2.first_block.block.0.bias", "model.encoder.encoder_blocks.2.first_block.block.1.weight", "model.encoder.encoder_blocks.2.first_block.block.1.bias", "model.encoder.encoder_blocks.2.first_block.block.1.running_mean", "model.encoder.encoder_blocks.2.first_block.block.1.running_var", "model.encoder.encoder_blocks.2.first_block.block.1.num_batches_tracked", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.0.weight", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.0.bias", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.weight", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.bias", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.running_mean", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.running_var", "model.encoder.encoder_blocks.2.remaining_blocks.0.block.1.num_batches_tracked", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.0.weight", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.0.bias", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.weight", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.bias", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.running_mean", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.running_var", "model.encoder.encoder_blocks.2.remaining_blocks.1.block.1.num_batches_tracked", "model.encoder.encoder_blocks.3.first_block.block.0.weight", "model.encoder.encoder_blocks.3.first_block.block.0.bias", "model.encoder.encoder_blocks.3.first_block.block.1.weight", "model.encoder.encoder_blocks.3.first_block.block.1.bias", "model.encoder.encoder_blocks.3.first_block.block.1.running_mean", "model.encoder.encoder_blocks.3.first_block.block.1.running_var", "model.encoder.encoder_blocks.3.first_block.block.1.num_batches_tracked", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.0.weight", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.0.bias", 
"model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.weight", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.bias", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.running_mean", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.running_var", "model.encoder.encoder_blocks.3.remaining_blocks.0.block.1.num_batches_tracked", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.0.weight", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.0.bias", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.weight", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.bias", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.running_mean", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.running_var", "model.encoder.encoder_blocks.3.remaining_blocks.1.block.1.num_batches_tracked", "model.encoder.encoder_blocks.4.first_block.block.0.weight", "model.encoder.encoder_blocks.4.first_block.block.0.bias", "model.encoder.encoder_blocks.4.first_block.block.1.weight", "model.encoder.encoder_blocks.4.first_block.block.1.bias", "model.encoder.encoder_blocks.4.first_block.block.1.running_mean", "model.encoder.encoder_blocks.4.first_block.block.1.running_var", "model.encoder.encoder_blocks.4.first_block.block.1.num_batches_tracked", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.0.weight", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.0.bias", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.weight", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.bias", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.running_mean", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.running_var", "model.encoder.encoder_blocks.4.remaining_blocks.0.block.1.num_batches_tracked", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.0.weight", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.0.bias", 
"model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.weight", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.bias", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.running_mean", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.running_var", "model.encoder.encoder_blocks.4.remaining_blocks.1.block.1.num_batches_tracked", "model.decoder.decoder_blocks.0.first_blocks.0.block.0.weight", "model.decoder.decoder_blocks.0.first_blocks.0.block.0.bias", "model.decoder.decoder_blocks.0.first_blocks.0.block.1.weight", "model.decoder.decoder_blocks.0.first_blocks.0.block.1.bias", "model.decoder.decoder_blocks.0.first_blocks.0.block.1.running_mean", "model.decoder.decoder_blocks.0.first_blocks.0.block.1.running_var", "model.decoder.decoder_blocks.0.first_blocks.0.block.1.num_batches_tracked", "model.decoder.decoder_blocks.0.first_blocks.1.block.0.weight", "model.decoder.decoder_blocks.0.first_blocks.1.block.0.bias", "model.decoder.decoder_blocks.0.first_blocks.1.block.1.weight", "model.decoder.decoder_blocks.0.first_blocks.1.block.1.bias", "model.decoder.decoder_blocks.0.first_blocks.1.block.1.running_mean", "model.decoder.decoder_blocks.0.first_blocks.1.block.1.running_var", "model.decoder.decoder_blocks.0.first_blocks.1.block.1.num_batches_tracked", "model.decoder.decoder_blocks.0.last_block.block.0.weight", "model.decoder.decoder_blocks.0.last_block.block.0.bias", "model.decoder.decoder_blocks.0.last_block.block.1.weight", "model.decoder.decoder_blocks.0.last_block.block.1.bias", "model.decoder.decoder_blocks.0.last_block.block.1.running_mean", "model.decoder.decoder_blocks.0.last_block.block.1.running_var", "model.decoder.decoder_blocks.0.last_block.block.1.num_batches_tracked", "model.decoder.decoder_blocks.1.first_blocks.0.block.0.weight", "model.decoder.decoder_blocks.1.first_blocks.0.block.0.bias", "model.decoder.decoder_blocks.1.first_blocks.0.block.1.weight", "model.decoder.decoder_blocks.1.first_blocks.0.block.1.bias", 
"model.decoder.decoder_blocks.1.first_blocks.0.block.1.running_mean", "model.decoder.decoder_blocks.1.first_blocks.0.block.1.running_var", "model.decoder.decoder_blocks.1.first_blocks.0.block.1.num_batches_tracked", "model.decoder.decoder_blocks.1.first_blocks.1.block.0.weight", "model.decoder.decoder_blocks.1.first_blocks.1.block.0.bias", "model.decoder.decoder_blocks.1.first_blocks.1.block.1.weight", "model.decoder.decoder_blocks.1.first_blocks.1.block.1.bias", "model.decoder.decoder_blocks.1.first_blocks.1.block.1.running_mean", "model.decoder.decoder_blocks.1.first_blocks.1.block.1.running_var", "model.decoder.decoder_blocks.1.first_blocks.1.block.1.num_batches_tracked", "model.decoder.decoder_blocks.1.last_block.block.0.weight", "model.decoder.decoder_blocks.1.last_block.block.0.bias", "model.decoder.decoder_blocks.1.last_block.block.1.weight", "model.decoder.decoder_blocks.1.last_block.block.1.bias", "model.decoder.decoder_blocks.1.last_block.block.1.running_mean", "model.decoder.decoder_blocks.1.last_block.block.1.running_var", "model.decoder.decoder_blocks.1.last_block.block.1.num_batches_tracked", "model.decoder.decoder_blocks.2.first_blocks.0.block.0.weight", "model.decoder.decoder_blocks.2.first_blocks.0.block.0.bias", "model.decoder.decoder_blocks.2.first_blocks.0.block.1.weight", "model.decoder.decoder_blocks.2.first_blocks.0.block.1.bias", "model.decoder.decoder_blocks.2.first_blocks.0.block.1.running_mean", "model.decoder.decoder_blocks.2.first_blocks.0.block.1.running_var", "model.decoder.decoder_blocks.2.first_blocks.0.block.1.num_batches_tracked", "model.decoder.decoder_blocks.2.first_blocks.1.block.0.weight", "model.decoder.decoder_blocks.2.first_blocks.1.block.0.bias", "model.decoder.decoder_blocks.2.first_blocks.1.block.1.weight", "model.decoder.decoder_blocks.2.first_blocks.1.block.1.bias", "model.decoder.decoder_blocks.2.first_blocks.1.block.1.running_mean", "model.decoder.decoder_blocks.2.first_blocks.1.block.1.running_var", 
"model.decoder.decoder_blocks.2.first_blocks.1.block.1.num_batches_tracked", "model.decoder.decoder_blocks.2.last_block.block.0.weight", "model.decoder.decoder_blocks.2.last_block.block.0.bias", "model.decoder.decoder_blocks.2.last_block.block.1.weight", "model.decoder.decoder_blocks.2.last_block.block.1.bias", "model.decoder.decoder_blocks.2.last_block.block.1.running_mean", "model.decoder.decoder_blocks.2.last_block.block.1.running_var", "model.decoder.decoder_blocks.2.last_block.block.1.num_batches_tracked", "model.decoder.decoder_blocks.3.first_blocks.0.block.0.weight", "model.decoder.decoder_blocks.3.first_blocks.0.block.0.bias", "model.decoder.decoder_blocks.3.first_blocks.0.block.1.weight", "model.decoder.decoder_blocks.3.first_blocks.0.block.1.bias", "model.decoder.decoder_blocks.3.first_blocks.0.block.1.running_mean", "model.decoder.decoder_blocks.3.first_blocks.0.block.1.running_var", "model.decoder.decoder_blocks.3.first_blocks.0.block.1.num_batches_tracked", "model.decoder.decoder_blocks.3.last_block.block.0.weight", "model.decoder.decoder_blocks.3.last_block.block.0.bias", "model.decoder.decoder_blocks.3.last_block.block.1.weight", "model.decoder.decoder_blocks.3.last_block.block.1.bias", "model.decoder.decoder_blocks.3.last_block.block.1.running_mean", "model.decoder.decoder_blocks.3.last_block.block.1.running_var", "model.decoder.decoder_blocks.3.last_block.block.1.num_batches_tracked", "model.decoder.decoder_blocks.4.first_blocks.0.block.0.weight", "model.decoder.decoder_blocks.4.first_blocks.0.block.0.bias", "model.decoder.decoder_blocks.4.first_blocks.0.block.1.weight", "model.decoder.decoder_blocks.4.first_blocks.0.block.1.bias", "model.decoder.decoder_blocks.4.first_blocks.0.block.1.running_mean", "model.decoder.decoder_blocks.4.first_blocks.0.block.1.running_var", "model.decoder.decoder_blocks.4.first_blocks.0.block.1.num_batches_tracked", "model.decoder.decoder_blocks.4.last_block.block.0.weight", 
"model.decoder.decoder_blocks.4.last_block.block.0.bias", "model.decoder.decoder_blocks.4.last_block.block.1.weight", "model.decoder.decoder_blocks.4.last_block.block.1.bias", "model.decoder.decoder_blocks.4.last_block.block.1.running_mean", "model.decoder.decoder_blocks.4.last_block.block.1.running_var", "model.decoder.decoder_blocks.4.last_block.block.1.num_batches_tracked", "model.last.block.0.weight", "model.last.block.0.bias", "model.last.block.1.weight", "model.last.block.1.bias", "model.last.block.1.running_mean", "model.last.block.1.running_var", "model.last.block.1.num_batches_tracked", "model.output.0.weight", "model.output.0.bias", "model.output.1.weight", "model.output.1.bias", "model.output.1.running_mean", "model.output.1.running_var", "model.output.1.num_batches_tracked".
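One hedged reading of the traceback: the missing keys all carry an extra "model." prefix ("model.model.encoder...") relative to the checkpoint's keys ("model.encoder..."), which is exactly what happens when a module is nested one level deeper at load time than it was at save time, for example when the LightningModule wrapper itself is passed as `model=` instead of the bare SegNet. A tiny stdlib simulation of that prefix mismatch:

```python
# Keys as they were saved, with the submodule registered as self.model.
saved_keys = {"model.encoder.weight", "model.encoder.bias"}

# Nesting the whole saved module one level deeper (wrapper passed as `model`)
# prepends another "model." to every expected key.
expected_keys = {"model." + k for k in saved_keys}

# None of the expected keys match the checkpoint, so all are "missing".
missing = expected_keys - saved_keys
print(sorted(missing))  # ['model.model.encoder.bias', 'model.model.encoder.weight']
```

If that reading is right, passing the inner network (the SegNet instance) as the `model=` argument, rather than a LitLungTumorSegModel, would keep the prefixes aligned.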
