Hi @rallen10, I'm guessing this relates to variables not supported by PL's `save_hyperparameters()`. Can you provide the configs for both `pl_module` and `data_module`?
This doesn't look like a hydra-zen or Hydra issue -- I expect that if you initialize your PL module by hand, you will get the exact same error.
If you followed our PL How-To guide, then what is likely going on is that you passed your optimizer into the `__init__` of your `LightningModule`. `save_hyperparameters()` will attempt to save everything that was passed to your `__init__`, but it can only handle primitive data types (like `int` and `bool`).
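For concreteness, a minimal sketch of the pattern being described (the class and argument names here are hypothetical, not from your code):

```python
import torch
import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def __init__(self, model: torch.nn.Module, optim: torch.optim.Optimizer):
        super().__init__()
        self.model = model
        self.optim = optim
        # save_hyperparameters() tries to record *every* __init__ argument,
        # including the nn.Module and the optimizer -- non-primitive objects
        # that it cannot serialize. This is where the error would originate.
        self.save_hyperparameters()
```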
I believe you can circumvent this by specifying the specific names of the parameters you want to save, e.g. `self.save_hyperparameters("layer_1_dim", "learning_rate")`. If you exclude the names of non-primitive fields, then this error should go away.
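E.g., continuing the hypothetical module from above, something like:

```python
import torch
import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def __init__(
        self,
        model: torch.nn.Module,        # non-primitive: excluded from saving
        optim: torch.optim.Optimizer,  # non-primitive: excluded from saving
        layer_1_dim: int = 128,
        learning_rate: float = 1e-3,
    ):
        super().__init__()
        self.model = model
        self.optim = optim
        # Name only the primitive arguments; `model` and `optim` are skipped,
        # so the serialization error should not occur.
        self.save_hyperparameters("layer_1_dim", "learning_rate")
```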
However, given that you are using hydra-zen, there isn't really a need to use `save_hyperparameters()` anymore 😄 (Edit: @jgbos pointed out that `save_hyperparameters()` can have some utility for logging parameters to TensorBoard). As you can see here, you can track all of your hyperparameters from your Hydra configs and load your `LightningModule` from the yaml that gets serialized whenever you launch your job.
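As a sketch of that workflow (the output path and the `pl_module` field name are assumptions for illustration):

```python
from hydra_zen import instantiate, load_from_yaml

# Hydra serializes the composed config for every run; by default it lands
# under the run's output directory at .hydra/config.yaml (path below is
# hypothetical).
cfg = load_from_yaml("outputs/2022-01-01/12-00-00/.hydra/config.yaml")

# Re-instantiate the LightningModule directly from that record -- no
# save_hyperparameters() needed to track what was run.
pl_module = instantiate(cfg.pl_module)
```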
Yeah, after I posted this I realized that my `__init__()` args are a `torch.nn.Module` object and the optimizer, neither of which is a primitive type. This identifies my problem, so I will close this issue.
When I try to add `save_hyperparameters()` to my Lightning module's init, I get the following error when trying to run hydra-zen. If I remove the `save_hyperparameters()` call, everything works fine.