Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

How to configure WandbLogger with LightningCLI? #20064

Open real-junjiezhang opened 4 months ago

real-junjiezhang commented 4 months ago

Bug description

I want to use wandb to track how the gradients change during training. However, I don't know how to write this with LightningCLI. Can someone show me a template for how to use wandb to track gradients in config.yaml? Any support would be highly appreciated. Thanks in advance.

What version are you seeing the problem on?

master

How to reproduce the bug

No response

Error messages and logs

# Error messages and logs here please

Environment

Current environment

```
#- PyTorch Lightning Version (e.g., 1.5.0):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning (`conda`, `pip`, source):
```

More info

No response

cc @carmocca @mauvilsa @awaelchli

awaelchli commented 4 months ago

I think you should be able to configure any object (be it a logger, a callback, etc.) in the config file for LightningCLI/jsonargparse like so:

```yaml
trainer:
  logger:
    - class_path: lightning.pytorch.loggers.wandb.WandbLogger
      init_args:
        project: test-project
        # put your wandb arguments here
```

(adapted from the callbacks example in the CLI docs)
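
If you also want wandb's own gradient histograms, one option (a sketch, not something from this thread; the choice of the `on_fit_start` hook and the class name `MyModel` are assumptions) is to call the logger's `watch` method from your LightningModule once training starts. `WandbLogger.watch` wraps `wandb.watch`:

```python
import lightning.pytorch as pl
from lightning.pytorch.loggers import WandbLogger


class MyModel(pl.LightningModule):
    def on_fit_start(self):
        # WandbLogger.watch wraps wandb.watch and records gradient
        # histograms every `log_freq` optimizer steps. The isinstance
        # guard keeps the model usable when the YAML configures a
        # different logger.
        if isinstance(self.logger, WandbLogger):
            self.logger.watch(self, log="gradients", log_freq=100)
```

This keeps the logger itself fully configured from config.yaml; the module only attaches the gradient hooks at runtime.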

To log the gradient norm, use the logging facilities in the LightningModule: https://lightning.ai/docs/pytorch/stable/debug/debugging_intermediate.html#look-out-for-exploding-gradients
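
For reference, the pattern from that docs page looks roughly like this (`self.layer` is a placeholder for one of your own submodules):

```python
import lightning.pytorch as pl
from lightning.pytorch.utilities import grad_norm


class MyModel(pl.LightningModule):
    def on_before_optimizer_step(self, optimizer):
        # Compute the 2-norm of the gradients for each layer.
        # With mixed precision, the gradients are already unscaled here.
        norms = grad_norm(self.layer, norm_type=2)
        self.log_dict(norms)  # goes to whichever logger(s) the CLI configured
```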