Currently, when we compile our models, there is a slew of logging messages from torch that makes it hard to find potential errors and to read the training logs. We should figure out how to disable these. The likely root cause is that we set the root logger's level to INFO in https://github.com/mlfoundations/open_lm/blob/main/open_lm/logger.py; instead, we should set the root logger to WARN there, and then use an open_lm-specific logger that's set to log at INFO.
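A minimal sketch of the proposed change, using only the standard `logging` module (`setup_logging` is a hypothetical name; the actual setup lives in `open_lm/logger.py`):

```python
import logging

def setup_logging(level=logging.INFO):
    # Keep the root logger at WARNING so that torch's compile-time
    # chatter (which logs at INFO via loggers under "torch") is filtered.
    logging.basicConfig(level=logging.WARNING)

    # Use a dedicated "open_lm" logger for our own messages. Its level
    # overrides the root level, so open_lm INFO messages still appear.
    logger = logging.getLogger("open_lm")
    logger.setLevel(level)
    return logger

logger = setup_logging()
logger.info("this open_lm message is emitted")               # shown
logging.getLogger("torch").info("this torch message is not")  # filtered
```

This works because a child logger with no explicit level (e.g. `torch.*`) inherits the root's WARNING threshold, while the `open_lm` logger's explicit INFO level lets our own messages through to the root handler.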