NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
BSD 3-Clause "New" or "Revised" License

How to disable the default print when we use apex? #960

Open Hanzy1996 opened 4 years ago

Hanzy1996 commented 4 years ago

In my code, I have to make the following call in every epoch:

self.model, self.optimizer = amp.initialize(self.model, self.optimizer, opt_level="O1")

Every time I use apex, the following output is printed:

Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic

My question is: is there any way to disable this print?

yurakuratov commented 2 years ago

Yes, just set verbosity=0 in the amp.initialize call.
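
For example, a minimal sketch of the call with the banner silenced (the toy model and optimizer below are placeholders, not from the original issue):

# Suppress apex's "Selected optimization level ..." printout via verbosity=0.
import torch
from apex import amp

model = torch.nn.Linear(10, 10).cuda()                        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)      # placeholder optimizer

# verbosity=0 disables the default logging emitted by amp.initialize.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1", verbosity=0)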