Wisp already uses mixed precision for multiview training; this change aligns the interactive renderer and SDF pipelines as well.
`@torch.cuda.amp.autocast()` is added to the `render()` and `redraw()` functions of `RendererCore` and `WispApp`. This is toggleable through `WispState.renderer.enable_amp`.
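A minimal sketch of the renderer-side pattern, assuming a `state` attribute holding the `WispState` and using autocast's `enabled=` argument to express the toggle (the tracing helpers are hypothetical placeholders, not the actual Wisp internals):

```python
import torch

class RendererCore:
    """Illustrative sketch only; method names mirror the PR text, not the real class body."""

    def __init__(self, state):
        # state is the WispState; state.renderer.enable_amp toggles mixed precision.
        self.state = state

    def render(self, rays):
        # The render pass runs under autocast; when enable_amp is False the
        # context manager is a no-op and everything stays in full precision.
        with torch.cuda.amp.autocast(enabled=self.state.renderer.enable_amp):
            return self._trace(rays)  # hypothetical tracing call, for illustration

    def redraw(self):
        # redraw() gets the same treatment so scene-graph refreshes benefit too.
        with torch.cuda.amp.autocast(enabled=self.state.renderer.enable_amp):
            self._refresh_scene_graph()  # hypothetical helper, for illustration
```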
`BaseTrainer.iterate()` uses autocast on `step()` by default; this is controlled by the `enable_amp` flag passed to the trainer.
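A comparable sketch of the trainer side, assuming `enable_amp` is stored on the trainer and `step()` is implemented by subclasses (again illustrative, not the actual Wisp code):

```python
import torch

class BaseTrainer:
    """Illustrative sketch only; names follow the PR text."""

    def __init__(self, enable_amp: bool = True):
        # enable_amp defaults to True, so step() runs under autocast unless
        # the flag is explicitly turned off when constructing the trainer.
        self.enable_amp = enable_amp

    def step(self, data):
        # Subclasses implement the forward pass / loss computation for one batch.
        raise NotImplementedError

    def iterate(self, data):
        # Wrap the per-iteration step in autocast, controlled by enable_amp.
        with torch.cuda.amp.autocast(enabled=self.enable_amp):
            return self.step(data)
```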
Signed-off-by: operel operel@nvidia.com