danbochman opened this issue 6 months ago
How do I run this code? Please provide the steps for text-to-image generation.
When training neural networks, especially large models, it's common to use mixed precision training to save memory and speed up computations. This involves using FP16 (half-precision) for certain operations while retaining FP32 (full-precision) for others where higher precision is necessary. The code snippet you provided shows the use of mixed precision during training but not during sampling. This is because sampling (inference) typically doesn't require the same precision optimization as training, but it can still benefit from FP16 for memory efficiency.
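For concreteness, a minimal mixed-precision training step could look like the sketch below. This is a generic illustration, not code from this repo; the toy model, data, and names are all hypothetical:

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, purely to illustrate the autocast pattern
device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16  # CPU autocast uses bfloat16

model = nn.Linear(8, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # no-op on CPU

x = torch.randn(16, 8, device=device)
target = torch.randn(16, 4, device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, dtype=amp_dtype):
    # forward pass runs eligible ops in reduced precision
    loss = nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()  # loss scaling guards against FP16 gradient underflow
scaler.step(optimizer)
scaler.update()
```

Note that the master weights stay in FP32; only the forward computation inside `autocast` runs in reduced precision.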
To ensure that your sampling loop also benefits from FP16 precision, you can add the necessary casting.
```python
import torch
from functools import partial

@torch.no_grad()
@partial(cast_torch_tensor, cast_fp16=True)
def sample(self, *args, **kwargs):
    self.print_untrained_unets()
    if not self.is_main:
        kwargs["use_tqdm"] = False
    with torch.cuda.amp.autocast():  # use autocast for FP16 inference
        output = self.imagen.sample(*args, device=self.device, **kwargs)
    return output
```
Here is simplified code for a `cast_torch_tensor` decorator that supports FP16 casting:
```python
import torch
from functools import wraps

def cast_torch_tensor(func, cast_fp16=False):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # cast every tensor argument to half precision when requested
        args = tuple(arg.half() if cast_fp16 and isinstance(arg, torch.Tensor) else arg for arg in args)
        kwargs = {k: v.half() if cast_fp16 and isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper
```
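To show the decorator in action, here is a self-contained usage sketch. The `double` function is purely illustrative and not part of the repo:

```python
import torch
from functools import wraps, partial

def cast_torch_tensor(func, cast_fp16=False):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # cast every tensor argument to half precision when requested
        args = tuple(a.half() if cast_fp16 and isinstance(a, torch.Tensor) else a for a in args)
        kwargs = {k: v.half() if cast_fp16 and isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

# Illustrative usage on a plain function (not from the repo)
@partial(cast_torch_tensor, cast_fp16=True)
def double(x):
    return x * 2

out = double(torch.ones(3))  # the FP32 input is cast to FP16 before the call
print(out.dtype)             # torch.float16
```

The `partial(cast_torch_tensor, cast_fp16=True)` pattern matches how the decorator is applied to `sample` above: the decorated function becomes the `func` argument, with `cast_fp16` already bound.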
Hope this helps. Thanks!
During training the forward method casts to FP16, but during sampling it does not. I tried casting to FP16, but something in the loop changes back to float32 even when the inputs are float16. I wonder if you have already encountered this, and whether that's the reason there's no casting to FP16 during sampling.

Best regards, and thanks for the great repo,
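One likely source of the float32 values described above is PyTorch's type-promotion rules: any op that mixes an FP16 tensor with an FP32 tensor (for example, an FP32 noise-schedule buffer in the sampling loop) silently returns FP32. A minimal sketch of that behavior, with illustrative names:

```python
import torch

x = torch.randn(4, dtype=torch.float16)      # FP16 activations
alphas = torch.rand(4, dtype=torch.float32)  # e.g. an FP32 schedule buffer

y = x * alphas
print(y.dtype)  # promotion: float16 * float32 -> torch.float32
```

Wrapping the sampling loop in `torch.cuda.amp.autocast()` (as in the snippet above) sidesteps this, because autocast manages per-op precision instead of relying on the input dtypes.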