Closed aztecman closed 2 years ago
```python
if init is not None and args.init_scale:
    init_losses = lpips_model(x_in, init)
    loss = loss + init_losses.sum() * args.init_scale
```
instead should be
```python
if init is not None and init_scale:
    init_losses = lpips_model(x_in, init)
    loss = loss + init_losses.sum() * init_scale
```
because earlier we might have set init_scale equal to frame_scale (when doing animation);
otherwise, init_scale is just a dead-end variable that never affects the loss.
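To make the bug concrete, here is a minimal sketch of the flow being described. The helper name and parameters (`effective_init_scale`, `cli_init_scale`, `frame_scale`, `animating`) are illustrative assumptions, not names from the repo; the point is that reading `args.init_scale` later would silently ignore the animation override:

```python
def effective_init_scale(cli_init_scale, frame_scale, animating):
    # Sketch: during animation the per-frame scale overrides the CLI value.
    # This is why the loss code must read the *local* init_scale,
    # not args.init_scale, which keeps its original value.
    scale = cli_init_scale
    if animating and frame_scale is not None:
        scale = frame_scale
    return scale
```

With `animating=True`, the returned scale differs from the CLI value, so a loss term multiplied by `args.init_scale` would use the wrong weight.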
Thanks. I've just applied the change.