[Closed] ss8319 closed this issue 1 year ago
Looking deeper into the code, this part is causing the problem. `num` ends up as the int value -279496122328932608, and I'm not sure where that comes from. Where does the value of `num` originate? The code shows:

```python
with torch.no_grad():
    batch_frames = []
    for i in tqdm(range(num)):
        if i == 0:
```

Since `num` is negative, `range(num)` is empty, so the program never enters the loop body, which leads to the error.
Hi. I am working on the code in the Colab notebook in the repo, on PART II - Style Transfer with the specialized VToonify-D model.
I get through all the steps fine, but at the Video Toonification code, the 'Visualize and Rescale Input' part runs, while 'Perform Inference' does not. The cell works well with the default input video, but with my own video it creates problems.
Running this (indentation reconstructed; the original post pasted the code on one line):

```python
with torch.no_grad():
    batch_frames = []
    print(num)
    for i in tqdm(range(num)):
        if i == 0:
            I = align_face(frame, landmarkpredictor)
            I = transform(I).unsqueeze(dim=0).to(device)
            s_w = pspencoder(I)
            s_w = vtoonify.zplus2wplus(s_w)
            s_w[:,:7] = exstyle[:,:7]
        else:
            success, frame = video_cap.read()
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        if scale <= 0.75:
            frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
        if scale <= 0.375:
            frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
        frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
        # ... (rest of the loop body omitted in the original post)
    videoWriter.release()
    video_cap.release()
```
Gives:

```
0it [00:00, ?it/s]
```
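That `0it [00:00, ?it/s]` output is consistent with the negative `num` reported above: Python's `range()` with a non-positive stop is empty, so the tqdm-wrapped loop iterates zero times and neither branch of the `if i == 0` logic ever runs. A quick self-contained check:

```python
num = -279496122328932608  # the value printed in this issue

# range() with a non-positive stop yields no elements, so a
# tqdm-wrapped loop over it runs zero iterations ("0it").
iterations = sum(1 for _ in range(num))
print(iterations)  # 0
```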