Closed. dniku closed this issue 4 years ago.
Hi, glad to see the lipreading model working in your real demo. Thanks for sharing!
Q: I maintain a queue of frames from the webcam, and I pass the last 30 frames into the network (see the model_input = ... line). From what I understand, this is what the LRW dataset uses. Is this correct? Is there a better value for the queue length?
A: There are 29 frames in each utterance in LRW, and we used 29 frames to report the performance on the test set, so you could try 29 frames as well. Since the model was trained with variable-length augmentation, it should generalise well to variations in sequence length. You could also try approximating the word's boundaries and feeding only that segment for evaluation.
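For reference, a minimal sketch of how a fixed-length 29-frame input can be assembled; the random frames stand in for real 96x96 grayscale mouth crops, and the tensor layout and lengths argument mirror the script later in this thread:

import torch

# 29 grayscale mouth crops, each shaped (C=1, H, W); random data stands in for real ROIs.
frames = [torch.rand(1, 96, 96) for _ in range(29)]
model_input = torch.stack(frames, dim=1).unsqueeze(0)   # -> (1, 1, 29, 96, 96)
# logits = model(model_input, lengths=[29])             # model as loaded in the script below
print(model_input.shape)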
Q: What is the minimum value for the queue length that will work? Is it 5 because of the kernel size of the initial 3D convolution?
A: Yes.
Q: I'm not sure what lengths means (a parameter expected by model.forward()). In main.py, the extract_feats() function sets lengths to a singleton list with the number of frames, but surely that can't be its sole purpose? There is also some averaging going on in _average_batch() that I don't understand. What is the optimal value of lengths for a stream from a webcam?
A: _average_batch() performs a temporal average-pooling operation in which the padded frames are excluded.
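To illustrate the idea behind that answer, a rough self-contained sketch of temporal average pooling that skips padded frames; the function name and shapes here are illustrative, not the repository's exact implementation:

import torch

def average_batch_sketch(feats, lengths):
    # feats: (batch, time, feature_dim), zero-padded along the time axis.
    # lengths: number of valid frames per sequence; only those frames are averaged.
    return torch.stack([feats[i, :n].mean(dim=0) for i, n in enumerate(lengths)])

feats = torch.rand(2, 29, 512)                  # two sequences padded to 29 frames
pooled = average_batch_sketch(feats, [29, 20])  # the second sequence only uses 20 frames
print(pooled.shape)                             # torch.Size([2, 512])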
Q: Is it correct that the model outputs logits, and that to obtain probabilities I need to apply softmax?
A: If you would like probabilities, you can use the softmax function to normalise the logits.
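Concretely, a one-line sketch (logits here is random data standing in for the model's (1, 500) output):

import torch

logits = torch.rand(1, 500)                          # stand-in for the model output
probs = torch.nn.functional.softmax(logits, dim=-1)  # per-class probabilities
print(probs.sum())                                   # sums to 1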
Thanks!
@dniku did you get it working well, or better than the code you provided above?
@dniku Hi there. Thank you for this additional code! It definitely makes it easier to run the model on any custom video. I have one question though: don't you think we need to add queue = deque(maxlen=args.queue_length) again after line 117, so that the code gathers queue_length frames again before giving the next prediction and confidence?
@ZeeRizvee certainly not. The maxlen parameter sets the maximum size of the queue: if more elements are appended on the right, the leftmost ones are removed. This means that the if len(queue) >= args.queue_length: condition checks whether the queue already has enough elements to feed into the network, which first happens after queue_length frames. After that, we can produce a prediction at every frame without resetting the queue to an empty state.
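A tiny standalone demonstration of that rolling-window behaviour, with integers standing in for frames:

from collections import deque

queue = deque(maxlen=3)              # keeps only the 3 most recent items
for frame in range(5):
    queue.append(frame)
    if len(queue) >= 3:              # window is full: ready to run the model
        print(list(queue))           # [0, 1, 2], then [1, 2, 3], then [2, 3, 4]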
Hey @dniku and @mpc001,
I modified your code to read a .mov video file with cv2 and save the output predictions to files, so that it runs smoothly on an AWS EC2 instance without a GPU. I am not using a GPU since AWS refuses to provide me with one due to the chip shortage. The code is shown below:
import argparse
import json
from collections import deque
from contextlib import contextmanager
from pathlib import Path

import cv2
import face_alignment
import numpy as np
import torch
from torchvision.transforms.functional import to_tensor

from lipreading.model import Lipreading
from preprocessing.transform import warp_img, cut_patch

STD_SIZE = (256, 256)
STABLE_PNTS_IDS = [33, 36, 39, 42, 45]
START_IDX = 48
STOP_IDX = 68
CROP_WIDTH = CROP_HEIGHT = 96


@contextmanager
def VideoCapture(*args, **kwargs):
    cap = cv2.VideoCapture(*args, **kwargs)
    try:
        yield cap
    finally:
        cap.release()


def load_model(config_path: Path):
    with config_path.open() as fp:
        config = json.load(fp)
    tcn_options = {
        'num_layers': config['tcn_num_layers'],
        'kernel_size': config['tcn_kernel_size'],
        'dropout': config['tcn_dropout'],
        'dwpw': config['tcn_dwpw'],
        'width_mult': config['tcn_width_mult'],
    }
    return Lipreading(
        num_classes=500,
        tcn_options=tcn_options,
        backbone_type=config['backbone_type'],
        relu_type=config['relu_type'],
        width_mult=config['width_mult'],
        extract_feats=False,
    )


def visualize_probs(vocab, probs, col_width=4, col_height=300):
    num_classes = len(probs)
    out = np.zeros((col_height, num_classes * col_width + (num_classes - 1), 3), dtype=np.uint8)
    for i, p in enumerate(probs):
        x = (col_width + 1) * i
        cv2.rectangle(out, (x, 0), (x + col_width - 1, round(p * col_height)), (255, 255, 255), 1)
    top = np.argmax(probs)
    print(f'Prediction: {vocab[top]}')
    print(f'Confidence: {probs[top]:.3f}')
    cv2.putText(out, f'Prediction: {vocab[top]}', (10, out.shape[0] - 30),
                cv2.FONT_HERSHEY_SIMPLEX, fontScale=.5, color=(255, 255, 255))
    cv2.putText(out, f'Confidence: {probs[top]:.3f}', (10, out.shape[0] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, fontScale=.5, color=(255, 255, 255))
    return out


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--config-path', type=Path, default=Path('configs/lrw_resnet18_mstcn.json'))
    parser.add_argument('--model-path', type=Path, default=Path('models/lrw_resnet18_mstcn_adamw_s3.pth.tar'))
    parser.add_argument('--device', type=str, default='cpu')
    # Victims.MOV
    parser.add_argument('--queue-length', type=int, default=69)
    # Stage.MOV
    # parser.add_argument('--queue-length', type=int, default=38)
    args = parser.parse_args()

    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device=args.device)
    model = load_model(args.config_path)
    model.load_state_dict(torch.load(Path(args.model_path), map_location=args.device)['model_state_dict'])
    model = model.to(args.device)

    mean_face_landmarks = np.load(Path('preprocessing/20words_mean_face.npy'))

    with Path('labels/500WordsSortedList.txt').open() as fp:
        vocab = [line.strip() for line in fp]  # strip newlines so labels print/render cleanly
    assert len(vocab) == 500

    # Rolling buffer of the most recent mouth crops.
    queue = deque(maxlen=args.queue_length)

    with VideoCapture('Victims.MOV') as cap:
        Patch_imshow_index = 1
        Vis_imshow_index = 1
        Camera_imshow_index = 1
        length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        print(length)
        while True:
            ret, image_np = cap.read()
            if not ret:
                break
            image_np = cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)

            all_landmarks = fa.get_landmarks(image_np)
            if all_landmarks:
                landmarks = all_landmarks[0]

                # BEGIN PROCESSING
                # Warp the frame so its stable landmarks align with the mean face, then crop the mouth ROI.
                trans_frame, trans = warp_img(
                    landmarks[STABLE_PNTS_IDS, :], mean_face_landmarks[STABLE_PNTS_IDS, :], image_np, STD_SIZE)
                trans_landmarks = trans(landmarks)
                patch = cut_patch(
                    trans_frame, trans_landmarks[START_IDX:STOP_IDX], CROP_HEIGHT // 2, CROP_WIDTH // 2)

                # cv2.imshow('patch', cv2.cvtColor(patch, cv2.COLOR_RGB2BGR))
                path = "testing_picture_outputs/" + "Patch_Test_Out_" + str(Patch_imshow_index) + '.jpg'
                cv2.imwrite(path, cv2.cvtColor(patch, cv2.COLOR_RGB2BGR))
                Patch_imshow_index += 1

                patch_torch = to_tensor(cv2.cvtColor(patch, cv2.COLOR_RGB2GRAY)).to(args.device)
                queue.append(patch_torch)

                if len(queue) >= args.queue_length:
                    with torch.no_grad():
                        model_input = torch.stack(list(queue), dim=1).unsqueeze(0)
                        logits = model(model_input, lengths=[args.queue_length])
                        probs = torch.nn.functional.softmax(logits, dim=-1)
                        probs = probs[0].detach().cpu().numpy()

                    vis = visualize_probs(vocab, probs)
                    # cv2.imshow('probs', vis)
                    path = "testing_picture_outputs/" + "Vis_Test_Out_" + str(Vis_imshow_index) + '.jpg'
                    cv2.imwrite(path, vis)
                    Vis_imshow_index += 1
                # END PROCESSING

                for x, y in landmarks:
                    cv2.circle(image_np, (int(x), int(y)), 2, (0, 0, 255))

            # cv2.imshow('camera', cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR))
            path = "testing_picture_outputs/" + "Camera_Test_Out_" + str(Camera_imshow_index) + '.jpg'
            cv2.imwrite(path, cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR))
            Camera_imshow_index += 1

            key = cv2.waitKey(1)
            if key in {27, ord('q')}:  # 27 is Esc
                break
            elif key == ord(' '):
                cv2.waitKey(0)

    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
The code seems to run and detect mouth ROIs well, but the word predictions are way off no matter what word I try. I changed the queue length to 69 (parser.add_argument('--queue-length', type=int, default=69)) since the number of ROIs extracted from my 'Victims' video is 71. Can you see anything I have done wrong? Any help is greatly appreciated!
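Given the earlier answer that the model was trained and evaluated on 29-frame LRW utterances, one thing I might also try is sliding a 29-frame window over the extracted mouth crops and keeping the most confident prediction, rather than feeding all ~69 frames at once. A rough sketch along those lines, reusing the model call from the script above (the function name and stride are just placeholders):

import torch

def best_window_prediction(model, crops, window=29):
    # crops: list of (1, 96, 96) grayscale mouth-ROI tensors covering the whole clip.
    best_conf, best_idx = 0.0, None
    with torch.no_grad():
        for start in range(len(crops) - window + 1):
            clip = torch.stack(crops[start:start + window], dim=1).unsqueeze(0)
            probs = torch.nn.functional.softmax(model(clip, lengths=[window]), dim=-1)[0]
            conf, idx = probs.max(dim=0)
            if conf.item() > best_conf:
                best_conf, best_idx = conf.item(), idx.item()
    return best_conf, best_idx  # (confidence, class index)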
@Cerebex @dniku did you try predicting multiple words?
I think I managed to connect your project to a stream from a webcam, and it works reasonably well: it runs on my machine and seems to produce outputs that somewhat resemble the words I'm pronouncing.
I'm not sure about some details though. Would you be able to clarify them?
I maintain a queue of frames from the webcam, and I pass the last 30 frames into the network (see the model_input = ... line). From what I understand, this is what the LRW dataset uses. Is this correct? Is there a better value for the queue length?

I'm not sure what lengths means (a parameter expected by model.forward()). In main.py, the extract_feats() function sets lengths to a singleton list with the number of frames, but surely that can't be its sole purpose? There is also some averaging going on in _average_batch() that I don't understand. What is the optimal value of lengths for a stream from a webcam?

Here is my implementation. It is self-contained and should work if you put it in the root of the repository. The only library dependency is face-alignment (pip install --user face-alignment), which I used for extracting keypoints instead of dlib. The most interesting part is between the BEGIN PROCESSING / END PROCESSING comments.