jmhummel opened this issue 1 month ago
I noticed that when I process a video frame with a standard 16:9 aspect ratio, the processed output frame isn't zero-padded, and the aspect ratio is distorted. Is this intended?
I've included an example, modified from the tutorial notebook:
```python
from torchvision.transforms import transforms
from llava.model.builder import load_pretrained_model
import numpy as np
from PIL import Image
import warnings
from decord import VideoReader, cpu

warnings.filterwarnings("ignore")

# Load the OneVision model
pretrained = "lmms-lab/llava-onevision-qwen2-7b-ov"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map, attn_implementation="sdpa")
model.eval()

# Function to extract frames from video
def load_video(video_path, max_frames_num):
    if type(video_path) == str:
        vr = VideoReader(video_path, ctx=cpu(0))
    else:
        vr = VideoReader(video_path[0], ctx=cpu(0))
    total_frame_num = len(vr)
    uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
    frame_idx = uniform_sampled_frames.tolist()
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return spare_frames  # (frames, height, width, channels)

# Load and process video
video_path = "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"
video_frames = load_video(video_path, 16)
print(video_frames.shape)  # (16, 1024, 576, 3)

image_tensors = []
frames = image_processor.preprocess(video_frames, return_tensors="pt")["pixel_values"].half().cuda()
image_tensors.append(frames)
print(image_tensors[0].shape)  # (16, 3, 384, 384)

# Display an original frame
frame_idx = 8
original_frame = Image.fromarray(video_frames[frame_idx])
original_frame.show("OriginalFrame")

# Display a processed frame (undo the mean/std normalization for viewing)
processed_frame = transforms.ToPILImage()(image_tensors[0][frame_idx] * 0.5 + 0.5)
processed_frame.show("ProcessedFrame")
```
Original Frame
Processed Frame
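For context, the stretch in the processed frame lines up with the output shape above: every frame comes out as a fixed 384x384 square with no padding. A minimal way to confirm this from the processor config, assuming it exposes the standard Hugging Face image-processor attributes (`size`, `image_mean`, `image_std`):

```python
# Sketch: inspect the processor config (assumes standard HF image-processor attributes).
print(image_processor.size)        # e.g. {"height": 384, "width": 384} -- fixed square target, no padding
print(image_processor.image_mean)  # e.g. [0.5, 0.5, 0.5]
print(image_processor.image_std)   # e.g. [0.5, 0.5, 0.5] -- hence the "* 0.5 + 0.5" above to undo normalization
```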
Is this intended behavior? Also, I'd just like to confirm: the video frames should be RGB channel order, not BGR, correct?
> Is this intended behavior?

Yes.

> The video frames should be RGB channel order, not BGR, correct?

Yes.
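If you want to keep the aspect ratio anyway, one workaround (just a sketch, not part of the library; the `pad_to_square` helper below is hypothetical) is to zero-pad each frame to a square yourself before calling `preprocess`:

```python
import numpy as np

def pad_to_square(frame: np.ndarray, fill: int = 0) -> np.ndarray:
    """Zero-pad an HWC uint8 frame to a centered square canvas, preserving aspect ratio."""
    h, w, c = frame.shape
    side = max(h, w)
    canvas = np.full((side, side, c), fill, dtype=frame.dtype)
    top = (side - h) // 2
    left = (side - w) // 2
    canvas[top:top + h, left:left + w] = frame
    return canvas

# Hypothetical usage: letterbox the sampled frames before handing them to the processor
padded_frames = np.stack([pad_to_square(f) for f in video_frames])
frames = image_processor.preprocess(padded_frames, return_tensors="pt")["pixel_values"].half().cuda()
```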