CurtisDisc opened this issue 2 months ago
@CurtisDisc hello,
Thank you for reaching out and providing detailed information about your issue. It looks like you're encountering a problem with maintaining the original frame rate and resolution of your video during pose estimation.
To ensure that all frames are analyzed and the video length remains unaltered, you can set the `vid_stride` parameter to 1, which you have already done. This ensures that every frame is processed. However, the warning about `imgsz` being updated to [1920, 1088] indicates that the resolution must be a multiple of the model's stride (32 in this case).
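For reference, the nearest stride-compatible size can be computed up front rather than relying on the warning; here's a minimal sketch (the `round_to_stride` helper is illustrative, not part of the Ultralytics API):

```python
import math

def round_to_stride(size, stride=32):
    """Round each dimension of a (w, h) pair up to the nearest multiple of the model stride."""
    return tuple(math.ceil(s / stride) * stride for s in size)

print(round_to_stride((1920, 1080)))  # (1920, 1088)
```

This reproduces exactly the adjustment the warning performs: 1080 is not divisible by 32, so it is rounded up to 1088.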
Here's a refined version of your code to help maintain the original frame rate and resolution:
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('9e04cb143f43ef901c5251f59fe7bce026278b0054')

model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/Ku5sXKIrwL8?si=PTIyvOeCNqg6uh_Z'
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save=True, show_boxes=False,
                        show_labels=False, half=True)

for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
```
Key points:

- Setting `imgsz` to (1920, 1088) ensures compatibility with the model's stride.
- With `vid_stride=1`, every frame is processed, maintaining the original frame rate.

If you continue to experience issues, please ensure you are using the latest version of the Ultralytics package. You can update it using:

```shell
pip install --upgrade ultralytics
```
Feel free to reach out if you have any further questions or run into any other issues. We're here to help! 😊
Hello, I have fixed the warning issue with your imgsz=(1920, 1088) suggestion, but I am still only getting an 8-second .avi file in my runs. Is there a way to have it be an .mp4? I also ran it with save_frames=True and it only produced 200 frames. I am using Google Colab; does that have something to do with it?
On Sunday, August 25, 2024 at 02:33:24 AM EDT, Glenn Jocher wrote the reply quoted above.
Ultralytics YOLOv8.2.82 🚀 Python-3.10.12 torch-2.3.1+cu121 CUDA:0 (Tesla T4, 15102MiB) Setup complete ✅ (2 CPUs, 12.7 GB RAM, 34.2/78.2 GB disk) Ultralytics HUB: New authentication successful ✅
1/1: https://youtu.be/HFxv3xcLsm8... Success ✅ (1200 frames of shape 1920x1080 at 60.00 FPS)
```
0: 640x1088 1 person, 21.0ms
0: 640x1088 1 person, 23.1ms
... (similar lines repeated for the remaining frames) ...
0: 640x1088 1 person, 8.8ms
Speed: 12.7ms preprocess, 22.8ms inference, 3.8ms postprocess per image at shape (1, 3, 640, 1088)
Results saved to runs/pose/predict4
```
I changed the imgsz=(1088, 1920) and that fixed the resolution problem.
Ultralytics YOLOv8.2.82 🚀 Python-3.10.12 torch-2.3.1+cu121 CUDA:0 (Tesla T4, 15102MiB) Setup complete ✅ (2 CPUs, 12.7 GB RAM, 34.3/78.2 GB disk) Ultralytics HUB: New authentication successful ✅
1/1: https://youtu.be/HFxv3xcLsm8... Success ✅ (1200 frames of shape 1920x1080 at 60.00 FPS)
```
0: 1088x1920 1 person, 26.6ms
0: 1088x1920 1 person, 16.1ms
... (similar lines repeated for the remaining frames) ...
0: 1088x1920 1 person, 16.4ms
Speed: 15.0ms preprocess, 22.3ms inference, 3.8ms postprocess per image at shape (1, 3, 1088, 1920)
Results saved to runs/pose/predict5
```
Hello Curtis,
Thank you for the update! I'm glad to hear that adjusting the `imgsz` parameter resolved the resolution issue.
Regarding the video output format and frame count, it seems like there might be a couple of factors at play, especially when using Google Colab. Here are a few suggestions to help you achieve your desired output:
To save the output video in `.mp4` format instead of `.avi`, you can use the `save` parameter with a custom path that specifies the `.mp4` extension. Here's how you can modify your code:
```python
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save='runs/pose/predict5/output.mp4',
                        show_boxes=False, show_labels=False, half=True)
```
If you are still experiencing issues with the frame count, it might be related to the processing capabilities and limitations of Google Colab. Here are a few steps to ensure all frames are processed:
Here’s an example that combines the above suggestions:
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('your_api_key_here')

model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/HFxv3xcLsm8'
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save='runs/pose/predict5/output.mp4',
                        show_boxes=False, show_labels=False, half=True)

for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
```
Also make sure you are on the latest Ultralytics version:

```shell
pip install --upgrade ultralytics
```

If the issue persists, please let us know, and we can further investigate. Thank you for your patience and for being part of the YOLO community! 😊
save='runs/pose/predict5/output.mp4' is causing an invalid syntax error. I would like to run this locally on my M1 Mac mini, but I can't seem to get Python to pip install ultralytics. I have Python working, but it doesn't recognize the pip install ultralytics command. Not sure what I am missing.
Hello @CurtisDisc,
Thank you for your patience and for providing additional details. Let's address the issues you're encountering.
The syntax error with save='runs/pose/predict5/output.mp4' is likely due to the placement of the `save` parameter. It should be included within the `model.predict()` function call. Here's the corrected code snippet:
```python
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save=True,
                        save_path='runs/pose/predict5/output.mp4',
                        show_boxes=False, show_labels=False, half=True)
```
For your M1 Mac Mini, you might need to ensure that you have the correct environment setup for installing Ultralytics. Here are the steps to help you get started:
Install Homebrew (if not already installed):

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Install Python using Homebrew:

```shell
brew install python
```

Create and activate a virtual environment:

```shell
python3 -m venv ultralytics-env
source ultralytics-env/bin/activate
```

Upgrade pip:

```shell
pip install --upgrade pip
```

Install Ultralytics:

```shell
pip install ultralytics
```
Once you have Ultralytics installed, you can run your script locally. Here’s a complete example to ensure everything is set up correctly:
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('your_api_key_here')

model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/HFxv3xcLsm8'
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save=True,
                        save_path='runs/pose/predict5/output.mp4',
                        show_boxes=False, show_labels=False, half=True)

for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
```
As always, keep your installation current:

```shell
pip install --upgrade ultralytics
```

If you encounter any further issues, please let us know. We're here to help and ensure you have a smooth experience with Ultralytics. Thank you for being part of the YOLO community! 😊
I think I am really close but just a few things seem to be wrong.
```shell
Last login: Sun Sep 8 15:44:59 on ttys000
c****@Curtiss-Mac-mini ~ % python3 -m venv ultralytics-env
c****@Curtiss-Mac-mini ~ % source ultralytics-env/bin/activate
(ultralytics-env) c****@Curtiss-Mac-mini ~ % ultralytics YOLO, checks, hub
checks()
hub.login('9e04cb143f43ef901c5251f59fe7bce026278b0054')
model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/HFxv3xcLsm8'
results = model.predict(source, stream=True, imgsz=(1920, 1088), vid_stride=1, max_det=10, save=True, show_boxes=False, show_labels=False, half=True)
for result in results: boxes = result.boxes ; masks = result.masks ; keypoints = result.keypoints ; probs = result.probs ; obb = result.obb
```
```
WARNING ⚠️ argument 'YOLO,' does not require trailing comma ',', updating to 'YOLO'.
Traceback (most recent call last):
  File "/Users/c**/ultralytics-env/bin/ultralytics", line 8, in <module>
Arguments received: ['yolo', 'YOLO,', 'checks,', 'hub']. Ultralytics 'yolo' commands use the following syntax:

    yolo TASK MODE ARGS

    Where TASK (optional) is one of {'pose', 'detect', 'classify', 'obb', 'segment'}
          MODE (required) is one of {'export', 'train', 'benchmark', 'val', 'track', 'predict'}
          ARGS (optional) are any number of custom 'arg=value' pairs like 'imgsz=320' that override defaults.
              See all ARGS at https://docs.ultralytics.com/usage/cfg or with 'yolo cfg'

1. Train a detection model for 10 epochs with an initial learning_rate of 0.01
    yolo train data=coco8.yaml model=yolov8n.pt epochs=10 lr0=0.01

2. Predict a YouTube video using a pretrained segmentation model at image size 320:
    yolo predict model=yolov8n-seg.pt source='https://youtu.be/LNwODJXcvt4' imgsz=320

3. Val a pretrained detection model at batch-size 1 and image size 640:
    yolo val model=yolov8n.pt data=coco8.yaml batch=1 imgsz=640

4. Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
    yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128

5. Explore your datasets using semantic search and SQL with a simple GUI powered by Ultralytics Explorer API
    yolo explorer data=data.yaml model=yolov8n.pt

6. Streamlit real-time webcam inference GUI
    yolo streamlit-predict

7. Run special commands:
    yolo help
    yolo checks
    yolo version
    yolo settings
    yolo copy-cfg
    yolo cfg

Docs: https://docs.ultralytics.com
Community: https://community.ultralytics.com
GitHub: https://github.com/ultralytics/ultralytics

zsh: unknown file attribute: y
```
Hello!
It looks like you're almost there! The error seems to be due to a syntax issue in your command. Let's address that and get you back on track. 😊
The error is caused by trying to execute Python code directly in the shell. Instead, save your code in a `.py` file (or run it in a Python interactive session).

Here's how you can do it:

1. Create a Python script: save your code in a file, e.g., `pose_estimation.py`.
2. Run the script: execute it with Python (`python3 pose_estimation.py`).

Your `pose_estimation.py` should contain:
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('9e04cb143f43ef901c5251f59fe7bce026278b0054')

model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/HFxv3xcLsm8'
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save=True,
                        show_boxes=False, show_labels=False, half=True)

for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
```
```shell
python3 pose_estimation.py
```
Also remember to keep Ultralytics up to date (`pip install --upgrade ultralytics`) and to activate your virtual environment first (`source ultralytics-env/bin/activate`).

If you have any more questions or run into further issues, feel free to ask. We're here to help! 🚀
Okay, so I have it up and running locally. Now, instead of sourcing from YouTube, how do I source a file in my Documents? Then save into a folder in Documents (result.save(filename="result.mov")?). Also, my speeds are not great; any way to fix that? Can I use the GPU? 'device=GPU' gave an error.
```shell
Last login: Mon Sep 9 21:22:10 on ttys000
c**@Curtiss-Mac-mini ~ % python3 -m venv ultralytics-env
c**@Curtiss-Mac-mini ~ % source ultralytics-env/bin/activate
(ultralytics-env) c**@Curtiss-Mac-mini ~ % cd documents
(ultralytics-env) c**@Curtiss-Mac-mini documents % cd pythonProjects
(ultralytics-env) c**@Curtiss-Mac-mini pythonProjects % touch pose_estimation.py
(ultralytics-env) c**@Curtiss-Mac-mini pythonProjects % python3 pose_estimation.py
Ultralytics YOLOv8.2.90 🚀 Python-3.12.5 torch-2.4.1 CPU (Apple M2) Setup complete ✅ (8 CPUs, 24.0 GB RAM, 164.2/228.3 GB disk)
Ultralytics HUB: New authentication successful ✅
1/1: https://youtu.be/HFxv3xcLsm8... Success ✅ (1200 frames of shape 1920x1080 at 60.00 FPS)
0: 640x1088 1 person, 87990.9ms
0: 640x1088 1 person, 87970.6ms
Speed: 3.5ms preprocess, 87980.7ms inference, 0.5ms postprocess per image at shape (1, 3, 640, 1088)
Results saved to runs/pose/predict4
```
Hello!
Great to hear that you have it running locally! Let's address your questions:
To use a video file from your local Documents, simply provide the file path as the `source`. Here's an example:

```python
source = '/path/to/your/video.mp4'
```
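To catch typos in the path before inference starts, you can resolve and validate it first; a small sketch (the `resolve_source` helper name and the Documents folder default are illustrative, not part of the Ultralytics API):

```python
from pathlib import Path

def resolve_source(name, folder=None):
    """Return an absolute path string for a video file, raising early if it's missing."""
    folder = Path(folder) if folder else Path.home() / "Documents"
    path = folder / name
    if not path.is_file():
        raise FileNotFoundError(f"Video not found: {path}")
    return str(path)

# source = resolve_source("myclip.mp4")  # then: model.predict(source, ...)
```

Failing fast here is nicer than waiting for the predictor to error out on a bad path.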
To save the results to a specific folder, you can specify the `save_dir` parameter:
```python
results = model.predict(source, stream=True, imgsz=(1920, 1088),
                        vid_stride=1, max_det=10, save=True,
                        save_dir='/path/to/save/directory',
                        show_boxes=False, show_labels=False, half=True)
```
For better performance, especially on Apple Silicon, you can try using the GPU. However, PyTorch's support for Apple GPUs is still evolving. You can attempt to use the `mps` device, which is Apple's Metal Performance Shaders; note that `device` is an argument of `predict()`, not of the `YOLO()` constructor:

```python
model = YOLO('yolov8l-pose.pt')
results = model.predict(source, device='mps', stream=True, save=True)
```
Ensure your PyTorch version supports MPS by checking the PyTorch documentation.
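You can check MPS availability from Python before choosing a device; a small sketch (the `pick_device` helper is illustrative, not part of the Ultralytics API):

```python
import torch

def pick_device():
    """Return 'mps' when Apple's Metal backend is built and available, else 'cpu'."""
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())
```

The result can then be passed as `device=pick_device()` in your `predict()` call, so the same script runs on both Apple Silicon and CPU-only machines.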
Feel free to reach out if you have more questions. Happy coding! 😊
Search before asking
Question
I am running pose estimation on a video and want the saved result video to have the same resolution and frame rate. How do I get it to analyze all frames so that the video length is not altered?
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('9e04cb143f43ef901c5251f59fe7bce026278b0054')

model = YOLO('yolov8l-pose.pt')
source = 'https://youtu.be/Ku5sXKIrwL8?si=PTIyvOeCNqg6uh_Z'
results = model.predict(source, stream=True, imgsz=(1920, 1080), vid_stride=1,
                        max_det=10, save=True, show_boxes=False, show_labels=False, half=True)

for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
```
Ultralytics YOLOv8.2.81 🚀 Python-3.10.12 torch-2.3.1+cu121 CUDA:0 (Tesla T4, 15102MiB) Setup complete ✅ (2 CPUs, 12.7 GB RAM, 33.9/78.2 GB disk) Ultralytics HUB: New authentication successful ✅
WARNING ⚠️ imgsz=[1920, 1080] must be multiple of max stride 32, updating to [1920, 1088]
1/1: https://youtu.be/Ku5sXKIrwL8?si=PTIyvOeCNqg6uh_Z... Success ✅ (3012 frames of shape 1920x1080 at 60.00 FPS)