Closed Turavis closed 3 years ago
👋 Hello @Turavis, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:
$ pip install -r requirements.txt
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
Hi @Turavis, did you get any solution?
I am also facing a similar issue.
@Turavis @Aar-Kay the videoloader is a simple cv2 loop. If the cv2 cap.read() method return value is False the video is considered finished. https://github.com/ultralytics/yolov5/blob/c09964c27cc275c8e32630715cca5be77078dae2/utils/datasets.py#L160-L173
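The pattern behind that loader can be sketched as follows (a simplified illustration, not the actual YOLOv5 code; `cap` is any object with a cv2.VideoCapture-style read() method, which also makes the logic testable without a real video):

```python
def iterate_video(cap):
    """Yield frames until cap.read() reports failure.

    Mirrors the loader's assumption: a single False return value
    from read() is treated as end-of-video, so any spurious False
    mid-stream silently drops all remaining frames.
    """
    while True:
        ret_val, frame = cap.read()
        if not ret_val:
            # one False return ends the whole video
            break
        yield frame
```

This is exactly why a GoPro file that returns one spurious False mid-stream appears to "stop" early even though later frames are readable.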
@Aar-Kay I figured that it's a problem with resolution, after resizing my video to 720p it worked fine. Unfortunately I have no idea how to get it to work with 1920x1080 files.
If anyone knows what causes this bug I'd appreciate some help. Thanks!
@glenn-jocher @Turavis Thank you for your reply :) Exactly after a certain number of frames cap.read() returns a False value which causes the inference to stop.
@Turavis In my case, all the videos are recorded with a GoPro and the inference stops exactly at the 23rd frame. When I take the same video into 'ShotCut', simply export it, and save it, the inference runs fine. Here are the properties of the video which works fine:
In my case, the only difference I note is the FPS (30 vs 29.97). There might be something deeper going on which I don't know yet, as I am very new to this field. I also found the following 'Note' on the OpenCV page which might be relevant:
I don't know yet what is causing cap.read() to return a False value. I have read on some OpenCV forums that others have also faced this problem of cap.read() returning False values. I will update you if I find any solution, and please let me know if you find one :)
@Aar-Kay interesting. I would definitely raise the issue on the opencv repository, or +1 any existing issues there https://github.com/opencv/opencv
You might also want to write a custom python code that simply loops through your video and saves the return values to see if False occurs on just one frame or on all subsequent frames as well.
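Such a diagnostic scan could look roughly like this (a hypothetical helper, not part of YOLOv5; `cap` is any object with a cv2.VideoCapture-style read() method, so the loop can be exercised without a real file):

```python
def scan_return_values(cap):
    """Loop through a capture and record every cap.read() return value.

    Unlike the inference loader, this keeps reading past a False return,
    so you can see whether the failure is a single frame or everything
    after it. Stops after 5 consecutive failures (assumed end-of-video).
    """
    results = []
    consecutive_failures = 0
    while consecutive_failures < 5:
        ret_val, _ = cap.read()
        results.append(ret_val)
        consecutive_failures = 0 if ret_val else consecutive_failures + 1
    return results
```

With a problematic GoPro file you would expect to see an isolated False in the middle of a run of True values, rather than False from that point onward.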
Hi @glenn-jocher
I wrote a manual program. I found that cap.read() returns a 'False' value specifically after frame number 23; after that, all values are 'True'. However, it is interesting to note that the total count of 'True' values is equal to the total number of frames.
The table below summarizes the returned values:
I don't know what the problem is with cap.read() when reading the original videos.
Edit: frame numbers as read by OpenCV
@Aar-Kay another user @Snitte recently raised an identical issue, with a gopro video again at #2305
@glenn-jocher As per my analysis, the following may be solutions to this problem:
Option 1. The frames at which vidcap.read() returns a False value are additional frames (for all such frames the frame number is 0). So all frames for which ret_value = False and frame number = 0 can be ignored or replaced with the last successfully read frame.
Something like this:
import cv2

cap = cv2.VideoCapture('GH100369.mp4')

def getFrame(ts):
    hasFrames, image = cap.read()
    fn = cap.get(cv2.CAP_PROP_POS_FRAMES)
    ts = cap.get(cv2.CAP_PROP_POS_MSEC)
    if hasFrames:
        print(f"Frame number is {int(fn)}, time stamp is {float(ts)}")
    return hasFrames

ts = cap.get(cv2.CAP_PROP_POS_MSEC)
print(f'Time stamp is {ts}')
success = getFrame(ts)
while success or ts == 0:
    success = getFrame(ts)
    ts = cap.get(cv2.CAP_PROP_POS_MSEC)
    print(f'Time stamp in while loop is {ts}')
    fn = cap.get(cv2.CAP_PROP_POS_FRAMES)
    print(f'Frame number in while loop is {fn}')
Option 2. Maybe we do not have the proper video codecs installed, in which case we need to install the codecs and then re-compile and re-install OpenCV. https://www.pyimagesearch.com/2016/12/26/opencv-resolving-nonetype-errors/ But then I don't understand why it runs for other videos. I don't know; I am very new to this.
Option 3. All the GoPro videos are shot at 29.97 fps, not 30 fps (NTSC vs PAL format). Convert these videos to 30 fps (or 60 fps in the case of 59.94 fps).
Option 1 works for me in the manual program, but I was unable to make the changes in YOLOv5. For option 2, I didn't want to take any chances and mess up my already-working YOLOv5. So I had to move ahead with option 3, which is working very well for me. But @Snitte @Turavis, since you guys must be good at programming, I think you can try option 1, which will be less time-consuming.
After some testing I've found that if you just run the video through GoPro's video editor and set the fps that way, you can process 120 fps videos without problems.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Unfortunately this is still an issue, but it's not an issue in the yolov5 repo. The upstream issue seems to be https://github.com/opencv/opencv/issues/15352#issuecomment-645435521 . The comment also contains a workaround suggestion, which adapted to here looks roughly like this:
diff --git a/utils/datasets.py b/utils/datasets.py
index 3504998..7918bcb 100755
--- a/utils/datasets.py
+++ b/utils/datasets.py
@@ -200,6 +200,11 @@ def __next__(self):
# Read video
self.mode = 'video'
ret_val, img0 = self.cap.read()
+ # see https://github.com/ultralytics/yolov5/issues/2064
+ if not ret_val and self.frame < self.frames - 1:
+ while not ret_val:
+ ret_val, img0 = self.cap.read()
+
@breunigs thanks for the info! Would you recommend applying this fix in a PR?
I have tested it with my GoPro videos and it works for them. If you're fine with the general approach I can ready a PR that bails instead of infinitely looping. The upstream comment mentions this is a possibility, but I have not seen it on my GoPro files. In any case, a bogus error is easier to debug than the process hanging at some point.
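Such a bailing variant could be sketched roughly like this (a hypothetical helper for illustration, not the actual PR code; `cap` is any object with a cv2.VideoCapture-style read() method, so the retry logic is testable with a stub):

```python
def read_with_retry(cap, frame, frames, max_retries=10):
    """Read the next frame, retrying on spurious False returns.

    A False return before the last frame (frame < frames - 1) is
    retried up to max_retries times; if all retries fail, an error
    is raised instead of looping forever, so a genuinely broken
    file surfaces a message rather than a hang.
    """
    ret_val, img0 = cap.read()
    retries = 0
    while not ret_val and frame < frames - 1:
        retries += 1
        if retries > max_retries:
            raise RuntimeError(f'Failed to read frame {frame} after {max_retries} retries')
        ret_val, img0 = cap.read()
    return ret_val, img0
```

A False return on the final frame is passed through unchanged, since that is the normal end-of-video signal.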
@breunigs got it. Yes, please submit a PR, thanks!
Hello,
I have a problem running successful video inference on some video files. I trained my traffic sign detection model on the GTSDB dataset and tested it on a random YouTube video, and everything worked perfectly. However, whenever I try the same thing with my GoPro footage, it stops after exactly 41 frames. I tried different GoPro videos and it's always the same number of frames, no matter how long the video is. I'm sure it's not a training problem, as other videos worked perfectly, and I even tried these GoPro videos with COCO; the result is always the same: the process stops at 41 frames, with no error, no warning, nothing.
Video properties:
After resizing the video and changing it to .avi instead of .mp4, it worked fine. I am inclined to believe it's a problem with resolution, fps, or bitrate.
Would appreciate some help, as resizing every single video would be quite time-consuming and tedious.
I stumbled upon an issue with the same problem, #1202, but it was closed with no answer.
EDIT: RTX3090, CUDA 11.0