Thanks for the report. Can you give more info on your system (Windows, Mac, Linux), your Python version, and a minimal code example to reproduce the error?
Windows, Python 3.5. I had this error on Ubuntu 16.04 as well with Python 3.5. Basically I am taking in every frame of a 5-minute (1280 x 720) video, processing it, and then writing it to an output video file.

clip1 = VideoFileClip("//.../.mp4")
clip = clip1.fl_image(func)
clip.write_videofile(video_output, audio=False)
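In full, that snippet amounts to roughly the following; func must accept and return an RGB frame (a numpy array), and the paths and names here are placeholders for the ones elided above:

from moviepy.editor import VideoFileClip

def func(frame):
    # stand-in for the actual per-frame processing; takes an RGB numpy
    # array and must return an array of the same shape
    return frame

video_output = "output.mp4"            # placeholder output path

clip1 = VideoFileClip("input.mp4")     # the real input path is elided above
clip = clip1.fl_image(func)
clip.write_videofile(video_output, audio=False)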
This is the error I'm getting:
  0%|          | 21/7749 [00:31<3:07:42, 1.46s/it]
Traceback (most recent call last):
File "C:/vehicle-detection/main.py", line 50, in
Thread 0x0000378c (most recent call first):
  File "C:\Program Files\Python35\lib\site-packages\tqdm\_tqdm.py", line 97 in run
  File "C:\Program Files\Python35\lib\threading.py", line 914 in _bootstrap_inner
  File "C:\Program Files\Python35\lib\threading.py", line 882 in _bootstrap
Current thread 0x00002fc8 (most recent call first):
  File "C:\Program Files\Python35\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 116 in __del__
Thanks so much!
So apparently it is not a MoviePy problem. The error appears during the generation of a frame, but it is due to the frame generator, not MoviePy. In your project there is a file, vehicle-detection/lane.py,
in which you are doing a polyfit on an empty vector.
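For illustration (this is not the actual lane.py code), np.polyfit fails as soon as it is given empty input, so a guard along these lines inside the frame function avoids the crash:

import numpy as np

lane_x = np.array([])   # hypothetical: no lane pixels detected in this frame
lane_y = np.array([])

# np.polyfit raises ("expected non-empty vector for x") when given empty
# arrays, so only fit when points were actually detected.
if lane_x.size > 0 and lane_y.size > 0:
    fit = np.polyfit(lane_y, lane_x, 2)
else:
    fit = None  # e.g. fall back to the previous frame's fit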
I see. Thanks so much!
Hi, I seem to have a similar problem; can you please help me out? I am using Ubuntu 16, Python 3.6, and TensorFlow with GPU support.
This is the output when my code is running:
loading existing classifier...
Building YOLO_small graph...
Layer 1 : Type = Conv, Size = 7 7, Stride = 2, Filters = 64, Input channels = 3
Layer 2 : Type = Pool, Size = 2 2, Stride = 2
Layer 3 : Type = Conv, Size = 3 3, Stride = 1, Filters = 192, Input channels = 64
Layer 4 : Type = Pool, Size = 2 2, Stride = 2
Layer 5 : Type = Conv, Size = 1 1, Stride = 1, Filters = 128, Input channels = 192
Layer 6 : Type = Conv, Size = 3 3, Stride = 1, Filters = 256, Input channels = 128
Layer 7 : Type = Conv, Size = 1 1, Stride = 1, Filters = 256, Input channels = 256
Layer 8 : Type = Conv, Size = 3 3, Stride = 1, Filters = 512, Input channels = 256
Layer 9 : Type = Pool, Size = 2 2, Stride = 2
Layer 10 : Type = Conv, Size = 1 1, Stride = 1, Filters = 256, Input channels = 512
Layer 11 : Type = Conv, Size = 3 3, Stride = 1, Filters = 512, Input channels = 256
Layer 12 : Type = Conv, Size = 1 1, Stride = 1, Filters = 256, Input channels = 512
Layer 13 : Type = Conv, Size = 3 3, Stride = 1, Filters = 512, Input channels = 256
Layer 14 : Type = Conv, Size = 1 1, Stride = 1, Filters = 256, Input channels = 512
Layer 15 : Type = Conv, Size = 3 3, Stride = 1, Filters = 512, Input channels = 256
Layer 16 : Type = Conv, Size = 1 1, Stride = 1, Filters = 256, Input channels = 512
Layer 17 : Type = Conv, Size = 3 3, Stride = 1, Filters = 512, Input channels = 256
Layer 18 : Type = Conv, Size = 1 1, Stride = 1, Filters = 512, Input channels = 512
Layer 19 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 512
Layer 20 : Type = Pool, Size = 2 2, Stride = 2
Layer 21 : Type = Conv, Size = 1 1, Stride = 1, Filters = 512, Input channels = 1024
Layer 22 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 512
Layer 23 : Type = Conv, Size = 1 1, Stride = 1, Filters = 512, Input channels = 1024
Layer 24 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 512
Layer 25 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 1024
Layer 26 : Type = Conv, Size = 3 3, Stride = 2, Filters = 1024, Input channels = 1024
Layer 27 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 1024
Layer 28 : Type = Conv, Size = 3 3, Stride = 1, Filters = 1024, Input channels = 1024
Layer 29 : Type = Full, Hidden = 512, Input dimension = 50176, Flat = 1, Activation = 1
Layer 30 : Type = Full, Hidden = 4096, Input dimension = 512, Flat = 0, Activation = 1
Layer 32 : Type = Full, Hidden = 1470, Input dimension = 4096, Flat = 0, Activation = 0
2017-09-23 11:07:21.645197: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-23 11:07:21.645215: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-23 11:07:21.645237: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-23 11:07:21.645240: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-23 11:07:21.645247: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-09-23 11:07:22.013380: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-09-23 11:07:22.013814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1050
major: 6 minor: 1 memoryClockRate (GHz) 1.493
pciBusID 0000:01:00.0
Total memory: 3.95GiB
Free memory: 3.90GiB
2017-09-23 11:07:22.013843: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017-09-23 11:07:22.013848: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2017-09-23 11:07:22.013898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0)
Loading complete!
<yolo_pipeline.yolo_tf object at 0x7fd241cd0860>
This is where the error starts
[MoviePy] >>>> Building video /home/shuoyuan/examples/project_YOLO.mp4
[MoviePy] Writing video /home/shuoyuan/examples/project_YOLO.mp4
  0%|          | 0/26 [00:00<?, ?it/s]<yolo_pipeline.yolo_tf object at 0x7fd241cd0860>
  4%|█▋        | 1/26 [00:00<00:06, 3.92it/s]<yolo_pipeline.yolo_tf object at 0x7fd241cd0860>
[... tqdm progress output repeats in the same pattern for each of the 26 frames ...]
 96%|█████████████████████████████████████████▎| 25/26 [00:05<00:00, 4.37it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: /home/shuoyuan/examples/project_YOLO.mp4
Fatal Python error: PyImport_GetModuleDict: no module dictionary!
Thread 0x00007fd1825fd700 (most recent call first):
  File "/home/shuoyuan/anaconda3/lib/python3.6/site-packages/tqdm/_tqdm.py", line 97 in run
  File "/home/shuoyuan/anaconda3/lib/python3.6/threading.py", line 916 in _bootstrap_inner
  File "/home/shuoyuan/anaconda3/lib/python3.6/threading.py", line 884 in _bootstrap
Current thread 0x00007fd294d3d700 (most recent call first):
  File "/home/shuoyuan/anaconda3/lib/python3.6/site-packages/moviepy/video/io/VideoFileClip.py", line 116 in __del__
The code is here: https://github.com/JunshengFu/vehicle-detection

When I am writing a video clip, I get this error. It originates from VideoFileClip.py, in the __del__ method.
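Not a confirmed fix, but since the crash comes from VideoFileClip.__del__ running while the interpreter is already shutting down, it may help to release the clip explicitly before the script exits. A minimal sketch, with placeholder paths and a placeholder process_frame function (not the names used in that repository):

from moviepy.editor import VideoFileClip

def process_frame(frame):
    # placeholder for the YOLO pipeline; must return a frame of the same shape
    return frame

clip = VideoFileClip("project_video.mp4")            # hypothetical input path
annotated = clip.fl_image(process_frame)
annotated.write_videofile("project_YOLO.mp4", audio=False)

# Release the ffmpeg reader explicitly so cleanup does not happen inside
# __del__ at interpreter shutdown; newer MoviePy releases also provide
# clip.close() and "with VideoFileClip(...) as clip:" for the same purpose.
clip.reader.close()
if clip.audio is not None:
    clip.audio.reader.close_proc()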