qiaoguan opened this issue 6 years ago
I also have the same problem. I can't train on my data because of this. Could someone please show me how to solve it? Thanks.
I ran into the same problem. For 5 classes, I tried both 30 and 50 filters, but both failed.
@hsjimwang For the filters: it is `filters = num/3 * (classes + 5)`, which gives 3 * (5 + 5) = 30 for 5 classes. As for the crash, try setting subdivisions to 16 (with batch=64) and also set random=0.
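Roughly, the relevant pieces of yolov3.cfg would then look like the sketch below (the exact layer layout depends on your cfg; the filters value goes on the [convolutional] layer right before each of the three [yolo] layers):

```
[net]
batch=64
subdivisions=16

# ... other layers ...

[convolutional]
filters=30        # num/3 * (classes + 5) = 3 * (5 + 5) for 5 classes

[yolo]
classes=5
num=9
random=0          # turning off multi-scale training also saves memory
```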
I had the same issue when trying to load a YOLO_v3 model with an old darknet build (one suitable for YOLO_v2).
Hi, I'm trying to train 2 classes. Please help with the error below:
```
Loading weights from darknet19_448.conv.23...Done!
Learning Rate: 0.0001, Momentum: 0.9, Decay: 0.0005
Loaded: 0.236922 seconds
Segmentation fault (core dumped)
```
I am using this system config: 3 GB GPU, 32 GB RAM, i5 processor, CUDA 9.0, OpenCV 3.6.
Hey guys, how did you solve the problem?
I'm also getting the same error. Can anyone solve it?
Reduce the batch size, and also try decreasing the input image size; see the sketch below.
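Concretely, that means something like this in the [net] section of the cfg (the exact values are just a starting point; width and height must be multiples of 32):

```
[net]
batch=32          # smaller batch; or keep batch=64 and raise subdivisions instead
subdivisions=16
width=416         # smaller network input; 320 is smaller still
height=416
```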
I keep getting this problem as well when trying to train YOLOv3 with 19 classes on multiple GPUs. It trains for a while but then stops with the segmentation fault... did you guys solve it?
It is quite possible that some of the annotations are faulty, with coordinates going out of bounds (x or y < 0 or > 1). Unfortunately, the original darknet repo doesn't handle these cases. I used the AlexeyAB repo, which handled them well, and successfully trained a model on my data.
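If it helps, here is a rough sketch of such a bounds check for YOLO-format label files (one `class x_center y_center width height` line per object, all values normalized to [0, 1]); the data/obj label directory is an assumption, so adjust it to your layout:

```python
import glob
import os

# Assumed location of the YOLO-format .txt label files; adjust to your layout.
LABEL_DIR = "data/obj"

for path in glob.glob(os.path.join(LABEL_DIR, "*.txt")):
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            if len(parts) != 5:
                print(f"{path}:{lineno}: malformed line: {line.strip()}")
                continue
            x, y, w, h = map(float, parts[1:])
            # Every coordinate in a YOLO label must stay within [0, 1].
            if not all(0.0 <= v <= 1.0 for v in (x, y, w, h)):
                print(f"{path}:{lineno}: out-of-bounds box: {line.strip()}")
```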
It's also possible the parameters are not right; the command should look like `./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights data/dog.jpg`.
I was getting the same problem, and it persisted after trying all the tricks stated above. It was probably because of memory: my dataset images were high quality and large. I compressed them and that solved it for me.
Hey srushtiD13, my images are also high quality and large; how did you compress them? I keep getting the errors `Cannot load image ...` and `Segmentation fault (core dumped)`.
A core dump is sometimes caused by accessing memory that the process is not allowed to touch. There are multiple sites on the internet that can compress images for free, but the only problem is that they have upload limits, so it is a tedious job! If you are trying YOLO on your own machine, I would suggest you try training on Google Colab... it's easy and fast, and there are no memory issues! (If you want to compress locally instead, see the sketch below.)
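A minimal local-compression sketch using Pillow (the folder names are placeholders; re-encoding as JPEG at quality 85 usually shrinks large photos a lot):

```python
import glob
import os
from PIL import Image

SRC_DIR = "data/obj"        # placeholder: folder holding the original images
DST_DIR = "data/obj_small"  # compressed copies are written here
MAX_SIDE = 1280             # longest side after downscaling

os.makedirs(DST_DIR, exist_ok=True)
for path in glob.glob(os.path.join(SRC_DIR, "*.jpg")):
    img = Image.open(path)
    img.thumbnail((MAX_SIDE, MAX_SIDE))  # shrink in place, keeping aspect ratio
    img.save(os.path.join(DST_DIR, os.path.basename(path)), quality=85)
```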
I am training on a GeForce GTX 1080 with 8GB memory, which should be enough (Ubuntu 16.04); I don't think I can get more than that for free on Google Colab. However, I did find out that my training data includes images that contain no objects. I filmed a short video with my iPhone at 30 fps, then saved each frame as a jpg using a simple Python script (a sketch of that kind of script is below). Using Yolo_Mark I went through each jpg and marked my objects; some jpgs did not contain any objects, but I left them in the same folder. Now I am going through the folder and deleting the photos with no objects. Will update. In addition, some label txt files list object '1' before object '0'; is that a problem? (I have only 2 object classes.)
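For reference, the kind of frame-extraction script I mean looks roughly like this (the video path is a placeholder), using OpenCV:

```python
import os
import cv2

VIDEO = "iphone_clip.mov"   # placeholder: path to the source video
OUT_DIR = "frames"

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO)
i = 0
while True:
    ok, frame = cap.read()
    if not ok:              # end of video (or a read error)
        break
    cv2.imwrite(os.path.join(OUT_DIR, f"frame_{i:06d}.jpg"), frame)
    i += 1
cap.release()
```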
UPDATE: a lot of the images listed in train.txt did not exist, and that is why I had the error. After going through all my pictures inside the data/obj folder and verifying that train.txt lists only pictures that actually exist, I could train the model properly. No errors. A small script for that check is below.
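The check itself can be a few lines of Python (the train.txt path should match whatever your .data file points at):

```python
import os

TRAIN_LIST = "data/train.txt"  # assumed path; match it to your .data file

with open(TRAIN_LIST) as f:
    paths = [line.strip() for line in f if line.strip()]

missing = [p for p in paths if not os.path.isfile(p)]
print(f"{len(missing)} of {len(paths)} listed images are missing")
for p in missing:
    print(p)
```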
I also had the same issue as AsternA.
When I tried to use YOLOv3, an error happened, while everything works fine with YOLOv2.