pjreddie / darknet

Convolutional Neural Networks
http://pjreddie.com/darknet/

Couldn't open file: train/labelsImage_jpg.rf.8bdbe22463c3162f05180ff91e67f039.txt #2289

Open eltonfernando opened 4 years ago

eltonfernando commented 4 years ago

I'm training on a custom dataset and I get this error.

The strange thing is that this file (labelsImage_jpg.rf.8bdbe22463c3162f05180ff91e67f039.txt) is not in my train.txt list.

I also searched my project folder for labelsImage_*, and that name doesn't exist anywhere. I have no idea where it is coming from.

Sometimes the error changes, e.g. Couldn't open file: sars-gettylabels-157005245_jpg.rf.5f33a6bd1f780b98e348507f3835d4d6.txt

Sometimes training runs for a few iterations and then stops.
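
For what it's worth, a minimal sketch (not part of darknet) that cross-checks a list file like data/train.txt against the label files: it assumes the common custom-data layout where each image has a .txt label with the same basename next to it, so the script name and default path here are just examples.

# check_labels.py -- hypothetical helper: for every image listed in train.txt,
# verify the image exists and that a label .txt with the same basename exists.
import os
import sys

def check(list_file="data/train.txt"):
    missing = []
    with open(list_file) as f:
        for line in f:
            img = line.strip()
            if not img:
                continue
            if not os.path.isfile(img):
                print("image missing:", img)
            label = os.path.splitext(img)[0] + ".txt"
            if not os.path.isfile(label):
                missing.append(label)
    print(len(missing), "label files missing")
    for m in missing:
        print("  " + m)

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "data/train.txt")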

vischulisem commented 3 years ago

I am having this same error.

Issuing this command to train the custom detector (in the darknet directory): !./darknet detector train data/obj.data cfg/custom-yolov4-detector.cfg yolov4.conv.137 -dont_show -map

This is what shows up in the command line: training_error.pdf

There is no .jpg or .txt file in train or validation with this name: data/obj/labelsImage_jpg.rf.65322d152e11c328217e630364f8e5f5.txt

I've searched my whole device and have no idea where it is getting this from. If anyone knows how to resolve this, please let me know! :)

eltonfernando commented 3 years ago

Is your obj.data file something like this?

classes = 1
train = data/train.txt
valid = data/valid.txt
names = data/people.names
backup = backup/

Do you have an image with that name (labelsImage_jpg.rf.65322d152e11c328217e630364f8e5f5)? If so, it needs a corresponding .txt label file.
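
For reference, the label .txt that goes with each image follows the standard darknet format: one object per line, "class_id x_center y_center width height", with all box values normalized to 0..1. A small hypothetical checker along those lines (the function name and arguments are mine, not from this thread):

# validate_label.py -- checks that a darknet label file has five fields per
# line, a class id below num_classes, and box values normalized to 0..1.
def validate_label(path, num_classes):
    problems = []
    with open(path) as f:
        for i, line in enumerate(f, 1):
            parts = line.split()
            if not parts:
                continue
            if len(parts) != 5:
                problems.append("line %d: expected 5 fields, got %d" % (i, len(parts)))
                continue
            cls = int(parts[0])
            x, y, w, h = map(float, parts[1:])
            if not 0 <= cls < num_classes:
                problems.append("line %d: class id %d out of range" % (i, cls))
            if not all(0.0 <= v <= 1.0 for v in (x, y, w, h)):
                problems.append("line %d: box values not normalized to 0..1" % i)
    return problems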

vischulisem commented 3 years ago

That image does not exist within my test, train, or valid folders. I am not sure where it is coming from. Maybe a bug? This is my obj.data:

classes = 3
train = data/train.txt
valid = data/valid.txt
names = data/obj.names
backup = backup/


eltonfernando commented 3 years ago

This could be an error in the .cfg file for 3 classes (YOLOv3).

Did you change the filters of the last convolutional layer before each [yolo] layer to 24, i.e. (classes + 5) * 3, and set classes=3?
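
As a quick check of that rule (for YOLOv3-style heads, the conv layer right before each [yolo] layer needs filters = (classes + 5) * masks_per_head, and each head in this cfg uses 3 masks), a tiny sketch:

# (classes + 4 box coords + 1 objectness) * number of anchor masks per head
def yolo_filters(classes, masks_per_head=3):
    return (classes + 5) * masks_per_head

print(yolo_filters(3))  # 24 -> filters=24 for 3 classes
print(yolo_filters(2))  # 21 -> what a 2-class cfg should use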

[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=2
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 60000
policy=steps
steps=4800,4500
scales=.1,.1

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=1

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

###########

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=24
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=3
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 8

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=24
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=3
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

vischulisem commented 3 years ago

I am using YOLOv4. For my dataset, I only have 2 classes (images of people wearing masks and people not wearing masks). So in obj.data I changed classes = 2. I'm still getting the same error message. This is my cfg file (attached). I've also included a link to my training configuration script if that helps: https://github.com/vischulisem/YOLOv4_Project/blob/master/training_config.py


eltonfernando commented 3 years ago

I was unable to run your script; it seems to depend on several other files, such as yolov4-custom.cfg.

Can you show me your cfg/custom-yolov4-detector.cfg?

vischulisem commented 3 years ago

custom-yolov4-detector.cfg

[net] batch=64 subdivisions=24 width=416 height=416 channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue = .1

learning_rate=0.001 burn_in=1000 max_batches=4000 policy=steps steps=3200.0,3600.0 scales=.1,.1

cutmix=1

mosaic=1

# :104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=64 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=32 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-7

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=128 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-10

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=256 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-28

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=512 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-28

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=1024 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-16

[convolutional] batch_normalize=1 filters=1024 size=1 stride=1 pad=1 activation=mish

##########################

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

# SPP

[maxpool] stride=1 size=5

[route] layers=-2

[maxpool] stride=1 size=9

[route] layers=-4

[maxpool] stride=1 size=13

[route] layers=-1,-3,-5,-6

# End SPP

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[upsample] stride=2

[route] layers = 85

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[route] layers = -1, -3

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[upsample] stride=2

[route] layers = 54

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[route] layers = -1, -3

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

##########################

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 0,1,2 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.2 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5

[route] layers = -4

[convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=256 activation=leaky

[route] layers = -1, -16

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 3,4,5 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.1 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5

[route] layers = -4

[convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=512 activation=leaky

[route] layers = -1, -37

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 6,7,8 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 random=1 scale_x_y = 1.05 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5


eltonfernando commented 3 years ago

OK, one observation:

I believe you calculated max_batches = (classes * 2000) = 4000.

That formula only applies when the result is greater than 6000; when it comes out lower, you ignore it and use 6000.

So it should be:

max_batches=6000

steps=4800,5400

# 4800 = 6000 * 0.8 (80%)
# 5400 = 6000 * 0.9 (90%)
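
The same rule in code form (a sketch of the classes * 2000 formula with a floor of 6000 and steps at 80%/90%, as described above):

# max_batches = classes * 2000, but never less than 6000;
# steps are then 80% and 90% of max_batches.
def training_schedule(classes):
    max_batches = max(classes * 2000, 6000)
    steps = (int(max_batches * 0.8), int(max_batches * 0.9))
    return max_batches, steps

print(training_schedule(2))  # (6000, (4800, 5400))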

vischulisem commented 3 years ago

Ah, good observation. I changed max_batches and steps in the config file... however, I'm still getting the error:

Loading weights from yolov4.conv.137...Done!

Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005

Resizing

512

Couldn't open file: data/obj/labelsImage_jpg.rf.65322d152e11c328217e630364f8e5f5.txt


eltonfernando commented 3 years ago

Very strange. One question: you have 2 classes, correct?

Your obj.data file says 3 (https://github.com/pjreddie/darknet/issues/2289#issuecomment-719835846), but your cfg/custom-yolov4-detector.cfg says 2.
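
A quick, hypothetical cross-check of the two files (it just reads every classes value; the paths are the ones used in this thread):

# compare the classes entry in obj.data with the classes= lines in the cfg
import re

def read_classes(path):
    values = []
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*classes\s*=\s*(\d+)", line)
            if m:
                values.append(int(m.group(1)))
    return values

data_classes = read_classes("data/obj.data")                  # one value expected
cfg_classes = read_classes("cfg/custom-yolov4-detector.cfg")  # one per [yolo] layer
print("obj.data:", data_classes, "cfg:", cfg_classes)
if len(set(data_classes + cfg_classes)) != 1:
    print("mismatch: obj.data and the cfg disagree on the number of classes")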

Ronales commented 3 years ago

Maybe you should check the Makefile option (OPENCV=0), and make sure the label .txt files use Unix rather than Windows end-of-line format.
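
For the line-ending point, a small sketch that rewrites label files with Unix line endings (a stand-in for dos2unix; the data/obj path is just the one from this thread):

# convert CRLF ("\r\n") line endings to LF ("\n") in every .txt under data/obj
import glob

for path in glob.glob("data/obj/**/*.txt", recursive=True):
    with open(path, "rb") as f:
        raw = f.read()
    fixed = raw.replace(b"\r\n", b"\n")
    if fixed != raw:
        with open(path, "wb") as f:
            f.write(fixed)
        print("converted:", path)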