pjreddie / darknet

Convolutional Neural Networks
http://pjreddie.com/darknet/

cannot load image #611

Open pjryan513 opened 6 years ago

pjryan513 commented 6 years ago

I am training on my own dataset and I keep getting the error "Cannot load image" for the .jpg files I want to train on, and "Couldn't open file" for the .txt files with the bounding-box info. I was following the advice of a similar question here. I added the images and labels folders, putting the .jpg files in images and the .txt files in labels. I still get the same errors, and I have tried both yolo2 and yolo3.

my obj.data file is called phallusia.data, location is darknet/data/phallusia.data and the contents are

classes= 1
train = data/train.txt
valid = data/test.txt
names = data/phallusia.names
backup = backup/

my train.txt location is darknet/data/train.txt, the contents are

data/phallusia/images/v1scene00001.jpg
data/phallusia/images/v1scene00051.jpg
data/phallusia/images/v1scene00101.jpg

my test.txt location is darknet/data/test.txt, the contents are

data/phallusia/images/v4scene03051.jpg
data/phallusia/images/v4scene03101.jpg
data/phallusia/images/v4scene03151.jpg

my cfg/yolo-phallusia.cfg contains,

[net]
batch=64
subdivisions=8
height=416
width=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0001
max_batches = 45000
policy=steps
steps=100,25000,35000
scales=10,.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[reorg]
stride=2

[route]
layers=-1,-3

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=30
activation=linear

[region]
anchors = 1.08,1.19,  3.42,4.41,  6.63,11.38,  9.42,5.11,  16.62,10.52
bias_match=1
classes=1
coords=4
num=5
softmax=1
jitter=.2
rescore=1

object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1

absolute=1
thresh = .6
random=0

The path for darknet is /home/chemistry/Patricks Work/v3/darknet; should I move it to the home directory? The path to the data folder is /home/chemistry/Patricks Work/v3/darknet/data, the path to the images is /home/chemistry/Patricks Work/v3/darknet/data/phallusia/images, and the path to the labels is /home/chemistry/Patricks Work/v3/darknet/data/phallusia/labels. Did I put the folders in the right places?

Hopefully this is enough info to figure out what I am doing wrong.
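Editor's note: a quick way to rule out path problems like the ones described above is to check every entry in train.txt the way darknet will use it. Below is a minimal sketch in Python (the function name is mine; the replace-"images"-with-"labels" convention matches how darknet derives label paths from image paths):

```python
import os

def check_darknet_list(list_file):
    """Report entries of a darknet train/test list whose image is missing
    on disk, or whose derived label file (path with 'images' -> 'labels'
    and the extension -> '.txt') does not exist."""
    problems = []
    with open(list_file) as f:
        for line in f:
            path = line.strip()  # also catches stray '\r' and spaces
            if not path:
                continue
            if not os.path.isfile(path):
                problems.append(("missing image", path))
            label = path.replace("images", "labels").rsplit(".", 1)[0] + ".txt"
            if not os.path.isfile(label):
                problems.append(("missing label", label))
    return problems
```

Running it on both data/train.txt and data/test.txt and getting back an empty list means darknet should at least be able to find every file.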

kimdinhthaibk commented 5 years ago

In my case the problem was related to the two files train.txt and test.txt: I made sure that no line had spaces at the beginning. After that it worked.
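Editor's note: a small sketch (Python; the function name is mine) that normalizes such a list file in place, stripping leading and trailing whitespace from every line and writing Unix line endings:

```python
def clean_list_file(path):
    """Rewrite a darknet image-list file in place: strip leading and
    trailing whitespace (including any '\r') from every line, drop
    empty lines, and write plain '\n' line endings."""
    with open(path, "r") as f:          # universal newlines: \r\n -> \n
        entries = [line.strip() for line in f]
    with open(path, "w", newline="\n") as f:
        f.writelines(entry + "\n" for entry in entries if entry)
```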

GLorieul commented 4 years ago

The fix proposed by @rpratesh in his post is almost certainly related to the fact that Windows uses different EOL (end-of-line) characters than Linux. The data file from the original post comes from a tutorial by timebutt, who uses Windows (as he specifies in the tutorial), hence his train.txt and test.txt files use the Windows style of EOL.

This is visible when viewing the files with a hexadecimal file reader:

me@machine:data/nfpa$ xxd train_good.txt | head
00000000: 6461 7461 2f6e 6670 612f 696d 6167 6573  data/nfpa/images
00000010: 2f70 6f73 2d31 2e6a 7067 0a64 6174 612f  /pos-1.jpg.data/
00000020: 6e66 7061 2f69 6d61 6765 732f 706f 732d  nfpa/images/pos-

me@machine:data/nfpa$ xxd train_timebutt.txt | head
00000000: 6461 7461 2f6e 6670 612f 696d 6167 6573  data/nfpa/images
00000010: 2f70 6f73 2d31 2e6a 7067 0d0a 6461 7461  /pos-1.jpg..data
00000020: 2f6e 6670 612f 696d 6167 6573 2f70 6f73  /nfpa/images/pos

Note how the 0a byte (LF) changes into 0d0a (CR LF).

A similar issue was reported in Axeley's darknet issues.
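Editor's note: on Linux, `dos2unix train.txt` fixes this directly. As an illustrative alternative, here is a tiny Python sketch doing the same byte-level conversion the hex dumps above show (0d0a back to 0a):

```python
def crlf_to_lf(path):
    """Convert Windows line endings (CR LF, bytes 0d 0a) to Unix
    line endings (LF, byte 0a) in place, like dos2unix."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))
```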

snowuyl commented 4 years ago

(screenshot: yolov3_train_ok_Dos_shell) I encountered the "cannot load image" issue with Cygwin. I solved it by using the Windows command line (DOS) instead, following the procedure described at this link: https://github.com/AlexeyAB/darknet/tree/47c7af1cea5bbdedf1184963355e6418cb8b1b4f#how-to-train-pascal-voc-data

devb2020 commented 4 years ago

@ryokomy

Seems to be an issue with the train.txt and test.txt files. I've recreated them on my own and it worked.

The default text files from https://timebutt.github.io/content/other/NFPA_dataset.zip seem to have some problem.

I was encountering the same issue. I read your post and re-created train.txt with the image paths added manually, which miraculously worked. :)

Tomorrowfee commented 4 years ago

I had the same issue: my images were .png files renamed to .jpg. Fixed.

Finally, I found that pictures whose extension I had merely renamed could not be loaded. I had to re-save them as new .jpg files with an image-editing tool, after which they loaded successfully. Before changing an extension, remember that different picture formats (png, jpg, gif, ...) use different encodings, so renaming the file alone does not convert it.

AliPANPALLI commented 3 years ago

[AliPANPALLI quotes the opening post in full, machine-translated into Turkish; the content, including the .data, train.txt, test.txt, and cfg listings, is identical to the post above.]

I am having the same problem now. If you have solved it, could you share the solution with me? Thank you, and healthy days.

7c commented 3 years ago

Make sure to break lines with \n instead of \r\n; that solved my problem.

eze1376 commented 3 years ago

Make sure that the line endings of train.txt and test.txt match your operating system. I changed the line endings from Windows to Unix/Linux and it solved my problem!

shalini35 commented 3 years ago

The first thing you should check is that you have permission to edit/read the files necessary.

The solution of changing the files to include the absolute path worked on Mac for me. However, it did not solve the issue on Windows.

On Windows, I was running into the same error, and the issue turned out to be the end-of-line sequences. Make sure they are "\n" and not "\r\n". In a text editor (like Visual Studio Code), make sure the file uses LF rather than CRLF, and that it is encoded as UTF-8.

Side note: I also switched to this version of darknet, https://github.com/pengdada/darknet-win-linux, as it worked better on Linux.

I was also preparing the train.txt file on Windows, and \r\n was included in the file, which was causing the data-read issue. Thanks for the help.

ryanalexmartin commented 3 years ago

For me, changing from /c/Users/ryana/code_win/darknet/data/cifar/train/4588_bird.png to C:/Users/~~~~.png did the trick on Windows/PowerShell/MinGW.

It seems that on a Windows system, PowerShell wants "C:/" instead of "/c/"; who would have thought... 💀
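Editor's note: the /c/... style is the MSYS/Git-Bash path convention. If you are generating list files in a script, a hypothetical helper (name mine) to convert such paths to the drive-letter form might look like:

```python
import re

def msys_to_windows(path):
    """Turn an MSYS/Git-Bash style path like /c/Users/me/a.png into the
    C:/Users/me/a.png form; any other path passes through unchanged."""
    m = re.match(r"^/([A-Za-z])/(.*)$", path)
    return f"{m.group(1).upper()}:/{m.group(2)}" if m else path
```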

Vitaris commented 2 years ago

Hi all, I encountered the same problem. After checking all the paths, the JPEG settings, and anything else you can imagine, I found that I had two hidden spaces after the .jpg. I hope this hint helps; it could probably be fixed when loading the files.

(screenshot: hidden_spaces)
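Editor's note: such invisible trailing characters are easy to surface with a short check (a sketch; the function name is mine):

```python
def find_trailing_whitespace(list_file):
    """Return (line_number, repr_of_line) for lines whose path ends in
    spaces or tabs that would be passed verbatim to the image loader."""
    bad = []
    with open(list_file) as f:
        for num, line in enumerate(f, start=1):
            line = line.rstrip("\r\n")        # keep only the path text
            if line != line.rstrip(" \t"):    # trailing spaces/tabs left?
                bad.append((num, repr(line)))
    return bad
```

Using repr() makes the offending whitespace visible in the output, e.g. `'data/a.jpg  '`.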