ayumiymk / aster.pytorch

ASTER in Pytorch
MIT License

lmdb.Error: /share/zhui/reg_dataset/NIPS2014: No such file or directory #27

Closed Cabbagehust2507 closed 4 years ago

Cabbagehust2507 commented 4 years ago

When I cloned this project and ran it, I got the error below, and I don't know how to fix it. I hope you can help me with this issue. Thank you so much!

```
Traceback (most recent call last):
  File "main.py", line 230, in <module>
    main(args)
  File "main.py", line 127, in main
    args.height, args.width, args.batch_size, args.workers, True, args.keep_ratio)
  File "main.py", line 39, in get_data
    dataset_list.append(LmdbDataset(datadir, voc_type, max_len, num_samples))
  File "/home/wanna/Documents/aster.pytorch/lib/datasets/dataset.py", line 53, in __init__
    self.env = lmdb.open(root, max_readers=32, readonly=True)
lmdb.Error: /share/zhui/reg_dataset/NIPS2014: No such file or directory
```

shaohailin commented 4 years ago

@Cabbagehust2507 I have the same error. Could you tell me how to solve it?

Cabbagehust2507 commented 4 years ago

@shaohailin You replace that path with the directory of your own data. For example, if you have 103 images, put them in "./abc/img" and put a text file containing the labels, gt.txt, in "./abc". After that, build your own dataset with the create_svtp*.py script in "lib/tools", and write its output to, say, "./data/lmdb/data_train". This code only runs on data in .lmdb format; see the README for more information. So you replace the path with "./data/lmdb/data_train". You can try it, and feel free to ask me if you run into any trouble. :)
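For reference, a minimal sketch of that conversion step, assuming the CRNN-style LMDB layout (image-%09d / label-%09d / num-samples keys) that lib/tools/create_svtp_lmdb.py appears to follow; the build_lmdb helper and the ./abc and ./data/lmdb/data_train paths are only illustrative:

```python
import os
import lmdb  # pip install lmdb

def build_lmdb(gt_file, image_dir, output_path):
    """Write the images and labels listed in gt.txt into an LMDB dataset.

    Keys follow the CRNN-style convention (image-%09d / label-%09d /
    num-samples); check lib/tools/create_svtp_lmdb.py for the exact format
    it expects before relying on this sketch.
    """
    os.makedirs(output_path, exist_ok=True)
    env = lmdb.open(output_path, map_size=1 << 32)  # ~4 GB; adjust to your data size
    count = 0
    with env.begin(write=True) as txn:
        with open(gt_file, 'r') as f:
            for line in f:
                if not line.strip():
                    continue
                # gt.txt: first column is the image name, second column is the label
                image_name, label = line.strip().split(maxsplit=1)
                with open(os.path.join(image_dir, image_name), 'rb') as img:
                    image_bytes = img.read()
                count += 1
                txn.put(b'image-%09d' % count, image_bytes)
                txn.put(b'label-%09d' % count, label.encode())
        txn.put(b'num-samples', str(count).encode())
    env.close()

# Illustrative paths from the comment above.
build_lmdb('./abc/gt.txt', './abc/img', './data/lmdb/data_train')
```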

shaohailin commented 4 years ago

@Cabbagehust2507 1. I want to know whether it is necessary to create LMDBs for both the training sets (Synth90k and SynthText) and the 7 test datasets. 2. If it is necessary, then for a test dataset such as IIIT5K_3000, should I use only the test images of IIIT5K_3000 and lexicon.txt to create the .lmdb, or both the train images and the test images? Sorry to bother you!

shaohailin commented 4 years ago

[screenshot] There are two .txt files; what should I do to create the gt.txt file? @Cabbagehust2507

Cabbagehust2507 commented 4 years ago

Hi @shaohailin, (1) this source code requires the datasets to be converted into LMDB files. (2) All of the train and test datasets need LMDB files, but you create them separately: a train output (.lmdb) and a test output (.lmdb).
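As a rough sketch of how the two separate LMDBs are then consumed: the LmdbDataset(datadir, voc_type, max_len, num_samples) call below is taken from the traceback above, while the import path, directory names, and argument values are placeholders:

```python
# Import path inferred from the traceback (lib/datasets/dataset.py); adjust if needed.
from lib.datasets.dataset import LmdbDataset

# Train and test LMDBs live in separate directories and are loaded separately.
# voc_type, max_len, and num_samples values here are placeholders, not the repo defaults.
train_set = LmdbDataset('./data/lmdb/data_train', 'ALLCASES_SYMBOLS', 100, float('inf'))
test_set = LmdbDataset('./data/lmdb/data_test', 'ALLCASES_SYMBOLS', 100, float('inf'))
```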

Cabbagehust2507 commented 4 years ago

I don't use this data, but the gt.txt file is a file with two columns: the 1st column contains the image name and the 2nd column contains the label of that image. You can see it clearly in "aster.pytorch/lib/tools/create_svtp_lmdb.py".
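For example, a gt.txt with two made-up entries would look like this (image name, then label, separated here by a space; check create_svtp_lmdb.py for the exact separator it expects):

```
img_001.jpg hello
img_002.jpg world
```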

shaohailin commented 4 years ago

Thanks for your reply! I have created the gt.txt file for the IIIT5K dataset following your description of the gt.txt format, and then successfully converted it into LMDB. Thank you very much! @Cabbagehust2507

Cabbagehust2507 commented 4 years ago

@shaohailin You're welcome :)

hrx000 commented 1 year ago

Hey, I am getting this same error while training a diffusion model. [Screenshot (100)] My dataset is in ./ffhq.lmdb/ffhq/ (e.g. ./ffhq.lmdb/ffhq/1.png), and ffhq is just a folder for storing output.

YooWang commented 1 year ago

I think you can check the path to the file.
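A minimal way to check the path before lmdb.open is called; the root value below is just the directory mentioned in the comment above, so replace it with your own dataset path:

```python
import os
import lmdb

root = './ffhq.lmdb'  # placeholder; point this at your LMDB dataset directory
if not os.path.isdir(root):
    raise FileNotFoundError(f'LMDB directory does not exist: {os.path.abspath(root)}')

# Same call as in lib/datasets/dataset.py line 53 (from the traceback above).
env = lmdb.open(root, max_readers=32, readonly=True)
```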
