NVlabs / imaginaire

NVIDIA's Deep Imagination Team's PyTorch Library

Found 0 sequences #150

Closed · Quanta-of-solitude closed 2 years ago

Quanta-of-solitude commented 2 years ago

I have arranged the data as required and split it into train and validation sets, like this (a small layout-check sketch is included after the two listings):

train (340+ video frames):

    -data_root
        -images
            -seq0001
                -00000.jpg
                -00001.jpg
            -seq0002
                -0000.jpg
                -0001.jpg
            .........
        -landmarks-dlib68
            -seq0001
                -00000.json
                -00001.json
            -seq0002
                -0000.json
                -0001.json
            .........

val (101 video frames):

    -data_val
        -images
            -seq0001
                -00000.jpg
                -00001.jpg
            -seq0002
                -0000.jpg
                -0001.jpg
            .........
        -landmarks-dlib68
            -seq0001
                -00000.json
                -00001.json
            -seq0002
                -0000.json
                -0001.json
            .........
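
To double-check this layout before building the LMDBs, here is a minimal sanity-check sketch (not part of imaginaire; the script name check_layout.py and its root argument are my own). It walks images/ and landmarks-dlib68/ under a given root and reports, per sequence folder, how many frames and landmark files it sees and whether their basenames line up.

import os
import sys

# Usage: python check_layout.py data_root/   (or data_val/)
root = sys.argv[1]
images_dir = os.path.join(root, 'images')
lmk_dir = os.path.join(root, 'landmarks-dlib68')

for seq in sorted(os.listdir(images_dir)):
    frames = [f for f in os.listdir(os.path.join(images_dir, seq)) if f.endswith('.jpg')]
    lmk_path = os.path.join(lmk_dir, seq)
    landmarks = [f for f in os.listdir(lmk_path) if f.endswith('.json')] if os.path.isdir(lmk_path) else []
    print(f'{seq}: {len(frames)} jpg frames, {len(landmarks)} json landmarks')
    # Warn if frame and landmark basenames do not match one-to-one.
    if {os.path.splitext(f)[0] for f in frames} != {os.path.splitext(f)[0] for f in landmarks}:
        print(f'  WARNING: image/landmark basenames differ in {seq}')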

I used:

python scripts/build_lmdb.py --config configs/projects/fs_vid2vid/my_dataset/ampO1.yaml --data_root data_root/ --output_root datasets/my_dataset/lmdb/[train | val] --paired
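
For reference, since data_root/ holds the train frames and data_val/ holds the val frames (as in the listings above), the [train | val] placeholder stands for two separate runs, i.e.:

python scripts/build_lmdb.py --config configs/projects/fs_vid2vid/my_dataset/ampO1.yaml --data_root data_root/ --output_root datasets/my_dataset/lmdb/train --paired
python scripts/build_lmdb.py --config configs/projects/fs_vid2vid/my_dataset/ampO1.yaml --data_root data_val/ --output_root datasets/my_dataset/lmdb/val --paired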

This produced files with the following structure:

    -train
        -all_filenames.json
        -metadata.json
        -images
            -data.mdb
            -lock.mdb
        -landmarks-dlib68
            -data.mdb
            -lock.mdb
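
To confirm that anything was actually written, the outputs can be inspected directly. This is a minimal sketch using the standard json module and the lmdb Python package (the path is the train output from above; it only prints the top-level contents of the JSON files rather than assuming their internal structure):

import json
import lmdb

out = 'datasets/my_dataset/lmdb/train'

# What the build step recorded about sequences and files.
with open(out + '/all_filenames.json') as f:
    data = json.load(f)
    print('all_filenames.json type:', type(data).__name__)
    print(str(data)[:200])
with open(out + '/metadata.json') as f:
    print('metadata.json:', json.load(f))

# Raw entry counts in each LMDB.
for name in ('images', 'landmarks-dlib68'):
    env = lmdb.open(out + '/' + name, readonly=True, lock=False)
    with env.begin() as txn:
        print(name, 'LMDB entries:', txn.stat()['entries'])
    env.close()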

I changed the config file's input LMDB paths: train to datasets/my_dataset/lmdb/train and val to datasets/my_dataset/lmdb/val.

But after I run the training command, I get:

Using random seed 2
Training with 1 GPUs.
Make folder logs/
cudnn benchmark: True
cudnn deterministic: False
Creating metadata
['images', 'landmarks-dlib68']
Data file extensions: {'images': 'jpg', 'landmarks-dlib68': 'json'}
Searching in dir: images
Found 0 sequences
Found 0 files
Folder at datasets/my_dataset/lmdb/train/images opened.
Folder at datasets/my_dataset/lmdb/train/landmarks-dlib68 opened.
Num datasets: 1
Num sequences: 0
Max sequence length: 0
Requested sequence length (30) + few shot K (1) > max sequence length (0). 
Reduced sequence length to -1
Epoch length: 0
Creating metadata
['images', 'landmarks-dlib68']
Data file extensions: {'images': 'jpg', 'landmarks-dlib68': 'json'}
Searching in dir: images
Found 0 sequences
Found 0 files
Folder at datasets/my_dataset/lmdb/val/images opened.
Folder at datasets/my_dataset/lmdb/val/landmarks-dlib68 opened.
Num datasets: 1
Num sequences: 0
Max sequence length: 0
Requested sequence length (2) + few shot K (1) > max sequence length (0). 
Reduced sequence length to -1
Epoch length: 0
Train dataset length: 0
Val dataset length: 0

So, what is causing this error? Any help will be appreciated.
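
A related check that might help narrow this down: the log shows the folders do open ("Folder at ... opened.") yet zero sequences are found, so it is worth confirming that the configured roots resolve from the directory training is launched from and that the data.mdb files have a non-trivial size. A minimal sketch using only the paths from above:

import os

for split_root in ('datasets/my_dataset/lmdb/train', 'datasets/my_dataset/lmdb/val'):
    print(split_root, 'exists:', os.path.isdir(split_root))
    for name in ('images', 'landmarks-dlib68'):
        p = os.path.join(split_root, name, 'data.mdb')
        size = os.path.getsize(p) if os.path.isfile(p) else 'missing'
        print(' ', p, size)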

Skyrelixa commented 2 years ago

@Quanta-of-solitude Hello! Have you found a solution to this issue yet? I am encountering the same problem now.

Quanta-of-solitude commented 2 years ago

@Skyrelixa nope