xuebinqin / DIS

This is the repo for our new project Highly Accurate Dichotomous Image Segmentation

finding 3000 training units but still saying num_samples =0 #29

Open jacksonhunter opened 2 years ago

jacksonhunter commented 2 years ago

This is amazing, but I'm having some trouble with DIS.

Sorry, I'm new at this. It's finding 3000 training images but still saying num_samples=0.

Error:

train_valid_inference_main.py
/home/jakko/.conda/envs/pytorch18/lib/python3.7/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
building model...
batch size: 8
--- create training dataloader ---
------------------------------ train --------------------------------
--->>> train dataset 0 / 1 DIS5K-TR <<<---
-im- DIS5K-TR /home/jakko/Pictures/DIS5K/DIS5K/DIS-TR/im : 3000
-gt- DIS5K-TR /home/jakko/Pictures/DIS5K/DIS5K/DIS-TR/gt : 3000
Traceback (most recent call last):
  File "train_valid_inference_main.py", line 727, in <module>
    hypar=hypar)
  File "train_valid_inference_main.py", line 541, in main
    shuffle = True)
  File "/home/jakko/Github/DIS/IS-Net/data_loader_cache.py", line 97, in create_dataloaders
    gos_dataloaders.append(DataLoader(gos_dataset, batch_size=batch_size, shuffle=shuffle, num_workers=numworkers))
  File "/home/jakko/.conda/envs/pytorch18/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 266, in __init__
    sampler = RandomSampler(dataset, generator=generator)  # type: ignore
  File "/home/jakko/.conda/envs/pytorch18/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 104, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
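
For what it's worth, the final ValueError comes from PyTorch itself: a DataLoader built with shuffle=True creates a RandomSampler, and the sampler refuses a dataset whose length is zero, so the dataset object handed to it ends up empty even though 3000 files were found on disk. A minimal sketch (illustration only, not the DIS code) that reproduces the same message:

# Illustration only (not the DIS code): DataLoader(shuffle=True) builds a RandomSampler,
# and RandomSampler refuses any dataset whose length is 0.
from torch.utils.data import DataLoader

empty_dataset = []  # stands in for a dataset that reports len() == 0
loader = DataLoader(empty_dataset, batch_size=8, shuffle=True)
# ValueError: num_samples should be a positive integer value, but got num_samples=0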

--------------- STEP 1: Configuring the Train, Valid and Test datasets ---------------

## configure the train, valid and inference datasets
train_datasets, valid_datasets = [], []
dataset_1, dataset_1 = {}, {}

dataset_tr = {"name": "DIS5K-TR",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TR/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TR/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-TR"}

dataset_vd = {"name": "DIS5K-VD",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-VD/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-VD/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-VD"}

dataset_te1 = {"name": "DIS5K-TE1",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE1/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE1/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-TE1"}

dataset_te2 = {"name": "DIS5K-TE2",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE2/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE2/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-TE2"}

dataset_te3 = {"name": "DIS5K-TE3",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE3/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE3/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-TE3"}

dataset_te4 = {"name": "DIS5K-TE4",
             "im_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE4/im",
             "gt_dir": "/home/jakko/Pictures/DIS5K/DIS5K/DIS-TE4/gt",
             "im_ext": ".jpg",
             "gt_ext": ".png",
             "cache_dir":"../DIS5K-Cache/DIS-TE4"}
### test your own dataset
dataset_demo = {"name": "your-dataset",
             "im_dir": "../your-dataset/im",
             "gt_dir": "",
             "im_ext": ".jpg",
             "gt_ext": "",
             "cache_dir":"../your-dataset/cache"}

train_datasets = [dataset_tr] ## users can create multiple dictionaries to set a list of datasets as the training set
# valid_datasets = [dataset_vd] ## users can create multiple dictionaries to set a list of datasets as the validation or inference sets
valid_datasets = [dataset_vd] # dataset_vd, dataset_te1, dataset_te2, dataset_te3, dataset_te4] # and hypar["mode"] = "valid" for inference,

### --------------- STEP 2: Configuring the hyperparameters for Training, validation and inferencing ---------------
hypar = {}

## -- 2.1. configure the model saving or restoring path --
hypar["mode"] = "train"
## "train": for training,
## "valid": for validation and inferening,
## in "valid" mode, it will calculate the accuracy as well as save the prediciton results into the "hypar["valid_out_dir"]", which shouldn't be ""
## otherwise only accuracy will be calculated and no predictions will be saved
hypar["interm_sup"] = False ## in-dicate if activate intermediate feature supervision

if hypar["mode"] == "train":
    hypar["valid_out_dir"] = "" ## for "train" model leave it as "", for "valid"("inference") mode: set it according to your local directory
    hypar["model_path"] ="/home/jakko/Github/DIS/saved_models/your_model_weights" ## model weights saving (or restoring) path
    hypar["restore_model"] = "" ## name of the segmentation model weights .pth for resume training process from last stop or for the inferencing
    hypar["start_ite"] = 0 ## start iteration for the training, can be changed to match the restored training process
    hypar["gt_encoder_model"] = ""
else: ## configure the segmentation output path and the to-be-used model weights path
    hypar["valid_out_dir"] = "../your-results/"##"../DIS5K-Results-test" ## output inferenced segmentation maps into this fold
    hypar["model_path"] = "/home/jakko/Github/DIS/saved_models/your_model_weights" ## load trained weights from this path
    hypar["restore_model"] = "isnet.pth"##"isnet.pth" ## name of the to-be-loaded weights
jacksonhunter commented 2 years ago

I had to delete the cache folder left over from an earlier typo.

chasecjg commented 1 year ago

I had the same problem. Did you manage to solve it?

jacksonhunter commented 1 year ago

Yeah, delete the cache folder and try again.

chasecjg commented 1 year ago

Are you referring to removing the "cache_dir=../" in the file path?

jacksonhunter commented 1 year ago

Sorry about the lack of specifics... I'm going from memory here, but no. That setting is where to look, though: the code creates image cache folders at those paths, and you have to delete those generated folders.
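
Concretely, those are the folders pointed to by the "cache_dir" entries in the config (e.g. "../DIS5K-Cache/DIS-TR" above). A minimal cleanup sketch, assuming those default paths; the caches get rebuilt on the next run:

# Cleanup sketch (paths assumed from the config above; adjust to your own cache_dir values):
import shutil

for cache_dir in ["../DIS5K-Cache/DIS-TR", "../DIS5K-Cache/DIS-VD"]:
    shutil.rmtree(cache_dir, ignore_errors=True)  # remove the stale image cache so it is rebuilt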

chasecjg commented 1 year ago

Thanks very much. It worked for me.

chasecjg commented 1 year ago

Are you referring to removing the "cache_dir=../" in the file path? I deleted it, but I still get the error.


dashuaigeyige commented 1 year ago

thanks