Closed: Acmenwangtuo closed this issue 4 years ago.
I would like to reproduce this. How many GPUs do you have? And what model? Can we see an example image?
I have one Tesla V100 32 GB. The data is from https://monuseg.grand-challenge.org/Data/
As you can see, I want to detect the centers of the nuclei.
That GPU should be enough. You must have converted the groundtruth of that data to a CSV file that the "locating-objects-without-bboxes" project can read, with a location for each nucleus center. Can you please upload that CSV file somewhere?
Yeah, I have generated the CSV file. It is at https://drive.google.com/open?id=19TTTPlYZCIHmGglrLzg33Xpi9uMHEAim
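For anyone else preparing data for this project: a minimal sketch of building such a groundtruth CSV from nuclei centers. The one-row-per-image layout with a `locations` column of (row, col) points follows what the project's examples suggest, but the exact column names are an assumption here, and the filenames and coordinates below are hypothetical; check the project's README for the authoritative format.

```python
# Sketch: write a groundtruth CSV with one row per image and a
# 'locations' column listing (row, col) nuclei centers.
# Column names are an assumption; data below is hypothetical.
import csv

# Hypothetical annotations: image filename -> list of (row, col) centers
annotations = {
    "image_01.png": [(120, 340), (455, 80)],
    "image_02.png": [(12, 70)],
}

with open("gt.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "count", "locations"])
    for filename, centers in annotations.items():
        # 'count' is redundant with len(locations) but kept explicit
        writer.writerow([filename, len(centers), str(centers)])
```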
I'm also going to need:
1. The complete groundtruth CSV (the one you uploaded appears to be only part of the dataset).
2. The exact command you ran with train.py, so I can reproduce the same hyperparameters.

Yeah, the CSV is actually only a part of the dataset; it has just 16 images, and I will use the rest for testing. The parameters I use are the same as the ones you provided, except that the image size is 1000x1000. Validation takes about 8 minutes per image.
I'm still going to need items 1 and 2 from my previous message.
The complete gt.csv is at https://drive.google.com/open?id=1CrR2xElG9npVNW_TcIf3-gihHInQC6Hv And the script is:
python -m object-locator.train --train-dir ./traindata --batch-size 4 --visdom-env mytrainsession --visdom-server localhost --lr 1e-3 --val-dir ./traindata --optim Adam --save saved_new_model.ckpt --imgsize 1000x1000 --val-freq 100 --epochs 200
I cannot reproduce this error yet because I get an out-of-memory error when running your command, even when setting --batch-size 1. This is probably because my GPU only has 12 GB. Your input image size of 1000x1000 yields a CNN with 125 M parameters (this is shown when you run train.py), which seems pretty large.
How slow is validation if you run it at 256x256, so that I can reproduce it?
Yeah, I have run into the same problem as you, so I resized the images to 256x256. Validation is still very slow, taking a few minutes per image, with low recall and accuracy.
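One thing worth double-checking when resizing: the groundtruth center points must be scaled by the same factor as the image, or the training targets no longer line up with the nuclei, which by itself can produce low recall and accuracy. A minimal sketch (the point values are hypothetical):

```python
# Sketch: scale (row, col) center points when an image is resized,
# e.g. from 1000x1000 down to 256x256. Point values are hypothetical.
def scale_points(points, old_size, new_size):
    """Scale (row, col) points from old_size to new_size, both (height, width)."""
    sy = new_size[0] / old_size[0]  # row scale factor
    sx = new_size[1] / old_size[1]  # column scale factor
    return [(r * sy, c * sx) for (r, c) in points]

centers = [(500, 500), (0, 999)]
scaled = scale_points(centers, (1000, 1000), (256, 256))
print(scaled)  # centers mapped into 256x256 coordinates
```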
Please post the full standard output of your training log to https://pastebin.com/ and let us have a look.
Closing due to inactivity and lack of info. Feel free to reopen if you can show us the training log.
When I run train.py on my own data, validation takes a very long time, with very low GPU utilization, and I want to know why. My images are 1000x1000, with thousands of objects in each image.
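One plausible contributor to the slowdown, independent of the GPU: point-matching metrics such as the averaged Hausdorff distance compare every estimated point against every groundtruth point, which is O(n*m) distance computations per image, so images with thousands of objects are far more expensive to validate than sparse ones. The sketch below illustrates the metric's definition in plain Python; it is not the project's actual implementation, and the point sets are hypothetical.

```python
# Sketch: averaged Hausdorff distance between two point sets.
# Cost is O(len(a) * len(b)), which grows quickly when each image
# contains thousands of points. Illustration only, not the
# project's implementation.
import math

def averaged_hausdorff(a, b):
    """Average nearest-neighbor distance from a to b, plus from b to a."""
    d_ab = sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(math.dist(p, q) for q in a) for p in b) / len(b)
    return d_ab + d_ba

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0)]
print(averaged_hausdorff(a, b))  # 0.5 (a->b) + 0.0 (b->a) = 0.5
```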