xinario / SAGAN

Sharpness-aware Low Dose CT Denoising Using Conditional Generative Adversarial Network

GPU issue #7

Open guylsan opened 6 years ago

guylsan commented 6 years ago

Hi, I have an issue when I run the command "python pre_process.py -s 1 -i ./dicoms -o ./datasets/experiment/test". The prompt responds "warnings.warn(msg) 0 files in the folder", even though I have a lot of DICOM images in there and the path is correct. Do you have any idea what the problem is? My laptop only has an AMD GPU... could that be causing the error?

Is there a solution or a modification that would let it run on the CPU? Thanks a lot.

xinario commented 6 years ago

Preprocessing doesn't need a GPU. The warning says it didn't find any images in the given folder, so please check that your DICOM images are actually inside the ./dicoms folder.
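
One quick way to narrow this down is to list what a Python process actually sees in that folder. Below is a minimal sketch, assuming pydicom is available purely for this check; pre_process.py itself may use a different reader or filter by file extension, so its criteria could differ:

    # Hypothetical check, not part of pre_process.py: list the folder contents
    # and try to parse each file's DICOM header.
    import os
    import pydicom

    folder = "./dicoms"
    entries = sorted(os.listdir(folder))
    print(len(entries), "entries found in", folder)
    for name in entries:
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        try:
            # stop_before_pixels keeps the check fast: only the header is parsed
            pydicom.dcmread(path, stop_before_pixels=True)
            print("valid DICOM:", name)
        except Exception:
            print("not readable as DICOM:", name)

If the files show up as valid DICOM here but pre_process.py still reports 0 files, the mismatch is more likely in how the script filters filenames (for example, by extension) than in the data itself.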

guylsan commented 6 years ago

Yes, they are in this folder :( !

xinario commented 6 years ago

It's hard to tell what has gone wrong in this case unless you post a snapshot of your folder structure.

I've updated my code for pre- and post-processing. You can try it again and see whether the warning still appears.

guylsan commented 6 years ago

It's working, thanks! But now I have another problem when I run DATA_ROOT=./datasets/experiment name=SAGAN which_direction=AtoB phase=test th test.lua

I get this error:

guylain@guylain-HP-Pavilion-17-Notebook-PC:~/torch/torch-hdf5/SAGAN$ DATA_ROOT=./datasets/experiment name=SAGAN which_direction=AtoB phase=test th test.lua
{
  input_nc : 3
  results_dir : "./results/"
  name : "SAGAN"
  batchSize : 1
  phase : "test"
  fineSize : 512
  aspect_ratio : 1
  how_many : "all"
  gpu : 1
  nThreads : 1
  DATA_ROOT : "./datasets/experiment"
  serial_batch_iter : 1
  preprocess : "regular"
  norm : "batch"
  which_epoch : "latest"
  which_direction : "AtoB"
  cudnn : 1
  serial_batches : 1
  display : 0
  output_nc : 3
  loadSize : 512
  checkpoints_dir : "./checkpoints"
  display_id : 200
  flip : 0
}
Random Seed: 292

threads...1

Starting donkey with id: 1 seed: 293 table: 0x40c9e3d0 ./datasets/experiment
trainCache /home/guylain/torch/torch-hdf5/SAGAN/cache/_home_guylain_torch_torch-hdf5_SAGAN_datasets_experiment_test_trainCache.t7
Creating train metadata
serial batch:, 1 table: 0x40c9ba58
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath
cmd..wc -L '/tmp/lua_owgTbK' |cut -f1 -d' '
62 samples found.......
 0/62 .......................] ETA: 0ms | Step: 0ms
Updating classList and imageClass appropriately
[===================== 1/1 =======================>] Tot: 0ms | Step: 0ms
Cleaning up temporary files
Dataset Size: 62
checkpoints_dir ./checkpoints
/home/guylain/torch/install/bin/luajit: /home/guylain/torch/install/share/lua/5.1/trepl/init.lua:389: module 'cudnn' not found:No LuaRocks module found for cudnn
  no field package.preload['cudnn']
  no file '/home/guylain/.luarocks/share/lua/5.1/cudnn.lua'
  no file '/home/guylain/.luarocks/share/lua/5.1/cudnn/init.lua'
  no file '/home/guylain/torch/install/share/lua/5.1/cudnn.lua'
  no file '/home/guylain/torch/install/share/lua/5.1/cudnn/init.lua'
  no file './cudnn.lua'
  no file '/home/guylain/torch/install/share/luajit-2.1.0-beta1/cudnn.lua'
  no file '/usr/local/share/lua/5.1/cudnn.lua'
  no file '/usr/local/share/lua/5.1/cudnn/init.lua'
  no file '/home/guylain/.luarocks/lib/lua/5.1/cudnn.so'
  no file '/home/guylain/torch/install/lib/lua/5.1/cudnn.so'
  no file '/home/guylain/torch/install/lib/cudnn.so'
  no file './cudnn.so'
  no file '/usr/local/lib/lua/5.1/cudnn.so'
  no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
  [C]: in function 'error'
  /home/guylain/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
  /home/guylain/torch/torch-hdf5/SAGAN/util/util.lua:187: in function 'load'
  test.lua:79: in main chunk
  [C]: in function 'dofile'
  ...lain/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
  [C]: at 0x00405d50

I can't find a solution to this issue anywhere online. Thank you for everything.
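
The failing require is the cudnn Torch binding, which test.lua tries to load because the options printed above include cudnn : 1 and gpu : 1. Two possible ways forward, both assumptions rather than confirmed fixes: on a machine with an NVIDIA GPU and the cuDNN library installed, the binding can usually be added with "luarocks install cudnn"; on a CPU-only or AMD setup, and assuming test.lua reads these options from environment variables the same way it reads DATA_ROOT and phase, the cuDNN/GPU path could be disabled with something like "gpu=0 cudnn=0 DATA_ROOT=./datasets/experiment name=SAGAN which_direction=AtoB phase=test th test.lua". Whether this test script fully supports CPU-only inference is not confirmed here.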