yosinski / deep-visualization-toolbox

DeepVis Toolbox
http://yosinski.com/deepvis
MIT License

how to use the crop_max_patches.py? #109

Open visonpon opened 7 years ago

visonpon commented 7 years ago

How do I use crop_max_patches.py? Can you give an example of the script usage, like the one for the optimize_image.py script? @yosinski

visonpon commented 7 years ago

assert args.do_maxes or args.do_deconv ……, 'specify at least one do_* option to output.' What does this mean? What do I need to specify? @yosinski

tsungjenh commented 7 years ago

@visonpon I'm stuck at find_max_acts.py. Did you run it successfully? If I use the CIFAR-10 dataset, do I need to convert the binary files to .jpg and put them all together in one folder before I run this script? Also, what does parser.add_argument('outfile', type = str, help = 'output filename for pkl') mean? Please help!

heitorrapela commented 7 years ago

@visonpon To answer "can you give examples of the script usage like use the optimize_image.py script?", you can run it like this: ./optimize_image.py --decay 0.0001 --blur-radius 1.0 --blur-every 4 --max-iter 1000 --lr-policy constant --lr-params "{'lr': 100.0}"

You can change the other parameters too. For example, there is the output path (default: optimize_results/opt; always change this parameter, or the script will fail with an error when it tries to overwrite the previous files. I hit this error after 1k iterations, roughly an hour of running) and the neuron to pick (default: 130). I'm using data-size (224,224) to run GoogLeNet. If you get errors, you can search Stack Overflow, e.g. this link, where you need to modify the file caffe/python/caffe/io.py.
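One way to avoid the overwrite error mentioned above is to give every run its own output directory. This is my own sketch, not part of the toolbox; the function name is mine:

```python
import os
import time


def fresh_output_prefix(base="optimize_results"):
    """Create a unique, timestamped directory under `base` and return a
    file prefix inside it, so repeated runs never try to overwrite the
    previous results."""
    run_dir = os.path.join(base, time.strftime("run_%Y%m%d_%H%M%S"))
    os.makedirs(run_dir)  # raises OSError if the directory already exists
    return os.path.join(run_dir, "opt")
```

You would then pass the returned prefix to optimize_image.py's output-path option instead of the default optimize_results/opt.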

As for your first question, I'm still trying to run it... xD

heitorrapela commented 7 years ago

@visonpon

  1. How do you use crop_max_patches.py? Example usage:
    • python crop_max_patches.py output.pkl '../models/bvlc-googlenet/deploy.prototxt' '../models/bvlc-googlenet/bvlc_googlenet.caffemodel' '../input_images/' 'out.txt' 'outputFolder' 'pool5/7x7_s1' --do-deconv --do-maxes --N 9
    • The positional arguments, in order (list generated with python crop_max_patches.py --h):
      • nmt_pkl: which pickled NetMaxTracker to load (the output of find_max_acts.py)
      • net_prototxt: network prototxt to load (see the script)
      • net_weights: network weights to load (see the script)
      • datadir: directory to look for files in (the folder of input images)
      • filelist: list of image files to consider, one per line. (Reading the code, each line needs something like "input image filename" + "label", so I created a .txt file with one image and its label per line.)
  2. What does assert args.do_maxes or args.do_deconv ……, 'specify at least one do_* option to output.' mean? You must pass at least one of these flags:
    • --do-maxes: output max patches
    • --do-deconv: output deconv patches
    • --do-deconv-norm: output deconv-norm patches
    • --do-backprop: output backprop patches
    • --do-backprop-norm: output backprop-norm patches
    • --do-info: output an info file containing max filenames and labels
    My choice was --do-deconv --do-maxes with --N 9.
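To build the filelist file described above, here is a minimal sketch of my own (it assumes a flat image directory and one "filename label" pair per line, which is what the thread suggests the script expects; the function name is hypothetical):

```python
import os


def write_filelist(image_dir, out_path, label=0):
    """Write one 'filename label' line per image found in image_dir,
    the format crop_max_patches.py's filelist argument appears to expect."""
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith((".jpg", ".jpeg", ".png")))
    with open(out_path, "w") as f:
        for name in names:
            f.write("%s %d\n" % (name, label))
    return len(names)
```

If your images have different labels, you would look them up per file instead of passing a single constant label.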

@pawanhsu
"Im stuck at find_max_acts.py. Did you successfully run it? If I use cifar10 dataset do I need to transform the binary to jpg then put it all together in to a folder before I run this script or? And also what does it mean parser.add_argument('outfile', type = str, help = 'output filename for pkl')?? Please help!!!"

  1. You need to use the same .jpg images that you feed to your net.
  2. parser.add_argument('outfile', type = str, help = 'output filename for pkl'): outfile is the name of the pickle file that find_max_acts.py writes (it is what you later pass to crop_max_patches.py as nmt_pkl).
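Since the thread asks about CIFAR-10: the Python version of the dataset stores each batch as a pickled dict whose 'data' array has shape (N, 3072), with the 1024 red values first, then green, then blue, row-major. A sketch of my own (it assumes numpy and Pillow are available; the function name is hypothetical) that dumps such a batch to .jpg files the toolbox scripts can read:

```python
import os

import numpy as np
from PIL import Image


def cifar_batch_to_jpegs(batch, out_dir):
    """batch: dict with 'data' (N x 3072 uint8, R/G/B planes) and 'labels'.
    Writes one <index>_<label>.jpg per image and returns the file names."""
    os.makedirs(out_dir, exist_ok=True)
    data = np.asarray(batch["data"], dtype=np.uint8)
    names = []
    for i, (row, label) in enumerate(zip(data, batch["labels"])):
        # (3, 32, 32) channel planes -> (32, 32, 3) HWC RGB image
        img = row.reshape(3, 32, 32).transpose(1, 2, 0)
        name = "%05d_%d.jpg" % (i, label)
        Image.fromarray(img).save(os.path.join(out_dir, name))
        names.append(name)
    return names
```

You could then point find_max_acts.py's datadir at out_dir and build the filelist from the written names.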

Some results I'm getting with GoogLeNet, for the output of pool5/7x7_s1 (I stopped the script early because I had to go out; it had generated about 500 of the roughly 9.5k images).

Unit0/deconv_002.jpg Unit0/maxim_002.jpg Unit0051/deconv_004.jpg Unit0051/maxim_004.jpg
