Closed hojat-kaveh closed 7 years ago
ok, I get output similar to issue #15, and I don't know how to adapt jobParallel to fit my compute environment, as @s-gupta has suggested.
I tried editing script_edges.m in "Testing code" section like this:
jobParam = struct('numThreads', 2, 'codeDir', pwd(), 'preamble', '', 'matlabpoolN', 1, 'globalVars', {{}}, 'fHandle', @empty, 'numOutputs', 1);
resourceParam = struct('mem', 3, 'hh', 1, 'numJobs', 50, 'ppn', 2, 'nodes', 3, 'logDir', '/home/hojat/Documents/application/rcnn-depth/results/log/pbsBatchDir/', 'queue', 'psi', 'notif', false, 'username', 'hojat', 'headNode', 'psi');
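For what it's worth, my understanding (an assumption on my part, not verified against simplePBS's source) is that these resourceParam fields end up as a qsub resource request roughly like the one below; printing the equivalent command line makes it easier to sanity-check the values:

```shell
# Hypothetical mapping of the resourceParam fields above to a qsub call
# (mem in GB, hh = walltime in hours, ppn = processors per node, queue = psi).
# echo is used so the command is only printed, not actually submitted.
echo qsub -l nodes=3:ppn=2,mem=3gb,walltime=1:00:00 -q psi job.sh
```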
Is this incorrect? Any help is greatly appreciated.
jobParallel sends the parameters to simplePBS, which should run the shell script using the system function. So the question is: how can I make sure the shell script runs correctly?
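One way to narrow this down is to run the script and check its exit status outside MATLAB first, since system returns the shell's exit status. A minimal sketch (myjob.sh is a hypothetical stand-in for a script simplePBS would generate, not an actual file from the repo):

```shell
# Create a trivial stand-in job script (hypothetical name).
cat > /tmp/myjob.sh <<'EOF'
#!/bin/sh
echo "job ran"
EOF
# Run it the way MATLAB's system() would, and inspect the exit status:
# 0 means success; anything else points at the script, not jobParallel.
sh /tmp/myjob.sh
echo "exit status: $?"
```

In MATLAB itself, `[status, output] = system('sh /tmp/myjob.sh')` gives you the same status code plus the captured output.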
Since this is a duplicate of issue #15, I'm closing it.
Hi, I'm new to Caffe and deep learning. As stated here, I'd like to use all the cores on my machine for faster performance. I'm running the code on an HP Envy 15t-j100 (4 physical cores, 8 virtual cores). I'd like to know which parts can be run in parallel, and how I should go about this multi-threading. Can you point me to some resources?
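Not the author, but one place to start: Caffe's CPU mode gets most of its parallelism from the BLAS library it was built against, so a common first step (assuming an OpenBLAS build; adjust if you use MKL or ATLAS) is to set the thread-count environment variables to your physical core count before launching:

```shell
# Assumption: Caffe linked against OpenBLAS (or an OpenMP-enabled BLAS).
# 4 physical cores on this laptop; hyper-threads rarely help BLAS kernels.
export OPENBLAS_NUM_THREADS=4
export OMP_NUM_THREADS=4
# Confirm how many cores the OS reports:
nproc
```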