Hi! May I ask about the relationship between multi CPU cores and SparkNet?
I wanted to find out the relationship between the number of CPU cores and the running time, so I ran CifarApp with the spark-submit --master local command. I expected it to run CifarApp on only one physical CPU core, because the official Spark documentation says that --master local will "Run Spark locally with one worker thread (i.e. no parallelism at all)."
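For reference, this is roughly the invocation I used (the jar path and the trailing worker-count argument are placeholders for my local build; if I remember correctly the main class is apps.CifarApp):

```
# Roughly what I ran -- jar path and worker-count argument are placeholders
spark-submit --master local \
  --class apps.CifarApp \
  /path/to/sparknet-assembly.jar 1
```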
However, when I checked with the top command, all 8 of my physical cores were being used. At first I thought I might have given the wrong options, so I tried various options, but every time Spark used all 8 physical cores.
I also tried making a Spark standalone cluster on my desktop, setting SPARK_WORKER_INSTANCES=8 and SPARK_WORKER_CORES=1 to get 8 one-core worker instances, and then running SparkNet on only 1 worker. Spark still used all 8 of my physical cores.
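Concretely, the standalone setup looked like this (a minimal sketch; the host and port are the standalone defaults, and --total-executor-cores 1 is how I pinned the job to a single one-core worker):

```
# conf/spark-env.sh -- eight one-core worker instances on one machine
export SPARK_WORKER_INSTANCES=8
export SPARK_WORKER_CORES=1

# Submit against the standalone master, capped at one core in total
spark-submit --master spark://localhost:7077 \
  --total-executor-cores 1 \
  --class apps.CifarApp \
  /path/to/sparknet-assembly.jar 1
```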
In comparison, when I tried the Pi example from the official Spark site, Spark used only 1 physical core. I say 'physical cores' here because Spark has its own notion of cores, which need not match the hardware. In my case there are 8 physical cores, Spark was allocated only 1 core, and yet all 8 physical cores were busy under that single Spark core.
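For the Pi comparison I used the example runner bundled with the Spark distribution (the MASTER environment variable selects the master URL, and the argument is the number of partitions):

```
# SparkPi from the Spark distribution; with MASTER=local it stayed on one core
MASTER=local ./bin/run-example SparkPi 100
```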
So far I have two possible explanations for this behavior: (1) even if Spark allocates only 1 thread to SparkNet, SparkNet itself uses all available physical cores; (2) Spark uses all available physical cores whenever it can, and SparkNet was in that situation while the Pi example was not.
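If it matters, here is how I am thinking of testing explanation (1). As far as I understand, SparkNet calls into Caffe, and if that native code was built against OpenMP or OpenBLAS it can spawn its own threads independently of Spark. Assuming the Caffe build honors the standard thread-cap variables, something like this should drop usage to one core if (1) is right:

```
# Hedged check for explanation (1): cap native-library threading and rerun.
# If CPU usage falls to one core, the extra cores came from SparkNet's native
# side (Caffe/BLAS threads), not from Spark's own task scheduling.
export OMP_NUM_THREADS=1        # cap OpenMP threads
export OPENBLAS_NUM_THREADS=1   # cap OpenBLAS threads
spark-submit --master local \
  --class apps.CifarApp \
  /path/to/sparknet-assembly.jar 1
```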
Would you help me? Thank you for your help!