Closed: binga closed this issue 7 years ago.

binga asked: Does tffm support multi-threading? While running tffm, not all the cores on my machine are running at 100% CPU. Is there any way to add multi-threading at the TF level in tffm, the way it is configured in Keras here: https://github.com/fchollet/keras/blob/master/keras/backend/tensorflow_backend.py#L105 ? Thank you.
Hi @binga!
I have only 2 cores on my laptop, and TF seems to use both of them at 100%.
Maybe it depends on the BLAS/ATLAS compilation options?
That said, you can pass an intra_op_parallelism_threads setting to the TFFM constructor (via its session_config argument), in a similar way to gpu_benchmark.ipynb:
import tensorflow as tf
from tffm import TFFMClassifier

# Limit the number of threads TF uses inside a single op.
nb_thread = 4
config = tf.ConfigProto(intra_op_parallelism_threads=nb_thread,
                        allow_soft_placement=True)

model = TFFMClassifier(
    order=3,
    rank=10,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
    n_epochs=100,
    batch_size=-1,
    init_std=0.001,
    reg=0.001,
    input_type='dense',
    session_config=config,  # thread settings are applied to the underlying TF session
    log_dir='./tmp'
)
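If you also want to control how many ops may run concurrently (in addition to the thread pool used inside a single op), the TF 1.x ConfigProto accepts inter_op_parallelism_threads as well. A minimal sketch, assuming the same TF 1.x API as above; the thread counts are only illustrative and should be tuned for your machine:

import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=4,  # threads used inside a single op (e.g. a matmul)
    inter_op_parallelism_threads=2,  # number of ops that may execute concurrently
    allow_soft_placement=True
)
# Pass it to TFFMClassifier exactly as above: session_config=config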
Please re-open if the problem is still not resolved.