Accenture / AmpliGraph

Python library for Representation Learning on Knowledge Graphs https://docs.ampligraph.org
Apache License 2.0

Limit the number of threads used when fitting a model #199

Closed. wradstok closed this issue 4 years ago.

wradstok commented 4 years ago

Background and Context

Recently, I got an email from my local sysadmin asking why I was creating ~300 threads for every script I was running. Apparently the load was slowing down other users, which was not considered a very nice thing to do :).

It would be nice if there were a way to limit the number of threads that ampligraph (or tensorflow?) uses when fitting a model. I already tried setting the tensorflow intra_op_parallelism_threads and inter_op_parallelism_threads settings (roughly as sketched below), but that did not make a difference.
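
For reference, this is roughly what I tried (a sketch of the TF 1.x settings, not my exact script; the Session shown here is only to illustrate where the config would normally be applied):

import tensorflow as tf

# Sketch only: cap both TensorFlow thread pools at one thread and pass the
# config when creating a session.
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)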

sumitpai commented 4 years ago

Are you running the tensorflow-gpu or the CPU-only version?

wradstok commented 4 years ago

I'm running the CPU-only version.


sumitpai commented 4 years ago

Did you pip install ampligraph or did you install from source? If from source, can you try setting the following in the constructor in Embedding Models.py:

...
self.tf_config = tf.ConfigProto(allow_soft_placement=True)
# Add these two lines in the constructor of Embedding Models.py to cap
# both TensorFlow thread pools at a single thread.
self.tf_config.intra_op_parallelism_threads = 1
self.tf_config.inter_op_parallelism_threads = 1
...

wradstok commented 4 years ago

It seems to be working: I made the modification you listed and now CPU usage stays pegged at 100% as expected.
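
For anyone who wants to double-check the thread count of their own process, a minimal sketch (psutil is not part of AmpliGraph and is used here only for illustration; the fit call is a placeholder):

import os
import psutil  # not an AmpliGraph dependency; used only to count OS threads

proc = psutil.Process(os.getpid())
print("threads before fit:", proc.num_threads())
# model.fit(X)  # placeholder: fit the AmpliGraph model here
print("threads after fit:", proc.num_threads())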

Thanks for the help!

sumitpai commented 4 years ago

Sorry for the delay in answering.