PyTorch currently does not package the TBB headers, but it requires them at run time because it sets AT_PARALLEL_NATIVE_TBB in the supplied Config.h. Details: https://github.com/pytorch/pytorch/blob/b54072a6f9993e1d65836c6be095b30b42010065/.jenkins/pytorch/test.sh#L181-L184
This PR therefore adds a patch that packages the TBB headers.
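For context, whether a given PyTorch build will look for TBB at run time can be checked from Python via torch.__config__.parallel_info(), which reports the parallel backend the library was compiled with (the exact wording of the output varies across PyTorch versions):

```python
import torch

# parallel_info() returns a string describing ATen's compiled parallel
# settings; a TBB build mentions TBB in the "ATen parallel backend"
# line, confirming that AT_PARALLEL_NATIVE_TBB is in effect.
print(torch.__config__.parallel_info())
```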
Description
As of now, PyTorch must be run with
OMP_NUM_THREADS=1
to avoid the following warning and probable hangs:
OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.
This, however, makes training too slow. This PR enables the use of TBB for intra-op parallelism; see https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html for background. An illustration of the usage this unblocks follows below.
Resolves #34
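For illustration only (not part of the patch), this is the kind of usage the change is meant to unblock: running with multiple intra-op threads instead of pinning OMP_NUM_THREADS=1. The thread count below is an arbitrary example value.

```python
import torch

# Previously the safe configuration was to pin everything to one
# thread by exporting OMP_NUM_THREADS=1 before starting Python.
# With a TBB-backed build, multiple intra-op threads can be used.
torch.set_num_threads(8)  # 8 is an arbitrary example count
print("intra-op threads:", torch.get_num_threads())

# A matmul large enough to exercise intra-op parallelism.
a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)
c = a @ b
print(c.shape)
```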