We are optimizing a pandas pipeline that processes 300 million records on an Intel Xeon machine with 2 sockets, each having 32 physical cores. The cores are hyper-threaded, so the system has 128 logical cores in total (2 x 32 x 2). Physical core utilization while executing the workload ranges from 10 to 15%.
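As a point of reference, a minimal sketch of one way to sample per-core utilization while the workload runs; psutil is an assumption here and is not part of the pipeline itself:

    import os
    import psutil

    print("logical cores:", os.cpu_count())                    # 128 on this machine
    print("physical cores:", psutil.cpu_count(logical=False))  # 64 on this machine

    # Average busy percentage per logical core over a 5-second window.
    per_core = psutil.cpu_percent(interval=5, percpu=True)
    print("mean utilization: %.1f%%" % (sum(per_core) / len(per_core)))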
Stats about data
Some columns have low cardinality, for example the State or City columns: across all 300 million records only a handful of distinct values appear.
Other columns have much higher cardinality, for example 300 million records yielding around 1 million distinct column values.
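A minimal sketch of how this cardinality can be checked, and how the low-cardinality columns could be stored more compactly; the file path and the high-cardinality column name customer_id are placeholders, not the real schema:

    import modin.pandas as pd

    df = pd.read_parquet("records.parquet")   # placeholder path, ~300M rows

    # Count distinct values in the columns of interest.
    for col in ["State", "City", "customer_id"]:
        print(col, df[col].nunique())

    # Low-cardinality columns can be stored as categoricals to reduce memory traffic.
    df["State"] = df["State"].astype("category")
    df["City"] = df["City"].astype("category")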
What was tried out
Installed Modin and imported modin.pandas as pd - no major improvement in timing.
Added the line ray.init(ignore_reinit_error=True, num_cpus=64), which improved performance by 15 to 20%. The data loading time came down considerably because of the larger number of parallel workers.
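Concretely, these two changes amount to something like the following (the input file name is a placeholder):

    import ray
    ray.init(ignore_reinit_error=True, num_cpus=64)   # start Ray before any dataframe work

    import modin.pandas as pd   # drop-in replacement for "import pandas as pd"

    df = pd.read_csv("records.csv")   # placeholder input; the load is split across Ray workers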
Tried adjusting cfg.NPartitions.put and cfg.RangePartitioning.put. Performance degraded, and finding the right balance looks very difficult.
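The tuning attempt looked roughly like this; the values shown are illustrative, not the ones that were finally kept:

    import modin.config as cfg

    cfg.NPartitions.put(128)          # number of partitions Modin splits the frame into
    cfg.RangePartitioning.put(True)   # use range-partitioning implementations where supported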
Tried num_cpus=32, which led to a slight degradation in performance. Also tried NUMA pinning to limit the job to a single socket of the two-socket system.
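The single-socket experiment combined a smaller Ray worker pool with NUMA pinning applied from the shell; pipeline.py is a placeholder script name:

    # Launched as: numactl --cpunodebind=0 --membind=0 python pipeline.py
    import ray
    ray.init(ignore_reinit_error=True, num_cpus=32)   # one worker per physical core of one socket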
Not sure what else can be done to improve core utilization, other than running multiple jobs concurrently or changing the code to manage the cores explicitly inside the program (a sketch of the first option follows below).
We expect core utilization of at least 70 to 80%, with the job taking roughly 20% of the current time.
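A rough sketch of the "multiple jobs concurrently" idea: split the input into independent chunks and process them with plain pandas in separate worker processes. The chunk file names and the process_chunk body are hypothetical placeholders for the real pipeline steps.

    from concurrent.futures import ProcessPoolExecutor
    import pandas as pd

    def process_chunk(path):
        df = pd.read_parquet(path)
        # ... the per-chunk part of the pipeline would go here ...
        return len(df)

    if __name__ == "__main__":
        paths = [f"chunk_{i:03d}.parquet" for i in range(64)]   # hypothetical pre-split input
        with ProcessPoolExecutor(max_workers=64) as pool:
            counts = list(pool.map(process_chunk, paths))
        print("rows processed:", sum(counts))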
INSTALLED VERSIONS
commit : 3e951a63084a9cbfd5e73f6f36653ee12d2a2bfa
python : 3.10.12.final.0
python-bits : 64
OS : Linux
OS-release : 6.8.0-40-generic
Version : #40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30 17:30:19 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
modin : 0.32.0
ray : 2.34.0
dask : 2024.7.0
distributed : 2024.7.0
pandas dependencies
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 70.1.0
pip : 24.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.26.0
pandas_datareader : None
adbc-driver-postgresql : None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.5.0
fsspec : 2024.3.1
gcsfs : None
matplotlib : 3.8.4
numba : 0.60.0
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
pyarrow : 16.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None