intel / neural-speed

An innovative library for efficient LLM inference via low-bit quantization
https://github.com/intel/neural-speed
Apache License 2.0

Distributing tensors across NUMA nodes #207

Open shg8 opened 5 months ago

shg8 commented 5 months ago

I'm wondering how much support Neural Speed has for NUMA systems. The Advanced Usage page suggests that all tensors should be allocated on the first NUMA node via `numactl -m 0 -C 0-<physical_cores-1>`. Is there any benefit to doing this?
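For reference, a minimal sketch of that invocation, assuming a 2-socket machine with 24 physical cores per socket where cores 0-23 sit on socket 0; `./run_inference` is a placeholder for whatever Neural Speed entry point you launch:

```bash
# Bind all memory allocations to NUMA node 0 (-m 0) and pin the process
# to node 0's physical cores (-C 0-23), so every access stays local.
numactl -m 0 -C 0-23 ./run_inference [args...]
```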

kevinintel commented 5 months ago

Without NUMA binding, the performance will drop a lot.
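One way to see the gap is to inspect the topology first; `numactl --hardware` is a standard flag that prints each node's cores, memory, and the inter-node distance matrix:

```bash
# Print NUMA nodes, per-node memory, and the distance matrix; a remote
# distance well above the local one is the cross-socket penalty in question.
numactl --hardware
```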

shg8 commented 5 months ago

> Without NUMA binding, the performance will drop a lot.

I previously thought that this binds all memory allocations to the first NUMA node. However, if threads run on more than one node, that would significantly increase inter-node traffic, and threads whose cores have a different memory affinity cannot fully utilize the available memory bandwidth. Is my understanding correct? Could you add a bit more on why the allocations aren't interleaved across nodes instead?
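For comparison, this is the interleaved policy I had in mind (same placeholder binary; the flags are standard numactl):

```bash
# Spread pages round-robin across nodes 0 and 1 (--interleave=0,1) and
# allow threads on both sockets (-C 0-47), so the memory controllers of
# both nodes contribute bandwidth -- at the cost of remote accesses for
# roughly half of all reads.
numactl --interleave=0,1 -C 0-47 ./run_inference [args...]
```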

kevinintel commented 5 months ago

Intel Xeon often has 2 sockets; `-m 0` is meant to bind the memory to the first socket. There is communication overhead between the 2 sockets, so if you want to reduce inter-node traffic, you can try our TP (tensor parallelism).
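Roughly, TP shards the weights so each socket computes on its own shard out of local memory, and only activations cross the link. A hedged sketch of the usual launch pattern; the wrapper script, the `mpirun` launcher, and the binary name are illustrative, not Neural Speed's actual TP CLI:

```bash
#!/bin/bash
# bind_rank.sh -- illustrative per-rank NUMA binding wrapper.
# Rank 0 runs entirely on socket 0, rank 1 on socket 1: each rank
# allocates only from its local node (-m) and is pinned to that
# node's cores (-N), so weight shards never cross the interconnect.
NODE=$OMPI_COMM_WORLD_LOCAL_RANK   # set by Open MPI for each local rank
exec numactl -m "$NODE" -N "$NODE" ./run_inference_tp [args...]
```

Launched with something like `mpirun -np 2 ./bind_rank.sh` (assuming Open MPI).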