Open rishucoding opened 1 year ago
Hello,

I am working on analyzing DLRM (recommendation model) inference performance on CPUs. Recently, I found CPU performance-optimization commits in native PyTorch. Since performance improvements are landing in both native PyTorch and IPEX, I am curious how much IPEX outperforms native PyTorch. Could you share any statistics comparing IPEX with native PyTorch, either for individual operators such as EmbeddingBag or for end-to-end DLRM inference? Also, could you share whether IPEX optimizes all precisions (fp32, bf16, int8)?

Thanks,
Rishabh Jain

---

Maintainer reply:

Our goal is actually not to make IPEX better than PyTorch, but to contribute our optimizations to PyTorch so that the whole PyTorch community benefits from them out of the box. While waiting for the PRs to merge, we release the optimizations through IPEX first so users can benefit earlier. @zhuhaozhe, would you share the DLRM optimizations for the other precisions?
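For the EmbeddingBag part of the question, a quick way to get first-order numbers is a micro-benchmark on stock PyTorch. The sketch below (hypothetical sizes and iteration counts, not an official benchmark) times `nn.EmbeddingBag` inference in fp32 and under bf16 autocast on CPU; to compare against IPEX one would additionally wrap the model with `model = ipex.optimize(model)` after `eval()`, assuming `intel_extension_for_pytorch` is installed.

```python
# Minimal sketch: time nn.EmbeddingBag CPU inference in fp32 vs. bf16 autocast.
# Sizes and iteration counts below are illustrative, not from the issue.
import time
import torch
import torch.nn as nn

def bench(model, inputs, offsets, iters=100):
    # Warm up, then average forward-pass latency under no_grad.
    with torch.no_grad():
        for _ in range(10):
            model(inputs, offsets)
        start = time.perf_counter()
        for _ in range(iters):
            model(inputs, offsets)
    return (time.perf_counter() - start) / iters

torch.manual_seed(0)
model = nn.EmbeddingBag(num_embeddings=100_000, embedding_dim=128,
                        mode="sum").eval()
inputs = torch.randint(0, 100_000, (4096,))
offsets = torch.arange(0, 4096, 32)  # 128 bags of 32 indices each

fp32_t = bench(model, inputs, offsets)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    bf16_t = bench(model, inputs, offsets)

print(f"fp32: {fp32_t * 1e3:.3f} ms/iter, "
      f"bf16 autocast: {bf16_t * 1e3:.3f} ms/iter")
```

Absolute numbers will vary heavily with table size, pooling factor, and CPU ISA (e.g. AVX-512/AMX bf16 support), so end-to-end DLRM runs are still needed for a real comparison.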