Open — shaomeng opened this issue 1 year ago
Here is a runtime comparison using the command-line utility SPERR. The same software was compiled on both x86 and Arm64 MacBooks, and runtimes were collected on an M1 MacBook across a series of lossy compression error tolerance levels; the comparison is shown in the graph below. It shows a performance difference in the 20% to 35% range.
The raw numbers are kept in this Google Sheet.
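For reproducibility, the measurement loop can be sketched as below. The `sperr3d` binary name and the `--tolerance` flag are placeholders standing in for the actual SPERR CLI, which may differ; substitute the real binary and options under test.

```shell
# Hypothetical timing loop: BIN and the --tolerance flag stand in for the
# real SPERR CLI; substitute the actual binary and options under test.
bench() {
  bin=${BIN:-./sperr3d}               # placeholder binary name
  for tol in 1e-1 1e-2 1e-3; do
    start=$(date +%s)
    "$bin" --tolerance "$tol" input.raw >/dev/null 2>&1
    end=$(date +%s)
    echo "tolerance=$tol elapsed=$((end - start))s"
  done
}
```

Running the same loop on the x86 and Arm64 builds of the same binary, on the same machine, gives the per-tolerance numbers plotted above.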
@shaomeng Is CPU-based data compression the right metric to be comparing? As @stasj mentioned, our data processing mostly happens on the GPU, which is CPU-agnostic.
> @shaomeng Is CPU-based data compression the right metric to be comparing?

Not sure. This is the best data point I have right now.
I'm not sure if the current M1 offering is a native build or not. @sgpearse can help clarify, and close the issue if it's already a native M1 build.
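One quick way to check is to ask the system what architecture the installed executable actually targets; `file` reports this on both macOS and Linux, and macOS's `lipo` lists the slices of a universal binary. The application path in the comment is a placeholder.

```shell
# Report the architecture a binary was compiled for. On an M1 Mac,
# "Mach-O 64-bit executable x86_64" means it runs under Rosetta 2,
# while a native build shows "arm64".
file "$(command -v sh)"
# macOS only: list every slice in a (possibly universal) binary;
# the path below is a placeholder for the installed executable.
#   lipo -archs /path/to/installed/binary
```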
The current Apple M1 installer still targets the x86 architecture. Relying on Apple's Rosetta 2 instruction translation likely incurs a sizeable performance hit, so a native build targeting the M1 (Arm64) architecture should bring a significant performance boost.
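For a CMake-based project, producing a native build is mostly a configure-time flag; a sketch (the `CMAKE_OSX_ARCHITECTURES` variable is standard CMake, but the project layout here is assumed):

```shell
# Configure a native arm64 build of the project in the current directory.
cmake -S . -B build-arm64 -DCMAKE_OSX_ARCHITECTURES=arm64
# Or a universal binary covering both architectures in one installer:
cmake -S . -B build-universal -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
```

A universal binary would let one installer serve both Intel and Apple Silicon users without the translation penalty.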
Note: I don't have concrete numbers on how big the actual performance difference is. It would be interesting to use existing software that ships both x86 and M1 builds to quantify the difference.