bend-n opened 7 months ago
Hi @bend-n,
I pulled down your PR and benchmarked it on the original benchmark machine.
I'm not sure there's any discernible difference according to these plots. It may be that the plots just aren't conveying a speedup, so benchmarking phastft alone, before and after your change, would probably be a better way to tell whether there's a difference.
`lscpu`:

```
Architecture:             x86_64
  CPU op-mode(s):         32-bit, 64-bit
  Address sizes:          48 bits physical, 48 bits virtual
  Byte Order:             Little Endian
CPU(s):                   16
  On-line CPU(s) list:    0-15
Vendor ID:                AuthenticAMD
  Model name:             AMD Ryzen 9 7950X 16-Core Processor
    CPU family:           25
    Model:                97
    Thread(s) per core:   1
    Core(s) per socket:   16
    Socket(s):            1
    Stepping:             2
    Frequency boost:      enabled
    CPU(s) scaling MHz:   76%
    CPU max MHz:          5879.8818
    CPU min MHz:          3000.0000
    BogoMIPS:             8983.04
    Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization features:
  Virtualization:         AMD-V
Caches (sum of all):
  L1d:                    512 KiB (16 instances)
  L1i:                    512 KiB (16 instances)
  L2:                     16 MiB (16 instances)
  L3:                     64 MiB (2 instances)
NUMA:
  NUMA node(s):           1
  NUMA node0 CPU(s):      0-15
Vulnerabilities:
  Gather data sampling:   Not affected
  Itlb multihit:          Not affected
  L1tf:                   Not affected
  Mds:                    Not affected
  Meltdown:               Not affected
  Mmio stale data:        Not affected
  Retbleed:               Not affected
  Spec rstack overflow:   Mitigation; safe RET
  Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:             Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                  Not affected
  Tsx async abort:        Not affected
```
`sudo lshw -short -C memory`:

```
H/W path             Device     Class          Description
=================================================================
/0/0                            memory         64KiB BIOS
/0/b                            memory         1MiB L1 cache
/0/c                            memory         16MiB L2 cache
/0/d                            memory         64MiB L3 cache
/0/11                           memory         64GiB System Memory
/0/11/0                         memory         [empty]
/0/11/1                         memory         32GiB DIMM Synchronous Unbuffered (Unregistered) 6000 MHz (0.2 ns)
/0/11/2                         memory         [empty]
/0/11/3                         memory         32GiB DIMM Synchronous Unbuffered (Unregistered) 6000 MHz (0.2 ns)
```
this is the difference between the two images: [diff image]
there's some change here, in the phastft section -- I think it's hard to tell, as this plot is relative to rustfft and also has fftw in it, which makes it hard to see anything.
Hi @bend-n,
I'll try plotting the before/after in a different way now.
Thank you!
@bend-n
One thing that may be worth looking into is using a zipped chunks-exact iterator, similar to what RustFFT implemented. Unlike the standard library's `std::slice::ChunksExact` implementation, what RustFFT has seems to be a clever way to get around the use of div/mod.
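Roughly, the standard-library version of that shape looks like this (a sketch for illustration only; not RustFFT's actual implementation, and the function name is made up):

```rust
/// Illustrative only: walk `reals` and `imags` in lock-step pairs
/// using the standard library's chunks-exact iterators, zipped.
pub fn butterfly_pairs(reals: &mut [f64], imags: &mut [f64]) {
    reals
        .chunks_exact_mut(2)
        .zip(imags.chunks_exact_mut(2))
        .for_each(|(re, im)| {
            let (r0, r1) = (re[0], re[1]);
            let (i0, i1) = (im[0], im[1]);
            // Size-2 butterfly: (z0, z1) -> (z0 + z1, z0 - z1).
            re[0] = r0 + r1;
            im[0] = i0 + i1;
            re[1] = r0 - r1;
            im[1] = i0 - i1;
        });
}
```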
I can't say whether this will have any effect on performance, so it may be worth profiling first.
Best, Saveliy
If you're willing to use more nightly things, as this PR proposes, I'd recommend `as_chunks(_mut)` instead of any of the chunking iterators.

Basically, rather than using `foo.chunks_exact(8)`, use `let (foo_chunks, _ignored_tail) = foo.as_chunks::<8>();`. Then you can iterate over the `foo_chunks: &[[T; N]]` using a slice iterator instead of a chunking iterator, and the slice iterators always get the most optimization attention of anything in the whole standard library.
(The div/mod issue is probably not relevant for anything that's `.as_chunks::<$lanes>()`, at least, since div/mod by a power of two is always optimized to bitops by LLVM when it sees the constant, and with `as_chunks` it'll always see the constant.)
I don't know how useful it'd be for performance, though it certainly wouldn't hurt, but I think it'd be nice for the code, since instead of things like

```rust
re_s0.copy_from_slice((real_c0 + real_c1).as_array());
```

you could do

```rust
*re_s0 = (real_c0 + real_c1).to_array();
```

and have it be more obvious that there are no slice length concerns.
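Concretely, the pattern could look like this (a minimal sketch, assuming nightly `portable_simd` and `slice_as_chunks`; `add_blocks` and its shape are made up for illustration, not the PR's actual kernel):

```rust
#![feature(portable_simd, slice_as_chunks)]
use std::simd::f64x8;

/// Illustrative only: add matching 8-lane blocks of `src` into `dst`.
pub fn add_blocks(dst: &mut [f64], src: &[f64]) {
    let (dst_chunks, _tail) = dst.as_chunks_mut::<8>();
    let (src_chunks, _tail) = src.as_chunks::<8>();
    for (d, s) in dst_chunks.iter_mut().zip(src_chunks) {
        let sum = f64x8::from_array(*d) + f64x8::from_array(*s);
        // `d` is a `&mut [f64; 8]`, so a plain assignment works and
        // there is no slice-length question for the optimizer.
        *d = sum.to_array();
    }
}
```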
Hi @scottmcm,

I wasn't aware of `as_chunks`/`as_chunks_mut`, so thank you for bringing this to my attention! I just tried out your suggestion in godbolt. I only did a comparison view of `fft_chunk_2` with/without `as_chunks_mut`, and it looks like there's potentially some benefit. You can have a look for yourself here. Although I've been let down by LLVM MCA a lot in the past, it does hint at fewer cycles being used for your version.
Performance is certainly a feature, so anything we can do to help it would be great. At this point, we need to add benchmarks using Criterion so we can reliably detect performance changes like this.
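Something along these lines could be a starting point (a sketch only: the bench file name and import path are hypothetical, and it assumes `fft_chunk_2` is reachable from the bench target):

```rust
// benches/fft_chunk.rs -- hypothetical bench target
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use phastft::fft_chunk_2; // hypothetical import path

fn bench_fft_chunk_2(c: &mut Criterion) {
    let n = 1 << 12;
    let mut reals = vec![1.0_f64; n];
    let mut imags = vec![1.0_f64; n];
    c.bench_function("fft_chunk_2/4096", |b| {
        b.iter(|| fft_chunk_2(black_box(&mut reals), black_box(&mut imags)))
    });
}

criterion_group!(benches, bench_fft_chunk_2);
criterion_main!(benches);
```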
Thank you for bringing this up and let me know your thoughts!
Best, Saveliy
I'll have to dig in more to try to figure out the implications. I didn't do any godbolt explorations before posting the above 🙃 If nothing else, it's interesting that the assembly length is so different.
One thing that jumps out in the godbolt:
```rust
let (re_chunks, []) = reals.as_chunks_mut::<2>() else {
    panic!("slice didn't have even length")
};
```
Note that this is a behavioural difference from the previous code, since `chunks_exact_mut(2)` will just ignore the tail if the length isn't a multiple of two. (Though you can get it from https://doc.rust-lang.org/std/slice/struct.ChunksExactMut.html#method.into_remainder if needed.)
But that's why I put `let (re_chunks, _) = reals.as_chunks_mut::<2>();` in my post above, for consistency with the `chunks_exact_mut` behaviour.
It was unclear to me, from a quick scan of the code, what you expected of the lengths of the various slices. Using `zip` also just silently uses the length of the shorter, which may or may not be intended -- I don't know.
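(If the slices are always meant to match, asserts would make that contract explicit rather than relying on `zip` to truncate -- a sketch, reusing the `fft_chunk_2` signature from this thread:)

```rust
pub fn fft_chunk_2(reals: &mut [f64], imags: &mut [f64]) {
    // Fail loudly on mismatched or odd lengths instead of silently
    // processing only the shorter prefix.
    assert_eq!(reals.len(), imags.len());
    assert_eq!(reals.len() % 2, 0);
    // ... butterfly body as elsewhere in this thread ...
}
```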
Overall, you're probably right and the best first step is reliable benchmarks to validate things like this.
@scottmcm

Not sure if this is what you meant, but I just tried out `let (re_chunks, _) = reals.as_chunks_mut::<2>();` in lieu of what I had before:
```rust
#![feature(slice_as_chunks)]

pub fn fft_chunk_2(reals: &mut [f64], imags: &mut [f64]) {
    // The tails are empty as long as the lengths are multiples of two.
    let (re_chunks, _) = reals.as_chunks_mut::<2>();
    let (im_chunks, _) = imags.as_chunks_mut::<2>();
    re_chunks
        .iter_mut()
        .zip(im_chunks.iter_mut())
        .for_each(|(reals_chunk, imags_chunk)| {
            let z0_re = reals_chunk[0];
            let z0_im = imags_chunk[0];
            let z1_re = reals_chunk[1];
            let z1_im = imags_chunk[1];
            // Size-2 butterfly: (z0, z1) -> (z0 + z1, z0 - z1).
            reals_chunk[0] = z0_re + z1_re;
            imags_chunk[0] = z0_im + z1_im;
            reals_chunk[1] = z0_re - z1_re;
            imags_chunk[1] = z0_im - z1_im;
        });
}
```
llvm-mca output (using `-mcpu=znver4`):
```
Iterations:        100
Instructions:      6700
Total Cycles:      1151
Total uOps:        6700

Dispatch Width:    6
uOps Per Cycle:    5.82
IPC:               5.82
Block RThroughput: 11.2
```
Available via godbolt here.
The cycle count is $\approx\frac{1}{3}$ of what it was before. Of course, the proof is in the pudding, so we'd have to see whether that translates into any real-world perf gains.
> It was unclear to me, from a quick scan of the code, what you expected of the lengths of the various slices. Using zip also just silently uses the length of the shorter, which may or may not be intended -- I don't know.
I'm not sure what you meant here, but I'd be glad to do my best to explain any part of the code if you can provide further clarification.
> Overall, you're probably right and the best first step is reliable benchmarks to validate things like this.
I agree -- adding something for before/after comparisons using criterion (or perhaps another crate if you have suggestions) would be the best first step. In the interim, feel free to open a separate issue for your idea, and we'll track it there.
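Criterion's built-in baselines should cover the before/after workflow in the meantime: run `cargo bench -- --save-baseline before` on the current code, then re-run with `cargo bench -- --baseline before` after a change to get a per-benchmark comparison against the saved results.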
Thank you for your insight!
Best, Saveliy
[benchmark plots: old vs. new]