thenfour opened 1 year ago
with the issue mostly understood, "under control", and no additional actions to be taken, closing.
reopening because I have something.
The built-in audio lib now has bandlimited waveforms, and they're all done via 32-bit fixed-point calculations. Some experimenting shows that this indeed results in more stable feedback and FM behavior. So it's what I was converging on anyway, but I was afraid it was too much work for an experiment. Well, it works, and I'll be adapting this oscillator class to suit my needs. Bonus: it's probably much more performant.
When you push FM too far in FM8, the sound breaks up and eventually turns into almost white noise. Something similar happens on clarinoid, but the transition from consonance to noise is not pretty at all and just sounds like you're gradually mixing in noise.
I suspect it's related both to the quality of the waveforms and to the method of computation. Phase management is probably necessary to keep the waveform stable and clean (see FM8's waveforms).
Note that this little Reaper JS demonstrates correct FM feedback, and it's what I believe I'm doing in clarinoid. It also blends in the sample from 2 samples back, to help smooth the breakup; a blend of about 50% of N-2 is the most stable.
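For reference, here's the shape of that feedback scheme in C++ (a minimal sketch of the idea, not the Reaper JS or the actual clarinoid oscillator; the struct and parameter names are made up):

```cpp
#include <cmath>

// Minimal single-operator FM feedback sketch (not the actual clarinoid code).
// The feedback term fed into the phase is a blend of the previous output (n-1)
// and the output from two samples back (n-2); ~50% of n-2 seemed most stable.
struct FeedbackOsc {
    double phase = 0.0; // cycle position in [0,1)
    double y1 = 0.0;    // output at n-1
    double y2 = 0.0;    // output at n-2

    // freq & sampleRate in Hz, fbAmt = feedback index, blend = fraction of n-2 in the feedback
    double ProcessSample(double freq, double sampleRate, double fbAmt, double blend = 0.5) {
        constexpr double kTwoPi = 6.283185307179586;
        double fb = fbAmt * ((1.0 - blend) * y1 + blend * y2); // smoothed feedback term
        double out = std::sin(kTwoPi * phase + fb);
        y2 = y1;
        y1 = out;
        phase += freq / sampleRate;
        phase -= std::floor(phase); // wrap to [0,1)
        return out;
    }
};
```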
Update: with identical code at double precision, the issue still exists. Actually, the Teensy waveform looks OK using an internal oscilloscope, and #192 reveals that external oscilloscopes won't be reliable. Especially regarding the feedback calculations, the internal calculations should all work fine.
This was tested with an isolated code example, sending audio data directly to the DAC, not the bommanoid application.
What I can say is that the principle is correct, but there's something deeper to fix. It may be how floating point is handled on the CPU itself. I don't think it's related to the 16-bit processing stage, because all of this should be confined to the float calculations.
I could do waveform diffs or something but what would I get out of that? I guess I could narrow the error down to a single initial operation that causes the breakdown. Would I be able to attribute that to the issue? How would I even approach fixing it? It's not like I'm going to start using a different floating point library or something. Maybe some ARM-specific flags or modes which affect floating point calculations in certain ranges? Or related to denormalized values?
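On the denormals question specifically, the ARM-side knob I'm aware of is the flush-to-zero bit in FPSCR. A hedged sketch (assuming a Cortex-M target with an FPU, e.g. Teensy 4.x, and GCC inline asm; I haven't verified this changes anything here):

```cpp
#include <cstdint>

// Hedged sketch: enable flush-to-zero on an ARM Cortex-M FPU by setting the FZ
// bit (bit 24) of FPSCR. Only relevant if denormalized floats are actually in
// play. ARM + GCC/Clang inline asm only.
static void EnableFlushToZero() {
    uint32_t fpscr;
    __asm volatile("vmrs %0, fpscr" : "=r"(fpscr));  // read FPSCR
    fpscr |= (1u << 24);                             // FZ: flush denormals to zero
    __asm volatile("vmsr fpscr, %0" : : "r"(fpscr)); // write it back
}
```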
Update: comparing identical code against x86, which looks and sounds as I expect, I see discrepancies. Tracking every calculation involved, the biggest culprits are multiplication & std::sin, but single operations in the worst case have 0.0001 (1E-4) error. That's kinda more than I would expect, but also within reasonable range for worst-case, especially when a trig operation is involved. I can see from tabular output that the feedback causes these errors to blow up. Nothing related to denormalization, just lots of accumulating tiny errors.
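To show what I mean by the errors blowing up, here's a hedged standalone sketch (not the actual comparison harness, and the frequency/feedback values are made up) that runs the same naive feedback recurrence in float and in double and prints how far they drift apart:

```cpp
#include <cmath>
#include <cstdio>

// Hedged sketch: run the same FM feedback recurrence in float and double and
// watch the divergence grow. Not the real harness, just an illustration of how
// tiny per-operation errors accumulate through feedback.
int main() {
    const double sampleRate = 48000.0, freq = 220.0, fbAmt = 1.5;
    constexpr double kTwoPi = 6.283185307179586;

    double phaseD = 0.0, fbD = 0.0;
    float  phaseF = 0.0f, fbF = 0.0f;

    for (int n = 0; n < 2000; ++n) {
        double outD = std::sin(kTwoPi * phaseD + fbAmt * fbD);
        float  outF = std::sin(float(kTwoPi) * phaseF + float(fbAmt) * fbF);
        fbD = outD;
        fbF = outF;
        phaseD += freq / sampleRate;        phaseD -= std::floor(phaseD);
        phaseF += float(freq / sampleRate); phaseF -= std::floor(phaseF);

        if (n % 200 == 0)
            std::printf("n=%5d  |double - float| = %g\n", n, std::fabs(outD - double(outF)));
    }
    return 0;
}
```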
So even if there is a fix, it would mostly be a matter of luck whether it changes the behavior for the better. It's a shame that the actual behavior is so chaotic though. Maybe there's a way to force the chaos out, like resetting the feedback every wave cycle.
Update: resetting the feedback every cycle does seem to be more stable, but cycles don't start on even sample boundaries, so the wave cycles are still not consistent. And that's really the crux: there's no way to stabilize this unless cycles land on even sample boundaries; otherwise the calculations will always diverge.
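Roughly what I mean by the reset (a hedged sketch, not the actual code); the leftover fraction at the phase wrap is exactly the part that never lands on a sample boundary:

```cpp
#include <cmath>

// Hedged sketch of "reset feedback state at each cycle start". Because the phase
// wrap rarely lands exactly on a sample boundary, each cycle still begins from a
// slightly different fractional phase, so per-cycle waveforms never match exactly.
struct FeedbackOscReset {
    double phase = 0.0, y1 = 0.0, y2 = 0.0;

    double ProcessSample(double freq, double sampleRate, double fbAmt) {
        double out = std::sin(6.283185307179586 * phase + fbAmt * (0.5 * y1 + 0.5 * y2));
        y2 = y1;
        y1 = out;
        phase += freq / sampleRate;
        if (phase >= 1.0) {
            phase -= 1.0;   // wrap; the leftover fraction differs every cycle
            y1 = y2 = 0.0;  // reset feedback history at the (approximate) cycle start
        }
        return out;
    }
};
```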
Aggressive oversampling might be a step in the right direction, but it's also getting hacky.
Another idea is to do this in 32-bit fixed-point, where the calculations may somehow be more stable?
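For what it's worth, one concrete reason fixed-point might help: a 32-bit integer phase accumulator wraps exactly (mod 2^32), so the phase sequence repeats bit-for-bit every cycle, which removes one source of drift. A hedged sketch of just that part (waveform/sine lookup omitted):

```cpp
#include <cstdint>

// Hedged sketch: a 32-bit fixed-point phase accumulator. Integer addition wraps
// exactly on overflow, so the phase sequence is fully deterministic, unlike the
// float version where rounding depends on the current phase value.
struct FixedPhase {
    uint32_t phase = 0;      // full cycle = 2^32
    uint32_t increment = 0;  // (freq / sampleRate) * 2^32

    void SetFrequency(double freq, double sampleRate) {
        increment = uint32_t((freq / sampleRate) * 4294967296.0); // assumes freq < sampleRate
    }
    uint32_t Next() {
        phase += increment;  // wraps exactly on overflow
        return phase;
    }
};
```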