dsharlet / LiveSPICE

Real time SPICE simulation for audio signals
MIT License

TransientSolution.Solve sometimes produces invalid solutions. #177

Closed Federerer closed 1 year ago

Federerer commented 1 year ago

Today I learned that you have to be very careful when computing a time step, because it is implicitly converted to a BigRational. The debugger will show you a nice-looking double like 2.0833333333333333E-05, but under the hood it could be either 1/48000 or 6148914691236517/295147905179352825856 😒 This shouldn't make a difference, and for some circuits that's the case: just small differences in the generated numbers (first screenshot). But for some circuits I'm getting NaNs instead (second screenshot).

That could also explain why, in some cases, a circuit performs just fine without any oversampling but I can't get it to run with oversampling applied. That is generally counterintuitive, since a smaller time step should make things better, not worse 😉
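LiveSPICE is C#, but the same pitfall can be reproduced with Python's `fractions.Fraction`, which (like an implicit double-to-BigRational conversion) turns a double into its exact binary value rather than the "nice" rational you had in mind:

```python
from fractions import Fraction

# The rational a simulator presumably wants for a 48 kHz timestep:
exact = Fraction(1, 48000)

# What you get by converting the double 1.0 / 48000 instead:
# the nearest representable binary64 value, a huge dyadic rational.
from_double = Fraction(1 / 48000)

print(exact)        # 1/48000
print(from_double)  # 6148914691236517/295147905179352825856
```

The two values differ by less than one part in 2^52, yet the second one drags a 21-digit denominator through every subsequent symbolic computation.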

dsharlet commented 1 year ago

I expect that using doubles can sometimes produce slower computer algebra (because of rationals getting huge), but it's surprising to see NaN instead of a slightly different number. This seems like a bug.

In general, though, it's pretty helpful for sampling rates (or their inverses) to be integers (or at least simple rationals).

dsharlet commented 1 year ago

I've been looking into #170 and I think it may be the same as this issue: when a BigRational has a numerator or denominator so big it can't be converted to a double, the result is NaN. I'm working on a more reliable conversion to double now, which might fix both problems.

dsharlet commented 1 year ago

I think the above commit very likely fixes this issue. It makes sense: using a double with a near-irrational value for the timestep makes the rationals a lot bigger, and therefore more likely to overflow a double. With that case handled, the issue should be fixed. Please confirm whether that's the case for you.

BTW, I wonder if this will have a big impact on some of the other experiments you were doing. It makes sense that reordering the components would make certain rationals bigger or smaller; that may have been working around this issue too!

Federerer commented 1 year ago

After rebasing, everything seems to work OK except one circuit (which was always problematic). Doing some git bisect points to https://github.com/dsharlet/ComputerAlgebra/commit/28bc03b90fbbbddcbeef1c5558ab54ca8e9fd158 as the problem, so this is mostly about generating a slightly different expression for the same circuit. After trying some permutations I again found a working one. In some cases solving took a really long time (minutes). Interestingly, I found that a fast solve usually yields a stable, short, and performant solution, while a long solve gives quite the opposite ;)

I really have to take a look at the pivoting algorithm and compare what happens when solving a 'good' vs. a 'bad' permutation, because maybe we can do something clever when choosing between technically equivalent pivots so the expression sizes stay reasonable, as we discussed in https://github.com/dsharlet/LiveSPICE/discussions/173

dsharlet commented 1 year ago

There's a pretty easy tie breaker that implements that idea: choose the pivot that has the fewest nonzero entries when adding it to the remaining rows. I already did something like that for full pivoting; I think we can do the same thing in partial pivoting. I'm going to try it now.
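A toy illustration of that tie breaker (a hypothetical helper, not the ComputerAlgebra implementation, and simplified to a plain nonzero check rather than a symbolic pivot-validity test): among the rows with a usable entry in the pivot column, prefer the sparsest one, which tends to limit fill-in and expression growth during elimination.

```python
def choose_pivot(matrix, col, rows):
    """Pick a pivot row for `col` from candidate `rows`:
    require a nonzero entry in `col`, then break ties by
    preferring the row with the fewest nonzeros overall."""
    def nonzeros(r):
        return sum(1 for x in matrix[r] if x != 0)
    candidates = [r for r in rows if matrix[r][col] != 0]
    if not candidates:
        return None
    return min(candidates, key=nonzeros)

m = [
    [2, 1, 3, 4],  # 4 nonzeros
    [5, 0, 0, 0],  # 1 nonzero
    [7, 2, 0, 0],  # 2 nonzeros
]
print(choose_pivot(m, 0, [0, 1, 2]))  # 1: sparsest row with m[r][0] != 0
print(choose_pivot(m, 1, [0, 2]))     # 2: row 2 beats row 0 on sparsity
```

This is essentially a Markowitz-style selection rule from sparse LU factorization, restricted here to a single column as in partial pivoting.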

dsharlet commented 1 year ago

I've pushed a branch that pulls in a refactor of pivoting in ComputerAlgebra. At the very least, it should be easier to experiment with than before. I haven't been able to find a pivoting tie breaker that is consistently good, though I have seen some surprising wins (e.g. MXR Phase 90 solves 10% faster and simulates 30% faster). I'm also not sure this code is bug-free yet.

Federerer commented 1 year ago

I've run a quick test and it's quite CPU-heavy, I must say. For the more complicated circuits I didn't notice a better success rate than before. I really have to put together some example circuit permutations for A/B testing; maybe that will help find the issue.

dsharlet commented 1 year ago

Can you clarify what you mean by CPU-heavy? On that branch, I'm seeing a geomean change of +2% in solve time and +4% in simulation speed. So, a slight improvement in simulation speed at the cost of a slight regression in solve time.

edit: Also, it seems like it tends to be the bigger circuits that improve sim speed the most, while the circuits that were already super fast (10x+ real time) get slower.

dsharlet commented 1 year ago

Also, should we mark this particular bug as fixed, since it was the huge-rational issue? The pivoting issue seems separate.

dsharlet commented 1 year ago

There's a discussion on #130 for pivot selection improvements now.

dsharlet commented 1 year ago

I think we should consider the original issue here fixed :)