dsharlet opened 3 years ago
I just switched from the trapezoid method to BDF2 for numerical integration, which I just realized might help quite a bit with this. It cuts down a lot on the number of places the various functions that come out of analysis need to be evaluated, which is the main problem with dynamic parameters. Only reducing it by a factor of (strictly less than) 2 isn't going to solve it by itself, but maybe it will help...
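For reference, the source of the factor of (strictly less than) 2: the trapezoid rule evaluates the right-hand side at both ends of each step, while BDF2 evaluates it only at the new point. A toy sketch on `y' = -y` (Python, with fixed-point iteration standing in for the real Newton solve):

```python
import math

def f(t, y):
    # Test ODE: y' = -y, exact solution y(t) = exp(-t).
    return -y

def trapezoid_step(t, y, h):
    # Trapezoid rule: y1 = y0 + h/2 * (f(t0, y0) + f(t1, y1)).
    # Note f is evaluated at BOTH endpoints of the step.
    y1 = y
    for _ in range(50):  # fixed-point iteration on the implicit equation
        y1 = y + 0.5 * h * (f(t, y) + f(t + h, y1))
    return y1

def bdf2_step(t, y0, y1, h):
    # BDF2: y2 = (4*y1 - y0)/3 + (2h/3) * f(t2, y2).
    # Only ONE evaluation point of f per step.
    y2 = y1
    for _ in range(50):
        y2 = (4 * y1 - y0) / 3 + (2 * h / 3) * f(t + 2 * h, y2)
    return y2

h, steps = 0.01, 100

# Trapezoid integration of y' = -y from y(0) = 1 up to t = 1.
y = 1.0
for i in range(steps):
    y = trapezoid_step(i * h, y, h)
trap = y

# BDF2, bootstrapped with one trapezoid step for the second point.
ys = [1.0, trapezoid_step(0.0, 1.0, h)]
for i in range(steps - 1):
    ys.append(bdf2_step(i * h, ys[-2], ys[-1], h))
bdf = ys[-1]

exact = math.exp(-h * steps)
print(trap, bdf, exact)  # both should be close to exp(-1) ≈ 0.3679
```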
I've made some progress on this!
You can check out a demo here
I've basically used your approach from b9f03ca, which turned out to be very slow, as expected, but it was working fine for very small circuits. Except that something was wrong: the simulation was sometimes working, but potentiometers were amplifying the signal. It turned out to be a bug in the `Factor` function in `ComputerAlgebra\Extensions\Factor.cs`, which sometimes randomly removes variables. Anyway, I commented out factorization and it's working. And it's working faster than before; I suspect because the generated expressions are very repetitive and our current caching compiler handles them quite well. I'll test that with my decompilation trick later to confirm. There are some things that could be improved: currently I'm updating the variables once per `process` function call, but you could extract all sub-expressions that depend only on those variables and recompute them only when a parameter changes.
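As a sketch of that last idea (Python with invented names, not the actual LiveSPICE structure, where this hoisting would happen symbolically in the compiler): the parameter-only sub-expressions are cached and recomputed only when a knob moves, so the per-sample loop never re-derives them.

```python
class ParamCache:
    """Hypothetical cache of sub-expressions that depend only on a knob."""

    def __init__(self, pot):
        self.update(pot)

    def update(self, pot):
        # Recomputed only when the parameter changes; the names and
        # formulas here are made up purely for illustration.
        self.g = 1.0 / (pot * 10e3)   # "conductance" derived from the pot
        self.mix = pot / (1.0 + pot)  # some derived coefficient

def process(samples, cache):
    # The hot per-sample loop uses only the cached values; no
    # parameter-dependent work happens inside it.
    return [x * cache.mix + x * cache.g for x in samples]

cache = ParamCache(pot=0.5)
out1 = process([1.0, 2.0], cache)

cache.update(pot=0.25)  # knob moved: recompute cached sub-expressions once
out2 = process([1.0, 2.0], cache)
print(out1, out2)
```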
Wow, the demo results are incredible!! Very nice work.
And the circuit you're simulating is reasonably complex, too! I'm honestly shocked at how well this is working.
It's totally reasonable to remove `Factor`; aside from the bug, it makes sense that the compiler can do CSE better without it.
Have you found any circuits that perform worse with this? Is the solve time reasonable? But really, since this change means that adjusting pots doesn't require re-solving the circuit, a long solve time would be OK even if it took multiple seconds!
This is a really great result, awesome work. I feel silly for not figuring this out when I was working on it :) Also, nice work on reviving the old branch; I remember attempting to revive this approach and getting bogged down on something. I think we should do a bit more testing and consider merging your POC, even if simulation performance regresses in some cases. This is a really great improvement, and I really miss this approach, where pots could be moved during simulation without reinitializing it.
I see some seemingly spurious changes in your POC, e.g. swapping resistances and solving order for potentiometers - was this change necessary to get this working?
@mikeoliphant we had looked at this a while back. @Federerer has made some real progress on getting this working here :)
Having to re-solve on parameter changes is a major limitation, so this would clearly be a big win.
If I recall correctly, I added some code to delay parameter changes in the VST to keep them from updating the simulation too often. That could probably be removed if this fix is made, allowing immediate continuous control.
This would also make it more practical to consider exposing parameters from the VST to the DAW (they currently are not) to allow for automation. The issue still remains, though, that the parameters change when you load a new circuit - which creates ugly problems with the DAW integration.
Currently, when you change a potentiometer or other circuit control, the simulation has to be re-solved, which takes a while and loses the state of the circuit.
A while back, this worked differently: the potentiometers and other parameters were actual parameters of the simulation function, and could be adjusted smoothly and dynamically. This was a much better user experience and would make circuits like Wah pedals a lot more useful and interesting.
However, this approach really struggled, because the non-constant expressions would get gigantic during row reduction. I ended up dropping this approach in favor of the current simulation rebuilding to avoid this: https://github.com/dsharlet/LiveSPICE/commit/b9f03cab6475207496ada502aa58fda407467e51
There are two row reductions that currently happen:
(Background: http://dsharlet.com/2014/03/28/how-livespice-works-numerically-solving-circuit-equations/)
I think (1) is strictly necessary. However, I think (2) is (partially) not necessary, and even a bad idea sometimes. I think maybe we only need to partially row reduce to get the number of variables down to the number of non-linear equations. Currently, the whole system is row reduced such that after solving the non-linear system, there is only back substitution to do. But this might not be necessary: solving a linear system dynamically isn't bad (and could benefit a lot from SIMD), and might even have other benefits (we could choose pivots numerically, which isn't possible symbolically).
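To illustrate the pivoting point: a numeric runtime solve can pick the largest-magnitude pivot in each column per timestep, whereas a symbolic reduction has to commit to a pivot order up front. A generic dense-solver sketch (Python, not LiveSPICE code):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting on a dense system Ax = b.
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # Choose the largest-magnitude pivot in column k and swap rows;
        # this is the choice that can't be made symbolically.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = solve_linear(A, b)
print(x)  # x ≈ [0.8, 1.4]
```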
The problem here is I'm not actually sure how to do this. It's not possible to simply row reduce anywhere we'd like: we need a pivot in every column we want to zero out, and you can't necessarily get that pivot just by zeroing out every column before it.
We could just take the linear variables at their values from the previous timestep, solve the non-linear equations, and then solve the new linear equations. That would basically be a block-wise (with 2 blocks) Gauss-Seidel type thing. It seems a lot of hobbyist circuit simulators do something like this (usually without realizing it). I've generally tried to avoid Gauss-Seidel as it just seems sketchy and should require more iterations (linear vs. quadratic convergence). But maybe this doesn't really matter in practice.
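To illustrate the two-block idea on a toy fixed-point problem (pure Python, nothing to do with actual circuit equations): alternately solve a "nonlinear" block with the other block's variables held at their most recent values, then the "linear" block with the fresh result.

```python
import math

def block_gauss_seidel(x, y, iters=50):
    # Two-block Gauss-Seidel sweep: each block is solved exactly,
    # using the other block's most recent value.
    for _ in range(iters):
        x = math.cos(y)      # "nonlinear" block, y held fixed
        y = 0.5 * x + 1.0    # "linear" block, using the new x
    return x, y

x, y = block_gauss_seidel(0.0, 0.0)
# The fixed point satisfies x = cos(0.5*x + 1); convergence is only
# linear (error shrinks by a constant factor per sweep), which is the
# trade-off vs. Newton's quadratic convergence mentioned above.
print(x, y)
```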
I think "real" large SPICE simulators use Gauss-Seidel a lot more, which would make sense (large sparse systems are going to be bad with Newton's method). But they're also a lot slower...
It's also possible that (1) by itself is enough to make dynamic controls infeasible.