Closed in3otd closed 8 years ago
Confirmed. DC analysis runs OK, but transient has problems. It may be connected with #34 and #41. In `current`, increasing the minimum time step to 1e-11 allows the simulation to finish, even if quite a bit slower than `0.0.18`.
In `current` the `qucsator` messages say:

```
NOTIFY: TR1: average time-step 9.5057e-07, 123 rejections
NOTIFY: TR1: average NR-iterations 2.86312, 0 non-convergences
```
while in `0.0.18` it is

```
NOTIFY: TR1: average time-step 9.77517e-07, 124 rejections
NOTIFY: TR1: average NR-iterations 2.73998, 0 non-convergences
```

so not too different, on average. Likely the time-step algorithm in `current` tries to reduce the time step too much at some point...
With

```
~/git/qucs/qucs-core $ git diff qucs-0.0.17..HEAD src/trsolver.* src/transient.* src/dcsolver.*
```

the major changes I can see are these (in `trsolver`):
```diff
 nr_double_t LTEfactor = getPropertyDouble ("LTEfactor");
-nr_double_t dif, rel, tol, lte, q, n = NR_MAX;
+nr_double_t dif, rel, tol, lte, q, n = std::numeric_limits<nr_double_t>::max();
 int N = countNodes ();
 int M = countVoltageSources ();
@@ -889,7 +897,7 @@ nr_double_t trsolver::checkDelta (void)
 }
 dif = x->get (r) - SOL(0)->get (r);
-if (finite (dif) && dif != 0)
+if (std::isfinite (dif) && dif != 0)
```
Could it be that `NR_MAX` and `std::numeric_limits<nr_double_t>::max()` are different and affect the solver?
Edit: the changes are in `trsolver`.
For me, `0.0.18` is also failing (OS X, 64-bit). Setting `MinStep = 1e-11` (it was 1e-16), the TR runs OK.
Are you on a 32- or 64-bit system?
I am getting this with `0.0.18`, OS X 64-bit, `MinStep = 1e-11`:

```
NOTIFY: DC1: creating node list for DC analysis
NOTIFY: DC1: solving DC netlist
NOTIFY: DC1: convergence reached after 24 iterations
NOTIFY: TR1: creating node list for initial DC analysis
NOTIFY: TR1: solving initial DC netlist
NOTIFY: TR1: creating node list for transient analysis
NOTIFY: TR1: solving transient netlist
NOTIFY: TR1: average time-step 1.65563e-06, 49 rejections
NOTIFY: TR1: average NR-iterations 2.61258, 0 non-convergences
```
The `nasolver` also had some changes, but for this case `convHelper` is set to `none`, so that code is not used in this example.
```
git diff qucs-0.0.17..HEAD src/nasolver.*
```
...32-bit system here, which might partly explain it.
What is strange is that here, with `MinStep = 1e-11`, `current` finishes but is quite a bit slower than `0.0.18`.
I am currently running a script with `git bisect run` to try to find the offending commit here.
...there is definitely something strange going on here...
For `qucs-0.0.18` on my system:

- `CXXFLAGS="-O2 -march=i486 -mtune=i686" ./configure` (these were the options used to compile the system-wide install I use for `0.0.18`): it runs OK, as expected.
- `./configure --enable-maintainer-mode`: it fails (this is how I usually compile the code from GitHub).
- `CXXFLAGS="-O2 -march=i486 -mtune=i686" ./configure --enable-maintainer-mode`: it fails, probably because `-O0` is actually used.

For `current`, it fails with every configuration option above, on my system.
Doing a `git bisect` using `CXXFLAGS="-O2 -march=i486 -mtune=i686" ./configure`, `git` says that d695f3b20860e7aac984e76fd00102bd1eadc0d7 is the first bad commit, but this is likely not really meaningful; it probably just happens that some optimization is or is not performed with these code changes.
I'm afraid we should probably archive this as a typical weakness of the current transient code, for the moment...
I also always build and install with `--enable-maintainer-mode` on. It is time to have debug and release versions...
It is likely that the code is broken. We cannot afford to keep rolling with it. See gcc-optimizations-cause-app-to-fail for some further comments.
@in3otd , can you try to add more compiler checks to see if we can spot some issues like overflows, ranges, ... ?
Please post the relevant `Configure Information` you get after running `./configure`. I will try to reproduce on my side.
Maybe we can walk through the code in a debugger with `-g -O0` and `-g -O2` to see if we can spot any differences.
For `qucs-core`, running `./configure --enable-maintainer-mode` I get

```
Configure Information:
C Compiler   : gcc
DEFS         : -DHAVE_CONFIG_H
CPPFLAGS     :
CFLAGS       : -g -O2 -pipe -W -Wall -Wmissing-prototypes
C++ Compiler : clang++
DEFS         : -DHAVE_CONFIG_H
CPPFLAGS     :
CXXFLAGS     : -g -O2 -std=gnu++11 -D__STRICT_ANSI__ -O0 -pipe -fno-exceptions -ldl -D__STRICT_ANSI__ -W -Wall
Linker       : /usr/i486-slackware-linux/bin/ld
LDFLAGS      :
LIBS         : -lm
```
...but then there might be differences between the default options, like we have seen in MinGW recently...
I have made a small modification to the transient solver to fix this; there might be a better way to handle it. Actually, I feel the entire time-step control of the transient solver should be reworked, but that would be a much larger effort...
Has anyone tried to run the example again?
just tried again now :grin: as before, with the current master the transient simulation fails with a "Jacobian singular", with #202 it runs fine.
I kicked off Travis on PR 202 to see what happens.
Seems to be fixed after #202. Closing.
Here is a Qucs archive with a project using an LM358 SPICE model. The transient simulation runs fine in `0.0.18` but stops with `Jacobian singular at t = 6.091e-05` in `current`, using the default Transient settings (`Trapezoidal` integration). If `Euler` integration is selected, the simulation runs, but very slowly. In `0.0.18`, using `Euler`, it is much faster.