arvoelke opened this 7 years ago
Merging #119 into master will decrease coverage by 0.3%. The diff coverage is 60%.
@@ Coverage Diff @@
## master #119 +/- ##
==========================================
- Coverage 100% 99.69% -0.31%
==========================================
Files 28 28
Lines 1321 1331 +10
Branches 155 156 +1
==========================================
+ Hits 1321 1327 +6
- Misses 0 3 +3
- Partials 0 1 +1
Impacted Files | Coverage Δ |
---|---|
nengolib/signal/lyapunov.py | 95.91% <60%> (-4.09%) :arrow_down: |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Last update c20ac96...e73fa47.
This may not be the correct solution. On a different machine, higher-order (e.g., 60) Padé delays have Gramian matrices with extremely large negative eigenvalues (e.g., -1e6). These numerical issues require a more careful examination. In the meantime, throwing a numerical exception seems reasonable (although admittedly not user-friendly).
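For reference, here is a minimal sketch of how the Gramian eigenvalues described above could be inspected, assuming nengolib's `PadeDelay` and SciPy's continuous Lyapunov solver; the helper `check_gram_eigenvalues` is hypothetical and not part of nengolib:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from nengolib.synapses import PadeDelay

def check_gram_eigenvalues(sys):
    """Print the extreme eigenvalues of the controllability/observability Gramians."""
    A, B, C, D = sys.ss
    P = solve_continuous_lyapunov(A, -B.dot(B.T))    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T.dot(C))  # observability Gramian
    for name, G in (("controllability", P), ("observability", Q)):
        w = np.linalg.eigvalsh((G + G.T) / 2)  # symmetrize before the eigendecomposition
        print("%s Gramian: min eig = %.3g, max eig = %.3g" % (name, w.min(), w.max()))
        if w.min() < 0:
            # On the machine described above this goes hugely negative
            # (e.g., ~ -1e6) for high-order delays, so balancing fails.
            print("  -> negative eigenvalues; balancing will likely fail")

check_gram_eigenvalues(PadeDelay(0.05, 60))
```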
WIP: Investigating possible differences between home machine and lab machine:
>>> from nengolib.signal import balance
>>> from nengolib.synapses import PadeDelay
>>> balance(PadeDelay(0.05, 6)).ss
(array([[-1.51916400e-01, -5.88016946e+01, 1.18643170e+00,
-1.08481343e+01, -3.08440717e+00, -7.95142204e+00],
[ 5.88016946e+01, -1.89454168e+00, 8.96232811e+01,
-7.10610958e+00, -2.63555210e+01, -2.17646585e+01],
[ 1.18643170e+00, -8.96232811e+01, -9.29476859e+00,
1.21324480e+02, 2.46707071e+01, 6.69591891e+01],
[ 1.08481343e+01, -7.10610958e+00, -1.21324480e+02,
-2.75094723e+01, -1.71953416e+02, -9.43855073e+01],
[-3.08440717e+00, 2.63555210e+01, 2.46707071e+01,
1.71953416e+02, -7.62229722e+01, -3.57787709e+02],
[ 7.95142204e+00, -2.17646585e+01, -6.69591891e+01,
-9.43855073e+01, 3.57787709e+02, -6.04926329e+02]]), array([[ 0.5508265 ],
[ -1.9275352 ],
[ -4.0742004 ],
[ -6.14595109],
[ 7.86253908],
[-12.53929554]]), array([[ 0.5508265 , 1.9275352 , -4.0742004 , 6.14595109, 7.86253908,
12.53929554]]), array([[0.]]))
$ conda env export
name: py36
channels:
- defaults
dependencies:
- backcall=0.1.0=py36_0
- blas=1.0=mkl
- bleach=3.0.2=py36_0
- ca-certificates=2018.03.07=0
- certifi=2018.10.15=py36_0
- cffi=1.11.5=py36he75722e_1
- cudatoolkit=9.0=h13b8566_0
- cudnn=7.1.2=cuda9.0_0
- cvxopt=1.2.0=py36h9e0dedd_0
- cycler=0.10.0=py36_0
- dbus=1.13.2=h714fa37_1
- decorator=4.3.0=py36_0
- entrypoints=0.2.3=py36_2
- expat=2.2.6=he6710b0_0
- fontconfig=2.13.0=h9420a91_0
- freetype=2.9.1=h8a8886c_1
- glib=2.56.2=hd408876_0
- glpk=4.65=h3ceedfd_2
- gmp=6.1.2=h6c8ec71_1
- gsl=2.4=h14c3975_4
- gst-plugins-base=1.14.0=hbbd80ab_1
- gstreamer=1.14.0=hb453b48_1
- icu=58.2=h9c2bf20_1
- intel-openmp=2019.1=144
- ipykernel=5.1.0=py36h39e3cac_0
- ipython=7.1.1=py36h39e3cac_0
- ipython_genutils=0.2.0=py36_0
- ipywidgets=7.4.2=py36_0
- jedi=0.13.1=py36_0
- jinja2=2.10=py36_0
- jpeg=9b=h024ee3a_2
- jsonschema=2.6.0=py36_0
- jupyter=1.0.0=py36_7
- jupyter_client=5.2.3=py36_0
- jupyter_console=6.0.0=py36_0
- jupyter_core=4.4.0=py36_0
- kiwisolver=1.0.1=py36hf484d3e_0
- libedit=3.1.20170329=h6b74fdf_2
- libffi=3.2.1=hd88cf55_4
- libgcc=7.2.0=h69d50b8_2
- libgcc-ng=8.2.0=hdf63c60_1
- libgfortran-ng=7.3.0=hdf63c60_0
- libpng=1.6.35=hbc83047_0
- libsodium=1.0.16=h1bed415_0
- libstdcxx-ng=8.2.0=hdf63c60_1
- libuuid=1.0.3=h1bed415_2
- libxcb=1.13=h1bed415_1
- libxml2=2.9.8=h26e45fe_1
- markupsafe=1.1.0=py36h7b6447c_0
- matplotlib=3.0.1=py36h5429711_0
- metis=5.1.0=hf484d3e_4
- mistune=0.8.4=py36h7b6447c_0
- mkl=2018.0.3=1
- mkl_fft=1.0.6=py36h7dd41cf_0
- mkl_random=1.0.1=py36h4414c95_1
- nbconvert=5.3.1=py36_0
- nbformat=4.4.0=py36_0
- nccl=1.3.5=cuda9.0_0
- ncurses=6.1=hf484d3e_0
- ninja=1.8.2=py36h6bb024c_1
- notebook=5.7.2=py36_0
- numpy=1.15.4=py36h1d66e8a_0
- numpy-base=1.15.4=py36h81de0dd_0
- openssl=1.0.2p=h14c3975_0
- pandas=0.23.4=py36h04863e7_0
- pandoc=2.2.3.2=0
- pandocfilters=1.4.2=py36_1
- parso=0.3.1=py36_0
- patsy=0.5.1=py36_0
- pcre=8.42=h439df22_0
- pexpect=4.6.0=py36_0
- pickleshare=0.7.5=py36_0
- pip=18.1=py36_0
- prometheus_client=0.4.2=py36_0
- prompt_toolkit=2.0.7=py36_0
- ptyprocess=0.6.0=py36_0
- pycparser=2.19=py36_0
- pygments=2.2.0=py36_0
- pyparsing=2.3.0=py36_0
- pyqt=5.9.2=py36h05f1152_2
- python=3.6.6=h6e4f718_2
- python-dateutil=2.7.5=py36_0
- pytorch=0.4.1=py36ha74772b_0
- pytz=2018.7=py36_0
- pyzmq=17.1.2=py36h14c3975_0
- qt=5.9.6=h8703b6f_2
- qtconsole=4.4.2=py36_0
- readline=7.0=h7b6447c_5
- scipy=1.1.0=py36hfa4b5c9_1
- seaborn=0.9.0=py36_0
- send2trash=1.5.0=py36_0
- setuptools=40.5.0=py36_0
- sip=4.19.8=py36hf484d3e_0
- six=1.11.0=py36_1
- sqlite=3.25.3=h7b6447c_0
- statsmodels=0.9.0=py36h035aef0_0
- suitesparse=5.2.0=h171a5a3_0
- tbb=2018.0.5=h6bb024c_0
- terminado=0.8.1=py36_1
- testpath=0.4.2=py36_0
- tk=8.6.8=hbc83047_0
- tornado=5.1.1=py36h7b6447c_0
- traitlets=4.3.2=py36_0
- wcwidth=0.1.7=py36_0
- webencodings=0.5.1=py36_1
- wheel=0.32.2=py36_0
- widgetsnbextension=3.4.2=py36_0
- xz=5.2.4=h14c3975_4
- zeromq=4.2.5=hf484d3e_1
- zlib=1.2.11=ha838bed_2
- pip:
- atomicwrites==1.2.1
- attrs==18.2.0
- connplotter==0.7a0
- cvxpy==1.0.10
- dill==0.2.8.2
- ecos==2.0.7.post1
- fastcache==1.0.2
- flake8==3.6.0
- future==0.17.1
- llvmlite==0.25.0
- mccabe==0.6.1
- metric-representation==0.1
- more-itertools==4.3.0
- multiprocess==0.70.6.1
- nengo==2.8.1.dev0
- nengo-spa==0.6.1.dev1
- nengolib==0.4.3.dev0
- numba==0.40.1
- osqp==0.4.1
- pluggy==0.8.0
- py==1.7.0
- pycodestyle==2.4.0
- pyflakes==2.0.0
- pynest==2.14.0
- pytest==4.0.0
- scikit-learn==0.20.1
- scs==2.0.2
- toolz==0.9.0
- topology==2.14.0
- torch==0.4.1
prefix: /home/arvoelke/anaconda3/envs/py36
If the matrices are the same, then the issue might be in the solver.
The matrices are the same (within numerical epsilon) on my lab machine, but I'm still getting different results. Need to check the solver results somehow, or something else entirely... Using the same versions of numpy and scipy on both machines. :S
This might be avoided altogether by using the legendre_delay realization. Need to verify that it can be balanced at a high number of dimensions.
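A sketch of that verification, assuming the Legendre realization is exposed as `nengolib.synapses.LegendreDelay` (as in later releases); the name referenced above (`legendre_delay`) may differ in this branch:

```python
import numpy as np
from nengolib.signal import balance
from nengolib.synapses import LegendreDelay, PadeDelay

for order in (6, 27, 60):
    for name, make in (("PadeDelay", PadeDelay), ("LegendreDelay", LegendreDelay)):
        try:
            balance(make(0.05, order))
            print("%s order=%d: balanced OK" % (name, order))
        except np.linalg.LinAlgError as err:
            # The error type may differ once the regularization below is in place.
            print("%s order=%d: %s" % (name, order, err))
```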
Needs unit test.
For whatever reason, switching my BLAS had the side-effect of introducing rounding errors into the observability Gramian, which makes the Cholesky decomposition raise `LinAlgError: *-th leading minor not positive definite`. Specifically, this occurred when balancing a 27-dimensional PadeDelay system.
The approach taken here is to perturb the eigenvalues by the smallest possible value such that they all become positive. I've arbitrarily chosen `1e-19 - eig` as the largest possible regularization, where `eig` is the smallest eigenvalue and is assumed to be no smaller than `-1e-9`. The logic is that if the smallest eigenvalue is any smaller than this, then something must be very wrong (and so it is made to fail here). The `1e-19` is an epsilon needed to make the eigenvalues positive (rather than merely non-negative). These constants may need to be tweaked or exposed. Tested manually.
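A minimal sketch of the regularization strategy just described: shift the Gramian's spectrum by just enough to make it strictly positive before the Cholesky factorization, and fail loudly if the smallest eigenvalue is too negative. The helper name and constants mirror this description rather than the exact code in lyapunov.py:

```python
import numpy as np

def regularize_gram(G, min_eig=-1e-9, eps=1e-19):
    """Perturb a (nearly) PSD Gramian so that the Cholesky factorization succeeds."""
    w = np.linalg.eigvalsh((G + G.T) / 2)  # eigenvalues of the symmetrized Gramian
    smallest = w.min()
    if smallest < min_eig:
        # Anything this negative suggests a genuine numerical problem rather
        # than BLAS rounding error, so fail here instead of masking it.
        raise np.linalg.LinAlgError(
            "Gramian eigenvalue %g is too negative to regularize" % smallest)
    if smallest <= 0:
        # Shift the spectrum by (eps - smallest); eps keeps the eigenvalues
        # strictly positive rather than merely non-negative.
        G = G + (eps - smallest) * np.eye(len(G))
    return np.linalg.cholesky(G)
```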