stanleybak opened this issue 11 months ago
It seems my system has some sort of Python library mismatch issue. It looks like initialize is trying to run Python code at [kfModel, trainset] = initialize(kfModel); (how did you set this up?)
>> falsifyVanderpol
Error using ewm><module>
Python Error: ImportError: /usr/local/MATLAB/R2023b/bin/glnxa64/../../sys/os/glnxa64/libstdc++.so.6: version `GLIBCXX_3.4.29' not
found (required by
/home/stan/anaconda3/lib/python3.9/site-packages/pandas/_libs/window/aggregations.cpython-39-x86_64-linux-gnu.so)
Error in __init__><module> (line 1)
Error in generic><module> (line 152)
Error in frame><module> (line 157)
Error in generic><module> (line 70)
Error in __init__><module> (line 1)
Error in api><module> (line 48)
Error in __init__><module> (line 48)
Error in trajectory><module> (line 4)
Error in autokoopman><module> (line 9)
Error in __init__><module> (line 24)
Error in <frozen importlib>_call_with_frames_removed (line 228)
Error in <frozen importlib>exec_module (line 850)
Error in <frozen importlib>_load_unlocked (line 680)
Error in <frozen importlib>_find_and_load_unlocked (line 986)
Error in <frozen importlib>_find_and_load (line 1007)
Error in <frozen importlib>_gcd_import (line 1030)
Error in __init__>import_module (line 127)
Error in coreFalsify (line 5)
[kfModel, trainset] = initialize(kfModel);
Error in KF/falsify (line 185)
[obj,trainset] = coreFalsify(obj);
Error in falsifyVanderpol (line 6)
[kfModel,trainset] = falsify(kfModel);
To fix the ImportError issue I did as follows:
export LD_PRELOAD=/lib/x86_64-linux-gnu/libstdc++.so.6
matlab
TODO: need to remove external/CORA from path
To fix the rendering issue when I use the LD_PRELOAD command, I needed to run MATLAB with matlab -softwareopengl to force it to use software rendering.
Also: remove breach/ext from path
=== Model Load (Elapsed: 0.53 sec) ===
Warning: sldemo_autotrans_data is not found in the current folder or on the MATLAB path, but exists in:
/home/stan/Documents/MATLAB/Examples/R2023b/simulink_automotive/AnalyzeDataFromMultipleSimulationsExample
Change the MATLAB current folder or add its folder to the MATLAB path.
Warning: Workspace for block diagram 'Autotrans_shift' was not loaded because an error occurred while loading MATLAB code: 'load sldemo_autotrans_data; %load initial conditions and other necessary data'
Warning: Could not evaluate MaskDisplay commands of block 'Autotrans_shift/Vehicle': File "/usr/local/MATLAB/R2023b/toolbox/simulink/simdemos/automotive/html/NSX_3Colors.jpg" does not exist.
Warning: Could not evaluate MaskDisplay commands of block 'Autotrans_shift/Vehicle': File "/usr/local/MATLAB/R2023b/toolbox/simulink/simdemos/automotive/html/NSX_3Colors.jpg" does not exist.
Solution: add sldemo_autotrans_data.mat to the Simulink folder next to the .mdl file.
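Alternatively (my own suggestion, based on what the warning itself recommends), the example folder that already contains the data file could simply be added to the MATLAB path:
addpath('/home/stan/Documents/MATLAB/Examples/R2023b/simulink_automotive/AnalyzeDataFromMultipleSimulationsExample') % folder reported in the warning above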
Warning: Cannot find library called 'neural'. To use this library, install Deep Learning Toolbox.
kfModel: @modelCars
--------------------------------------------------------
running falsify
initialize
Error using assert
Currently only an input interpolation of "previous" is supported for a number of control points less than
T/ak.dt
Error in KF/initialize>setCpBool (line 138)
assert(strcmp(obj.inputInterpolation,'previous'),'Currently only an input interpolation of "previous" is supported for a number of control points less than T/ak.dt')
Error in KF/initialize (line 103)
obj=setCpBool(obj);
Error in KF/falsify (line 39)
[obj, trainset] = initialize(obj);
Error in repArch22 (line 109)
[kfModel,~] = falsify(kfModel);
For this, I had: num cp=100, T = 100.000000, ak.dt = 10.000000
Error using assert
Currently only an input interpolation of "previous" is supported for a number of control points less than
T/ak.dt
For this system, since there are two inputs, obj.cpBool is a 2-column matrix. When k == 1 (first iteration) it gets set to:
1 0
1 0
1 0
1 0
1 0
1 0
1 0
1 0
1 0
1 0
Which is not all 1s. The solution is to update the assert to just look at column k (use all(obj.cpBool(:,k)) rather than ~all(obj.cpBool,'all')). Not sure how this system managed to run on your machine (maybe it was a recent update).
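A minimal sketch of that change (the surrounding setCpBool code is assumed, not copied from the repo; obj is the KF object):
% only require the k-th column of cpBool (one column per input) to be all ones,
% instead of checking the whole matrix with ~all(obj.cpBool,'all')
for k = 1:size(obj.cpBool,2)
    if ~all(obj.cpBool(:,k))
        assert(strcmp(obj.inputInterpolation,'previous'), ...
            'Currently only an input interpolation of "previous" is supported for a number of control points less than T/ak.dt')
    end
end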
This is with the NN model:
Error using STL_Formula>parenthesisly_balanced_split
STL_Parse: expression alw_[1,37](((not (abs(x1[t]- u1[t] - 0.03*abs(u1[t] + 0.005 > 0)) or ev_[0,2](alw_[0,1](not (0.03*abs(u1[t]
+ 0.005 - abs(x1[t]- u1[t] <= 0))))):Too many (4) opening parenthesis in expr
I think this one comes from coraBreachConvert.m. The string at the start looks right, but after all the replacements the parentheses are mismatched:
start stl string: G[1,37](((~(abs(x1 - u1) - 0.03*abs(u1) + 0.005 > 0)) | F[0,2](G[0,1](~(0.03*abs(u1) + 0.005 - abs(x1 - u1) <= 0)))))
end stl string: alw_[1,37](((not (abs(x1[t]- u1[t] - 0.03*abs(u1[t] + 0.005 > 0)) or ev_[0,2](alw_[0,1](not (0.03*abs(u1[t] + 0.005 - abs(x1[t]- u1[t] <= 0)))))
The problem is in the variable replacement loop, where for example they replace u1 by u1[t]:
init: alw_[1,37](((not (abs(x1 - u1) - 0.03*abs(u1) + 0.005 > 0)) or ev_[0,2](alw_[0,1](not (0.03*abs(u1) + 0.005 - abs(x1 - u1) <= 0)))))
Replacing '(\w*)u1([^0-9]\w*)' with 'u1[t]' and got: alw_[1,37](((not (abs(x1 - u1[t] - 0.03*abs(u1[t] + 0.005 > 0)) or ev_[0,2](alw_[0,1](not (0.03*abs(u1[t] + 0.005 - abs(x1 - u1[t] <= 0)))))
The regular expression replaces u1) instead of just u1.
The issue is that the [^0-9] part consumes one character of the input string. Look at the following trace in AT, where it doesn't really cause a problem since it only consumes the space:
stl string: alw_[0,100](x5 - x4 <= 40)
Replacing '(\w*)x1([^0-9]\w*)' with 'x1[t]' and got: alw_[0,100](x5 - x4 <= 40)
Replacing '(\w*)x2([^0-9]\w*)' with 'x2[t]' and got: alw_[0,100](x5 - x4 <= 40)
Replacing '(\w*)x3([^0-9]\w*)' with 'x3[t]' and got: alw_[0,100](x5 - x4 <= 40)
Replacing '(\w*)x4([^0-9]\w*)' with 'x4[t]' and got: alw_[0,100](x5 - x4[t]<= 40)
Replacing '(\w*)x5([^0-9]\w*)' with 'x5[t]' and got: alw_[0,100](x5[t]- x4[t]<= 40)
I'm not sure why (\w*) was in the old regexp... it would match strings like prefix_x1_suffix. I suggest replacing it as follows, which uses a lookahead to make sure x1 matches but x10 doesn't, without consuming the following character:
for k = 1:numel(cora_stl.variables)
    % negative lookahead: match e.g. x1 but not x10, without consuming the next character
    old = strcat(cora_stl.variables{k},'(?![0-9])');
    new = strcat(cora_stl.variables{k},'[t]');
    stl = regexprep(stl,old,new);
end
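As a quick sanity check (my own snippet, not from the repo), applying the lookahead version to the AT formula above keeps the spacing and parentheses intact:
stl = 'alw_[0,100](x5 - x4 <= 40)';
stl = regexprep(stl, 'x4(?![0-9])', 'x4[t]');   % 'x4' is replaced, the following space is kept
stl = regexprep(stl, 'x5(?![0-9])', 'x5[t]');
% stl is now 'alw_[0,100](x5[t] - x4[t] <= 40)'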
Here's a ChatGPT explanation of the syntax: In the regular expression (?![0-9]), the ?, !, and [0-9] components serve specific functions:
(?![0-9]): This entire construct is known as a negative lookahead. Let's break it down:
(?...): The parentheses () with a question mark ? right after the opening parenthesis indicate a lookahead or lookbehind assertion in regex syntax. These are types of zero-width assertions, meaning they don't consume any characters on the string; they just assert whether a match is possible or not.
!: Inside the lookahead, the exclamation mark ! signifies that it is a negative lookahead. A negative lookahead asserts that whatever pattern follows it must not exist at the current position in the string. In other words, it looks ahead to check that the specified pattern is not there, but doesn't consume any of the string in doing so.
[0-9]: This is a character class that matches any single digit from 0 to 9.
So, (?![0-9]) in a regex pattern asserts that at the current position in the string, the next character cannot be a digit from 0 to 9. This is used to ensure that a particular pattern is only matched if it's not immediately followed by another specific pattern, in this case a digit.
The last two issues were due to recent updates, thanks, I will amend the repo with the updated solutions! Makes a good case for a test suite as well for future updates.
Not really a bug, just curious: in getDispSampleXU are you randomly perturbing the optimal solution by 1%? I didn't see this in the paper.
In falsify.m, line 152:
obj.soln.falsified = ~isreal(sqrt(robustness)); %sqrt of -ve values are imaginary
Why not just check if robustness < 0?
The default rank search is weird in KF.m: obj.ak.rank=[1,20,4]. This tries ranks of 1, 5, 9, 13, and 17. It seemed the intention was to go up to 20.
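For reference (my own one-liner, reading [1,20,4] as [min, max, step]):
ranks = 1:4:20   % = [1 5 9 13 17], never reaching 20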
Not really a bug, just curious: in getDispSampleXU are you randomly perturbing the optimal solution by 1%? I didn't see this in the paper.
This was an experiment early on to train only with trajectories that improve the model. The experiments in the paper do not use this technique. The idea was to perturb the current trajectory, add it to the trainset and compute robustness with the new model; if robustness is improved, the trajectory is permanently added, if not, it is removed and another perturbed trajectory is generated. This setting can be activated by setting kf.trainRand = 2 (see the KF class file for more info). This approach could be improved by leveraging search-based optimization techniques and I plan to look into it.
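A rough sketch of that loop as I understand it (perturbTrajectory, addTrajectory, retrainKoopmanModel, computeRobustness and removeTrajectory are hypothetical names, not the actual KF functions):
% keep a perturbed trajectory only if retraining on it improves (lowers) robustness
while ~falsified
    [xPert, uPert] = perturbTrajectory(xBest, uBest, 0.01);   % ~1% random perturbation (hypothetical helper)
    trainset = addTrajectory(trainset, xPert, uPert);          % tentatively add to the training set
    model = retrainKoopmanModel(trainset);                     % hypothetical retraining step
    rob = computeRobustness(model, spec);                      % robustness under the new model
    if rob < bestRob
        bestRob = rob;                                         % improved: keep the trajectory
    else
        trainset = removeTrajectory(trainset, xPert, uPert);   % no improvement: discard and retry
    end
end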
In falsify.m, line 152:
obj.soln.falsified = ~isreal(sqrt(robustness)); %sqrt of -ve values are imaginary
Why not just check if robustness < 0?
That was just my quick solution to have it in one line ;). For readability and performance it should be changed to robustness < 0. In fact, I want to rewrite the falsify function for better readability and less code duplication.
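i.e. the suggested form would simply be:
obj.soln.falsified = robustness < 0;   % negative robustness means the spec is violated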
The default rank search is weird in KF.m: obj.ak.rank=[1,20,4]. This tries ranks of 1, 5, 9, 13, and 17. It seemed the intention was to go up to 20.
Interestingly, for the experiments, this configuration worked best. Possibly the rank values examined resulted in a good model. In fact, for our Koopman paper we had a similar rank setup of [1,200,40], which also worked best. I also noticed that a smaller step resulted in worse results, as for a low number of simulations the tuner typically chose a very small rank (2-3), which was a bad approximation of the model. This issue should be pinned to be investigated.
Not an error, but an observation:
Without normalization the learned model is very heavily biased towards minimizing errors in whichever variable has the largest scale. For example, in AT the RPM is in the thousands. The Koopman observables are all small, since they are cosines of different frequencies/phases multiplied by sqrt(1/nObs), and are therefore in the range [-sqrt(1/nObs), sqrt(1/nObs)], which is always smaller than [-1, 1]. Thus the squared error is dominated almost entirely by the RPM variable, both in the regression in eDMD and in the hyperparameter tuning score function in autokoopman.
Also: in the earlier RFF paper we have sqrt(2) in front of the observables... but in the AT code we use sqrt(1/nObs). Further, this factor seems to be in conflict with the sentence "Note that we can omit the constant factor m since extended dynamic mode decomposition will automatically scale the observables accordingly." @EthanJamesLew is this still correct if eDMD includes the original state variables, such as the non-scaled RPM? After all, it's minimizing the sum of squared errors.
What is interesting is that without normalization, Autokoopman learns the model quite well (even for the large difference in scales). See the figure below for training with 1 trajectory.
Normalization leads to highly irregular behavior, mainly when the number of training trajectories is low. See the figure below for training with 1 trajectory (learnt trajectory is barely visible on the left side of the figure).
In the code we use sqrt(2/nObs), as it is set in the Autokoopman toolbox. The reason it may look like sqrt(1/nObs) is that we typically use 20 observables and internally the function converts sqrt(2/20) to sqrt(1/10). See learnKoopModel, line 55.
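For illustration only (my own sketch, not the learnKoopModel code; the state dimension and sampling distributions are assumed), random Fourier feature observables with that scale factor look like:
nObs = 20;                              % number of random Fourier features
nStates = 4;                            % assumed state dimension
W = randn(nObs, nStates);               % random frequencies
b = 2*pi*rand(nObs, 1);                 % random phases
g = @(x) sqrt(2/nObs) * cos(W*x + b);   % sqrt(2/20) == sqrt(1/10), hence the apparent sqrt(1/nObs)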
This may be because you're plotting 2 of the 22 variables and so the error looks different on the other ones. I want to look at it closer.
How did you create these images? The AT plot looks different than the first simulation I get.
These can be recreated using falsifyTransmission.m. Note that the images might look slightly different due to repeatability issues on different devices, but they should still be similar.
I'm using this issue to document some install issues:
In setupKF it looks like it's using the old name of the install script (line 5):
filePath = which('installKF');
This should be updated to the correct name, setupKF.
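A minimal one-line fix (my sketch, assuming the script was simply renamed from installKF to setupKF):
filePath = which('setupKF');   % was: which('installKF')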