OpenFAST / openfast

Main repository for the NREL-supported OpenFAST whole-turbine and FAST.Farm wind farm simulation codes.
http://openfast.readthedocs.io
Apache License 2.0

Problems – Regression Tests #327

Closed: bmazetto closed this issue 4 years ago

bmazetto commented 5 years ago

Hello everyone, I'm having difficulty passing all of the available regression tests. I always get the following results:

The following tests FAILED:

   9 - UAE_Dnwind_YRamp_WSt (Failed)
  16 - SWRT_YFree_VS_WTurb (Failed)
  19 - 5MW_OC3Trpd_DLL_WSt_WavesReg (Failed)
  20 - 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth (Failed)
  21 - 5MW_ITIBarge_DLL_WTurb_WavesIrr (Failed)
  23 - 5MW_OC3Spar_DLL_WTurb_WavesIrr (Failed)

Errors while running CTest

I’m using:

I’ve already tried some of the suggestions in issue #274, such as:

Unfortunately, none of the above helped. Has anyone else experienced the same issues? What can I do to resolve these failures?

I've also compared the results with the baseline and found that:

  1. UAE_Dnwind_YRamp_WSt -> Local and baseline results are very similar.
  2. SWRT_YFree_VS_WTurb -> Slightly different results. The most relevant are: [plots attached]
  3. 5MW_OC3Trpd_DLL_WSt_WavesReg -> Local and baseline results are very similar.
  4. 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth -> There were some important differences: [plots attached]
  5. 5MW_ITIBarge_DLL_WTurb_WavesIrr -> Many different results: [plots attached]
  6. 5MW_OC3Spar_DLL_WTurb_WavesIrr -> Many different results: [plots attached]

andrew-platt commented 5 years ago

The difference in the Wave1Elev trace for the 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth case suggests a difference in the random number generator used. If so, it might also account for the differences in the other irregular wave cases (21, 23). @rafmudaf, have you seen issues in matching the random number generators with Cygwin?

rafmudaf commented 5 years ago

Yes, @andrew-platt, I think that's spot on. We have an open issue on that topic: https://github.com/OpenFAST/openfast/issues/89.

bmazetto commented 5 years ago

Thank you for the comments, @andrew-platt and @rafmudaf. After running the Python script to plot the results and comparing them to the macOS GNU baseline, I found that:

  1. UAE_Dnwind_YRamp_WSt -> Local and baseline results are very similar.
  2. SWRT_YFree_VS_WTurb -> Slightly different results. The most relevant are: [plots attached]
  3. 5MW_OC3Trpd_DLL_WSt_WavesReg -> Local and baseline results are very similar.
  4. 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth -> Local and baseline results are very similar.
  5. 5MW_ITIBarge_DLL_WTurb_WavesIrr -> Local and baseline results are very similar.
  6. 5MW_OC3Spar_DLL_WTurb_WavesIrr -> Local and baseline results are very similar.
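A minimal sketch of this kind of side-by-side check, for anyone reproducing it outside the test harness (this is not the official reg_tests plotting script; the paths and channel name are placeholders, and it assumes the standard OpenFAST text output layout of descriptive header lines, a channel-name row, a units row, then data; adjust `skiprows` if your files differ):

```python
# Sketch: overlay a local OpenFAST .out channel against the committed baseline.
import pandas as pd
import matplotlib.pyplot as plt

def read_fast_out(path, header_lines=6):
    """Read an OpenFAST text .out file into a DataFrame, dropping the units row."""
    df = pd.read_csv(path, sep=r"\s+", skiprows=header_lines, header=0)
    return df.drop(index=0).astype(float).reset_index(drop=True)

# hypothetical paths -- point these at your local run and the baseline copy
local = read_fast_out("SWRT_YFree_VS_WTurb/SWRT_YFree_VS_WTurb.out")
baseline = read_fast_out("baseline/SWRT_YFree_VS_WTurb.out")

channel = "YawBrFzn"  # any channel flagged by the comparison
plt.plot(local["Time"], local[channel], label="local")
plt.plot(baseline["Time"], baseline[channel], "--", label="baseline")
plt.xlabel("Time (s)")
plt.ylabel(channel)
plt.legend()
plt.show()
```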

bjonkman commented 5 years ago

The SWRT model has an instability, and I would not be concerned about any differences around that period of instability. In fact, I have noticed that almost any change to the code or compiler options will affect the results around 20 seconds.

Lyudyu commented 1 year ago

Dear all,

Sorry to disturb you; I'm asking for your expert opinion to help me understand/decide whether I can consider the two "FAIL" reg-tests I got acceptable and, therefore, proceed with the executable I compiled. The two are: SWRT_YFree_VS_WTurb (the 5 highlighted parameters in the .html file are TwrClrnc1, TwrClrnc2, TwrClrnc3, LSShftFxs, and YawBrFzn) and 5MW_ITIBarge_DLL_WTurb_WavesIrr (there's only 1 highlighted parameter in the .html file, Azimuth, which actually looks quite acceptable to me, while others differ far more).

From what I've been reading in other issues here on GitHub, it's fairly common for these two tests to fail, since they're more sensitive to numerical roundoff. In particular: 1) SWRT_YFree_VS_WTurb commonly fails because of an instability around 20 s; 2) the cases with "Irr" in their names usually have (or used to have, unless it has since been fixed?) problems due to the random number generator used for irregular waves.
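To make "sensitive to numerical roundoff" concrete, a per-channel tolerance check can be sketched in numpy's standard relative/absolute formulation (an illustration only; the actual pass/fail norm lives in the reg_tests scripts and is not identical to this):

```python
# Illustration: a channel "passes" if it stays within a combined
# relative/absolute tolerance of the baseline. Roundoff-scale differences
# can tip a marginal channel past the threshold, which is why sensitive
# cases flip between pass and fail across compilers and settings.
import numpy as np

def channel_passes(local, baseline, rtol=1e-5, atol=1e-5):
    # passes if |local - baseline| <= atol + rtol * |baseline| everywhere
    return np.allclose(local, baseline, rtol=rtol, atol=atol)

rng = np.random.default_rng(0)
baseline = np.sin(np.linspace(0.0, 60.0, 6001))               # stand-in signal
local = baseline + 1e-7 * rng.standard_normal(baseline.size)  # roundoff-scale noise
print(channel_passes(local, baseline))  # True: noise stays below the tolerance
```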

I'm using: Windows 11 (64-bit), Visual Studio Community 2019 (v16.11.22), Intel(R) Fortran Compiler Classic 2021.8.0, and OpenFAST-v3.3.0, plus Anaconda3 (Anaconda v2022.10; Python v3.9.13) for the final part of the procedure to execute the reg_tests.

In the zipped files I'm attaching, you can find the following files for both reg_tests: the .html, the .out, the .log, and the Excel file with the plots I made.

Thank you so much for your time!

Lyudyu

SWRT_YFree_VS_WTurb.zip 5MW_ITIBarge_DLL_WTurb_WavesIrr.zip

jjonkman commented 1 year ago

Dear @Lyudyu,

Your results look fine to me. For the 5MW_ITIBarge_DLL_WTurb_WavesIrr case, the difference is purely numerical, likely the result of differences in the compiler/settings. This case uses the RANLUX pseudo-random number generator to generate the irregular wave loads, which should not differ between compilers. I tend not to look closely at the SWRT* model test results, because this model will not produce meaningful results until the furling and tail-fin aerodynamic capability is added back into OpenFAST.

Best regards,

Lyudyu commented 1 year ago

Dear @jjonkman,

Thank you so much for your help; I really appreciate it!

There's one detail regarding building the executable in Visual Studio that I noticed: following the instructions in the user manual, I selected x64, but when I opened the Configuration Manager (just to double-check), I found that the FAST_Registry project was set to Win32 rather than x64. Is that normal, or should we make sure the FAST_Registry is also set to x64 by selecting it manually?

Also, forgive me for bothering you with one more, probably obvious, question: since the reg_tests are meant to test the "Release_Double" configuration rather than the "Release" one, does passing those tests with the Release_Double build ensure that the Release build (built subsequently from the same folder) will also be accurate enough? Or should we run the reg_tests with the Release build too?

Thank you very much again. Best regards,

Lyudyu

jjonkman commented 1 year ago

Dear @Lyudyu,

Here are my responses:

Best regards,

Lyudyu commented 1 year ago

Dear @jjonkman,

Thank you so much again for your precious help!

I'm really sorry to bother you again. The executable I want to use is indeed the single-precision "Release" build, so I ran the r-tests on it as you suggested. After reading your reply I was expecting worse results than with the "Release_Double" build, but I don't know whether what I got is acceptable or too much: this time I got 9 failures rather than just 2. Two of those 9 actually "failed to complete", but their names end in _Linear, and I remember reading in other issues here on GitHub that those are not real tests, so I guess I shouldn't worry about them.

The 9 failures mentioned above (with the highlighted parameters that caused them to fail) are:

The ones I'm most concerned about are 5MW_OC4Jckt_ExtPtfm (which seems to have significant differences) and HelicalWake_OLAF (which seems to have many significant differences).

In the zipped files I'm attaching, you can find the following files for all of the failed r-tests: the .html, the .out, the .log, and the Excel file with the plots I made. Could you please take a look at them when you have time and let me know what you think?

(In case it helps in deciding whether my results are acceptable, I'll point out that the simulations I need to run will focus on floating offshore wind turbines.)

Thank you very much for your time! Kind regards,

Lyudyu

5MW_ITIBarge_DLL_WTurb_WavesIrr.zip 5MW_Land_DLL_WTurb.zip 5MW_OC3Spar_DLL_WTurb_WavesIrr.zip 5MW_OC4Jckt_ExtPtfm.zip 5MW_TLP_DLL_WTurb_WavesIrr_WavesMulti.zip HelicalWake_OLAF.zip Ideal_Beam_Fixed_Free_Linear.zip Ideal_Beam_Free_Free_Linear.zip SWRT_YFree_VS_WTurb.zip

jjonkman commented 1 year ago

Dear @Lyudyu,

Just a few comments:

Best regards,

Lyudyu commented 1 year ago

Dear @jjonkman,

I’m really thankful for your help!

I followed your suggestion regarding the two IdealBeam*_Linear cases: I had to increase the fictitious length from 5e-5 to 7e-4 to avoid the "failed to complete" scenario. With that change, both cases ran to completion, although unfortunately they both still failed the comparison. Since they didn't generate any .html file (which I guess is normal, since none of the cases ending in _Linear did either), I had no hint as to which parameter(s) caused the comparison to fail, so I compared the files in KDiff and found quite a few differences. I'll attach the two folders, and I'd really appreciate your opinion on them when you have time.

By the way, regarding the other failed cases mentioned in my previous comment, I realized you might be more used to looking at plots generated by the provided Python script, so I'll also attach the .html file with plots for the HelicalWake_OLAF case and, for the other 6 cases, only screenshots of the plots of the failing parameter(s) (otherwise I would have exceeded the 25 MB limit).

Regarding the HelicalWake_OLAF case, I'd be very grateful if @andrew-platt or @ebranlard could take a look at it when they have a moment.

One last thing I forgot to ask last time, which I've been wondering might be the cause of some of these failures: in the "3.2. Regression tests" chapter, the last step of section "3.2.4.4.1. Windows with Visual Studio regression test" says to type "python manualRegressionTest.py ..\build\bin\openfast_x64_Double.exe Windows Intel 1e-5". This doesn't actually work, because the Anaconda command prompt asks for a different syntax, "Executable-Name Relative-Tolerance Absolute-Tolerance", so I modified the suggested command to "python manualRegressionTest.py ..\build\bin\openfast_x64.exe 1e-5 1e-5", assuming the relative and absolute tolerances take the same value, since I didn't find any recommendation on this. Is that assumption correct, or should I be using other values?

Thank you so much for your time and patience, Best regards,

Ideal_Beam_Fixed_Free_Linear.zip Ideal_Beam_Free_Free_Linear.zip Plots from Python.zip

ebranlard commented 1 year ago

Hi @bmazetto. I had a quick look at your OLAF results. I can see a weird spike around 37 s in the elastic loads. Apart from that, most signals, in particular the aerodynamic ones, appear to match fairly well. I wouldn't worry about it, because you are using an older compiler. Try using gfortran 10.3 if you can, and compile in double precision.

Lyudyu commented 1 year ago

Dear @ebranlard,

Thank you so much for finding the time to have a look at my results.

Actually, the compiler I'm using is Intel(R) Fortran Compiler Classic 2021.8.0. (I'm not the same person who opened this issue; I just commented here a few days ago. About eight comments above, you can find the list of what I'm using.)

I've already compiled in double precision, and the OLAF case passed the test. Now I'm testing single precision, since that's what I'll need to use, and I'm trying to understand whether the differences in results are acceptable.

Kind regards,

jjonkman commented 1 year ago

Dear @Lyudyu,

I'll let @ebranlard or @andrew-platt comment regarding the OLAF case in single precision. And I'll let @andrew-platt or @rafmudaf comment regarding the Python script syntax.

Regarding the IdealBeam* cases, I do see differences in the resulting .lin files between your single-precision and the original double-precision versions, but this is to be expected given the differences in numerical roundoff between the two versions. Nothing stands out to me as a major difference. I'm not sure what you mean when you refer to "KDiff". That said, if you are concerned about the differences, I would suggest computing the eigensolution of the state matrix "A" stored in the resulting .lin files to ensure that the eigenfrequencies of the beams are consistent between the two sets of results. This would provide a physically meaningful assessment of the differing results in the .lin files. The MATLAB and Python toolboxes provide scripts for performing an eigenanalysis on the "A" matrices stored in .lin files.
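A minimal sketch of that eigenanalysis (assuming the "A" matrix has already been parsed out of the .lin file, e.g., with the toolbox readers; the parsing itself is omitted here):

```python
# Sketch: natural frequencies and damping ratios from the eigenvalues of a
# state matrix A, to compare single- and double-precision .lin results.
import numpy as np

def eigen_frequencies(A):
    """Return natural frequencies (Hz) and damping ratios from A."""
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]      # keep one of each complex-conjugate pair
    omega_n = np.abs(lam)            # natural frequency, rad/s
    zeta = -np.real(lam) / omega_n   # damping ratio
    return omega_n / (2.0 * np.pi), zeta

# toy check: a 1-DOF oscillator with f_n = 1 Hz and zeta = 0.05
wn, z = 2.0 * np.pi * 1.0, 0.05
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * z * wn]])
print(eigen_frequencies(A))  # ~ (array([1.]), array([0.05]))
```

If the frequencies and damping ratios agree to several significant digits between the two builds, the raw .lin differences are plausibly just roundoff.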

Best regards,

Lyudyu commented 1 year ago

Dear @jjonkman,

Sorry for the late reply and thank you so much for your detailed answer!

What you said about the absence of major differences is comforting, and thank you for suggesting a way to assess the physical meaning of the differing results. Regarding "KDiff", I was actually referring to the software KDiff3 (sorry, I should have been clearer).

If @ebranlard or @andrew-platt could comment on the OLAF case in single precision considering the details pointed out in my previous comment, that would be very helpful.

Also, I would be really thankful if @andrew-platt or @rafmudaf could comment regarding the Python script syntax.

Thank you all for your time!

Best regards,

ebranlard commented 1 year ago

Hi, I wouldn't worry about the difference between single and double precision in the OLAF case at this time.

bjonkman commented 1 year ago

Regarding the Python script syntax: the tolerances have different meanings after the changes in https://github.com/OpenFAST/openfast/pull/1217, but the documentation hasn't been updated yet (see https://github.com/OpenFAST/openfast/issues/1225). Based on the defaults listed in #1217, this is what I have been using: python manualRegressionTest.py -p -v C:\openfast\build\bin\openfast_x64.exe 2 1.9

andrew-platt commented 1 year ago

There is a PR started to address the documentation on the manual regression tests: #1419 (a few more details will be added prior to merging).

Lyudyu commented 1 year ago

Dear @ebranlard, @bjonkman and @andrew-platt,

Thank you all so much for your help!!

Sorry to bother you, @bjonkman. I was starting to feel quite confident that my results were acceptable (meaning that the failures could simply be attributed to small differences between precisions and numerical roundoff), but then I reran the r_tests (in single precision) using the updated tolerance values you mentioned (I also read #1217) and got far more failures: 23 instead of my previous 9. Is that normal? Actually, 4 of those 23 are the tests stated to have been disabled in #1217 (specifically: 5MW_ITIBarge_DLL_WTurb_WavesIrr, 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth, SWRT_YFree_VS_WTurb, and UAE_Dnwind_YRamp_WSt), so I guess I could/should disregard those 4 failed tests, right?

I'm attaching screenshots of the plots of the parameters that caused the remaining 19 tests to fail, and I would be really grateful if you could take a quick look at them when you have time and let me know whether you think they are acceptable (especially 5MW_OC4Jckt_ExtPtfm and HelicalWake_OLAF). (For each failed test I've also added a .txt file noting which parameters I'm most worried about.)

In case it helps, I'll add that I also ran the r_tests with the updated tolerance values in double precision and got 4 failures (the same 4 stated to have been disabled in #1217) instead of my previous 2 (which were 2 of those 4: 5MW_ITIBarge_DLL_WTurb_WavesIrr and SWRT_YFree_VS_WTurb).

Thank you very much for your time and patience!!

Best regards,

Failed r_tests plots.zip

Lyudyu commented 1 year ago

Also, in case I couldn't/shouldn't disregard those 4 failed tests, I'm attaching screenshots of their plots too; the ones I'm most worried about are 5MW_ITIBarge_DLL_WTurb_WavesIrr and 5MW_OC4Jckt_DLL_WTurb_WavesIrr_MGrowth.

Thank you so much again!

Best regards,

Failed r_tests plots_2.zip

Lyudyu commented 1 year ago

Dear @bjonkman and @jjonkman,

I am really sorry to bother you again with this. I was just wondering whether you (and @ebranlard, when it comes to the OLAF case) could find a little time to take a look at my last two comments. I am back working on my thesis, and having your expert opinion on what I asked above would be a huge help!

Thank you very much in advance!!

Kind regards,