Hi Jayant, thanks for being so prompt in handling this very important issue. I'm currently on vacation in Scotland, so I'll be brief and just add a few comments: a. As you wrote, validation is very different from test cases. Meshes, configurations, and solutions should be optimized for accuracy, reproduction of the experiment, and performance. b. I think it is important to start the validation project, create a link to it from the main SU2 web page, and then let it grow with the existing mechanism of community contributions with approvals. Your list is a very fine beginning; however, I think we should strive to enrich it with 3D and time-dependent cases.
I hope to be able to stay in the loop and contribute to this important effort. Best, Eran
On Wed, Sep 19, 2018 at 8:26 PM, Jayant Mukhopadhaya wrote:
Hey everyone,
Following the discussions at the SU2 Developers meeting this week, I wanted to start a conversation about compiling a comprehensive set of V&V cases for SU2 that can showcase its performance in comparison to other solvers.
I think the NASA TMR website https://turbmodels.larc.nasa.gov/index.html is a good model to base it on. The idea would be to present the V&V case, provide working configuration and mesh files, and provide results comparing performance to other solvers and to higher-fidelity data (when available). This allows people to see the performance of SU2 and replicate it, if need be.
The first step would be compiling a list of cases that should be covered. The SU2 2014 SciTech paper https://su2code.github.io/documents/SU2_AIAA_SciTech2014.pdf would be a good starting point, as it already had a couple of validation cases. This list can be bolstered with some of the NASA TMR cases and with grid convergence studies. I would like to propose an initial list that the community can talk through and change as we see fit. I am mostly only familiar with the canonical CFD flows used in these cases, but it would be great to have other cases, such as turbomachinery or FSI cases, that show the full breadth of SU2's abilities. This is by no means an exhaustive list:
- Zero Gradient Flat Plate
- 2D and 3D Bump in Channel
- Asymmetric Diffuser
- Backward-Facing Step
- Unsteady Square Cylinder
- NACA0012
- NACA4412 Trailing Edge Separation
- Joukowski Airfoil
- 30P30N High-Lift Airfoil
- ONERA M6 Wing
- NASA CRM
- Subsonic and Supersonic Jets
It would be ideal for these test cases to have high-fidelity data (wind tunnel tests, or LES/DNS data) and/or published results from other solvers, so that comparisons can be made.
I also want to point out the difference between this and the TestCases repository. The TestCases repo is used in regression tests to ensure that parts of the code don't break when changes are made. This is more a test of SU2's fidelity. It would feature large grids and configuration files that can be run to convergence.
Hi @jayantmukho, great initiative indeed. I think we should coordinate the efforts between setting up "fast-running" tutorials (to get started with the code) and V&V cases. The list is definitely a good start, but same as Eran, I would also like to see some more unsteady cases there. Let's reach out to the community and get this going. Thanks, Ruben
Just one thought on this: I think it would be nice to run the validation suite for every release; that would force us to keep the configurations up to date and provide an extra level of quality assurance. Most of the cases will be too expensive for Travis, but hopefully the computational burden can be spread over the community. Cheers, Pedro
Thanks, Pedro, I think that's a great idea. As you mention, we would have to work out the logistics, as this would require quite a bit of involvement from the community, but the extra burden on each release would be compensated by avoiding big "updating" operations every now and then. Also, since most users only work with the released versions, it's a way to ensure they always know what updates have happened and which new features are available.
All,
Thanks for getting this going. My two cents:
Indeed, while these V&V cases and the TestCases directory/repo are not identical, there is some overlap; some of the TestCases are definitely validation cases too. We can continue to add test cases to the TestCases repo, knowing that only a subset of those cases belong in the V&V list.
An important aspect of the V&V is the convergence of the solution as the mesh is refined. As Jayant knows well, this can help us catch bugs. This means that a well-constructed V&V suite needs to include a series of meshes (of increasing density), the corresponding configuration files, and the actual experimental data (or other numerical data from runs on different solvers).
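As a concrete illustration of what such a mesh series buys us, here is a minimal sketch (with made-up numbers, not real SU2 results) of extracting the observed order of accuracy from three systematically refined meshes with a constant refinement ratio:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Estimate the observed order of accuracy from a scalar output
    (e.g. a drag coefficient) computed on three systematically refined
    meshes with a constant refinement ratio between levels."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(refinement_ratio)

# Illustrative values only (not real SU2 results): a drag coefficient on the
# coarse, medium, and fine meshes of a family with refinement ratio r = 2.
p = observed_order(0.02860, 0.02821, 0.02811, 2.0)
print(f"observed order of accuracy ~ {p:.2f}")   # ~ 1.96 for these numbers
```

A value far from the scheme's formal order on well-resolved meshes is exactly the kind of red flag that helps catch bugs.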
I strongly agree with the suggestions made that (a) the entire V&V suite needs to be run before every major release (with configuration files updated), (b) this should be linked from the main SU2 page, (c) the 2014 AIAA paper (and Tom's AVIATION 2018 paper) should serve as a starting point, (d) the NASA TMR website can give us ideas for additional cases, and (e) the SU2 V&V page should be managed within GitHub.com so the entire community can edit and add to these pages to continue to grow the number of cases and their relevance.
Best,
Juan
All,
Regarding the steady test cases, the Embraer folks published a paper on their efforts to validate SU2 using the DPW and HLPW geometries (https://doi.org/10.2514/6.2018-2845) at the last AVIATION. I think they will be happy to contribute the meshes and config files.
Also, I can contribute with some unsteady cases. I think that the Backward Facing Step and the Tandem Cylinder test cases are a good start.
Best,
Eduardo
Hi all, I'm a bit worried about the logistics of checking this V&V database for every new release. Unlike the tutorials, these cases will by their nature be large and will require long integrations (the 2D cases might not fall into this category). This also means that significant computational resources will be required for the evaluation (roughly twice a year, for a growing list of cases). Is it practical? The only way I think it might work is if each contributor is responsible for checking the cases that they introduced before each release. Being a voluntary organization, this cannot be enforced (and we do not want to enforce it). How about being less demanding: each validation case would carry a statement of the last version it was checked with and the responsible contributor. Each contributor would receive a recommendation to check their cases before a new release, and would be able to do that and update the OK label after the release as well. Less watertight, but it might be more workable. What do you think? Eran
Hello, I'm so glad to see that the SU2 meeting has been so productive! To address Eran's concern, maybe it would be more reasonable to run V&V on major releases only, i.e. 7.0 but not 7.1. I agree that keeping a record of the most recent version checked is a good idea, and, as necessary, minor releases could be tested, as suggested by individual developers.
The regression tests should ensure that the V&V results are unlikely to change. One thing we could do to reduce that risk further would be to introduce regression tests that compare solution files rather than the terminal output alone. I can take a stab at that if there are no other volunteers. It probably only needs to be a couple of critical tests; the file diff may be slightly more expensive than what we do currently, but given the increase in test precision and detail, I think it would be worth it.
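A rough sketch of what such a solution-file comparison could look like, in Python, assuming the solutions are written as ASCII/CSV files with a header row; the file names, column handling, and tolerance here are hypothetical, and a real version would hook into the existing regression scripts:

```python
import csv
import sys

def max_relative_diff(file_a, file_b, skip_columns=("PointID",)):
    """Compare two ASCII solution files column by column and return the
    largest relative difference found (assumed format: CSV with a header
    row and one mesh point per line)."""
    with open(file_a) as fa, open(file_b) as fb:
        rows_a, rows_b = csv.DictReader(fa), csv.DictReader(fb)
        worst = 0.0
        for ra, rb in zip(rows_a, rows_b):
            for key, val_a in ra.items():
                if key in skip_columns:
                    continue
                a, b = float(val_a), float(rb[key])
                denom = max(abs(a), abs(b), 1e-12)
                worst = max(worst, abs(a - b) / denom)
    return worst

if __name__ == "__main__":
    # e.g. python diff_solution.py restart_flow.csv restart_flow_reference.csv
    diff = max_relative_diff(sys.argv[1], sys.argv[2])
    tolerance = 1e-6  # illustrative tolerance only
    print(f"max relative difference: {diff:.3e}")
    sys.exit(0 if diff < tolerance else 1)
```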
H
Dear all,
As I mentioned at the dev meeting earlier this week, regarding V&V, I already have joint plans with Eduardo Molina to look at a number of turbulent benchmark cases using his (E)DDES implementation (e.g. square cylinder, tandem cylinders, massively separated NACA0012/21, etc.). The next step after that would be to use my FWH implementation to compute far-field noise and compare it to the BANC workshop cases. The NASA folks are very open to sharing the BANC data, and we will likely be able to obtain the European VALIANT project data too. I am on a short vacation right now but will start iterating with Eduardo on this next week when I am back in the office.
Best,
Beckett
@hlkline That's a good point about the regression tests ensuring that the V&V results are unlikely to change. If we have the same residuals for all the test cases, it isn't a stretch to say that the final results of the V&V cases will stay the same. But sometimes the test values for the regression tests are changed during development, and then we lose that guarantee.
From the standpoint of rigor, I agree with the suggestion to run the V&V cases before a major release like 7.0, and also with @erangit on keeping track of the last version they were run for. I am unsure about holding the people who add the test cases responsible for re-running them: circumstances change, and access to resources changes, which may make it hard for people to re-run the V&V cases.
I also think it is a good idea to have the effort be collaborative so people can add cases. My concern is about the size limits that GitHub places on repositories. Some of the mesh files are going to be massive, especially given that we want to perform grid convergence studies, and we will soon be over the size limit. I am not sure how to get around that. Suggestions would be great!
@jayantmukho If the upload size limit on GitHub is an issue, and I suspect it likely will be for these large 3D meshes, we can follow the BANC model: the researchers who conducted the corresponding V&V cases are named custodians of those test cases, with the explicit understanding that the data will be shared upon request. This way, the data resides with a particular SU2 dev group (or groups) and need not be uploaded. However, I think the config files and all other information necessary for other users to replicate the test cases must be uploaded.
Best, Beckett
@BeckettZhou I think that's a good idea. Just to make sure I understand the BANC model: for cases that have large mesh sizes, we provide configuration files, and visualization of the results (comparisons to higher fidelity data and/or to other solvers). If someone would like to run the case for themselves, they would contact the custodian and the custodian is obligated to provide the meshes.
The next step would be to start the GitHub repo. @economon Is it possible to start the repo under the su2code umbrella? Is that something you could set up, so that people can start compiling validation test cases and their results?
Cheers, Jayant
@jayantmukho : no problem at all. How about 'v&v' to sit alongside the other repos? Will be at https://github.com/su2code/v&v. It will be empty and ready for content. Do you have a directory layout in mind?
A related topic I guess. What about manufactured solutions? @economon, we already talked about this briefly. In the DG solver this is handled via ifdefs, but we could formulate this differently as well. So far there is only one manufactured solution, namely a laminar viscous solution on a unit quad for the compressible equations, but we can think of a more comprehensive suite.
@vdweide : yep, read my mind. I am most interested in this lately. I think it might be the best way to show the code is bug-free. We can start from the one you have but add others (a 3D triply periodic case, for instance). Let's make sure there's a folder for MMS in whatever structure is set up.
We can chat more about the code side of things.. we had the idea to create a class instead of ifdefs that could be used by any of the flow solvers. Let's follow up separately on that.
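For readers unfamiliar with the method of manufactured solutions, a toy sketch of the idea (nothing SU2-specific; a 1D advection-diffusion equation, with sympy doing the algebra): pick a smooth analytic field, substitute it into the governing equation, and whatever is left over becomes the source term the solver must then reproduce at its formal order of accuracy.

```python
import sympy as sp

# Toy manufactured solution for 1D advection-diffusion:
#   a * du/dx - nu * d2u/dx2 = S(x)
x = sp.symbols("x")
a, nu = sp.Rational(2), sp.Rational(1, 10)   # illustrative constants only

u_manufactured = sp.sin(2 * sp.pi * x) + sp.Rational(3, 2)

# Substitute the manufactured solution into the PDE; whatever is left over
# is the source term S(x) that must be added to the right-hand side.
source = sp.simplify(a * sp.diff(u_manufactured, x) - nu * sp.diff(u_manufactured, x, 2))
print(source)

# Running the solver with this source term should recover u_manufactured
# to the scheme's formal order of accuracy as the mesh is refined.
```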
@economon I was envisioning something similar to the TestCases folder. With v&v cases grouped according to what they are testing. Something along the lines of:
1) Inviscid simulations:
   a) 2D inviscid bump
   b) 2D oblique shock interaction
   c) ...
2) RANS simulations:
   a) Flat plate
   b) NACA0012
   c) ...
3) Unsteady simulations:
   a) Square cylinder
   b) ...
4) Turbomachinery:
   a) ...
And so on. Each of the directories would have sub-directories for different mesh sizes, with configuration files for each mesh level that have optimized parameters for best results. So for example if we are talking about the NACA0012 case, we would have something along the lines of:
a) NACA0012
   i) 113 x 33
   ii) 225 x 65
   iii) 449 x 129
   iv) ...
This way we have a family of meshes and configuration files that are built specifically for the purpose of validating the code and comparing with other solvers. It might be useful to compress meshes that are larger than a certain size (say 10 MB). We should also put a limit on the size of a single mesh that the repository can handle (say 50 MB?).
Within the home directory, the README file should list all the cases in the repository, who the custodian of the test case is (person with meshes in case the meshes are too large), and which version it was last run on.
I thought about splitting it up into Verification cases and Validation cases, but I thought it would be more informative and intuitive to split up according to the physics of the simulations. My thinking might be limited because that's how I have seen the TestCases folder organized, so any other suggestions are welcome.
I think it is imperative that this is accompanied with a section on the SU2 website that showcases just the results of the validation test cases (grid convergence studies, residual reductions etc) and links to the v&v repo appropriately. This way, if people are just inquisitive about SU2's performance, they can get a quick snapshot of the results, without the need to run the cases themselves.
I also wanted to broach the topic of convergence here. Would it be a good idea to standardize the termination criteria wherever possible? For example, in the NACA0012 case we could ensure that the residuals are reduced by 8 orders of magnitude on all meshes, or for the ONERA M6 we could use Cauchy convergence and make sure C_L is converged to within 1e-6. This would have to be flexible, since the same level of convergence might not apply to all the cases in the repository, but within a test case I think it would be good to use the same convergence criteria for the whole family of meshes.
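For what it's worth, here is a tiny sketch of what a Cauchy-style check on C_L amounts to (plain Python with a hypothetical force-coefficient history; the actual criterion is set through the SU2 config file, whose option names vary between versions):

```python
def cauchy_converged(cl_history, window=100, eps=1e-6):
    """Return True once C_L has settled: the spread of the lift coefficient
    over the last `window` iterations is below `eps`.
    `cl_history` is a list of C_L values, one per iteration (hypothetical input)."""
    if len(cl_history) < window:
        return False
    tail = cl_history[-window:]
    return max(tail) - min(tail) < eps

# Example: with these illustrative values the criterion is not yet met,
# because the spread over the last 100 iterations is still larger than 1e-6.
history = [0.28 + 1e-3 * (0.99 ** i) for i in range(500)]
print(cauchy_converged(history, window=100, eps=1e-6))
```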
Great idea! Below are some links to databases that I know of. I'd be happy to contribute.
(Some) CFD companies run very large regression test suites that can take a long time to complete. Typically, you do not put validation test cases in a regression test (and run them to convergence) because it just takes too long; a regression test should take about 30 seconds or less, and at a certain point there will be validation cases that take a couple of weeks to run. If you want to make sure that the validation tests are up to date and will run with the current version, create a regression test for each one: set up a coarse mesh, run it for only 10 iterations, and check the residuals. Having said that, it would be nice to have a general 'run' script that runs all subcases of a single validation case to construct the entire validation and checks the final results against the known/stored solutions. -nijso
- ERCOFTAC database: http://cfd.mace.manchester.ac.uk/ercoftac/index.html
- NPARC database: https://www.grc.nasa.gov/WWW/wind/valid/archive.html
- CFL3D V&V database: https://cfl3d.larc.nasa.gov/Cfl3dv6/cfl3dv6_testcases.html
- V&V database for turbulence models: https://turbmodels.larc.nasa.gov/
- Drag Prediction Workshop: https://aiaa-dpw.larc.nasa.gov/
- Some CFD-Online V&V links: https://www.cfd-online.com/Wiki/Validation_and_test_cases and https://www.cfd-online.com/Links/refs.html#validation
- Reacting flow database: https://www.sandia.gov/TNF/abstract.html
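Along the lines of nijso's 'run' script suggestion above, a bare-bones sketch of driving all mesh levels of one validation case; the directory layout, config names, and MPI launch line are placeholders to be adapted to whatever structure the repo ends up with:

```python
import subprocess
from pathlib import Path

CASE_DIR = Path("rans/naca0012")               # placeholder layout
MESH_LEVELS = ["113x33", "225x65", "449x129"]  # placeholder mesh family

def run_case(level, n_procs=4):
    """Run one mesh level of a validation case with SU2_CFD and keep the log."""
    workdir = CASE_DIR / level
    config = f"naca0012_{level}.cfg"           # hypothetical config name
    with open(workdir / "su2.log", "w") as log:
        subprocess.run(["mpirun", "-n", str(n_procs), "SU2_CFD", config],
                       cwd=workdir, stdout=log, stderr=subprocess.STDOUT, check=True)

if __name__ == "__main__":
    for level in MESH_LEVELS:
        run_case(level)
    # Post-processing (forces, grid-convergence plots, comparison with
    # reference data) would go here once all levels have finished.
```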
The new repo is live here: https://github.com/su2code/VandV (GitHub didn't like '&' in the title).
All members of the developer team on GitHub should have read/write/push access.
Should we use the branching methodology that we use for the SU2 repository? In the sense that we create a develop branch, create branches for any cases we want to add, submit pull requests that can be reviewed by the rest of the community etc.
I like the idea from @bigfooted of having some regression testing that runs the simulation for a couple of iterations and checks the residuals (identical to what we do for the TestCases). This might be redundant with the actual regression tests in SU2, but it would be an easy first check on whether a particular validation case needs to be run again. Ideally, this would be run before any major version release, as we discussed earlier.
I can also start working on creating a section on the website for the results of the validation cases. I'll eventually upload a sample validation case with corresponding results that people can model their efforts on.
I think the branching model is useful here too, but at the beginning, why don't you take a shot at an initial structure in the master just to get going, @jayantmukho?
No need to worry about everything being perfect.. we can easily blow it up and reorganize if need be.
After hibernating on this over the winter, I finally had some time to put some work into this. I am in the process of running a number of cases. So far I have results for 2 RANS cases with mesh convergence studies. These can be seen at https://github.com/su2code/VandV
The repo contains the relevant meshes and configuration files for each mesh level. It might not be necessary to have the different config files since the only changes between them are CFL numbers and mesh filenames. We can discuss this.
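If we go the single-template route, something like the sketch below could stamp out the per-level config files, changing only the mesh filename and CFL number (MESH_FILENAME and CFL_NUMBER are standard SU2 options; the template name, case name, and per-level values are made up for illustration):

```python
from pathlib import Path

# Hypothetical per-level overrides: only the mesh file and CFL number change.
LEVELS = {
    "113x33":  {"MESH_FILENAME": "naca0012_113x33.su2",  "CFL_NUMBER": "50.0"},
    "225x65":  {"MESH_FILENAME": "naca0012_225x65.su2",  "CFL_NUMBER": "25.0"},
    "449x129": {"MESH_FILENAME": "naca0012_449x129.su2", "CFL_NUMBER": "10.0"},
}

template = Path("naca0012_template.cfg").read_text()  # hypothetical template

for level, overrides in LEVELS.items():
    lines = []
    for line in template.splitlines():
        key = line.split("=")[0].strip()
        # Replace the template value if this option is overridden for the level.
        lines.append(f"{key}= {overrides[key]}" if key in overrides else line)
    Path(f"naca0012_{level}.cfg").write_text("\n".join(lines) + "\n")
```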
The folder for each case also has a README.md file that presents the test case and some of the relevant results. This displays the information nicely on the repository page and is a decent mock-up of how it would look if we decide to put it on the website.
I just wanted to share this to get initial reactions to how I have set the repo up. I haven't included any discussion about the actual results either, but that is something that can be added.
Nice start, @jayantmukho!
Let's see what folks think. Any comments or specific outputs/figures people would like to see? I think, for your first cases, the grid convergence plots are what will be expected.
Once things are more refined, we can open it up and all start helping you add more cases.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
In progress. You all may have seen that we have a new V&V tab on the website with results for the compressible FVM solver (MMS, flat plate, bump-in-channel). I have put the DSMA661 case under construction until we can properly run it with v7.
Files are still found in the V&V repo. Thanks again for getting that started, @jayantmukho!
I should have the rest of the Verification cases from the NASA TMR website done using v7 (for which I am using #790 as a proxy) by the end of this week hopefully. Will add them to the V&V repo and the V&V tab.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is still a relevant issue please comment on it to restart the discussion. Thank you for your contributions.
Still being worked on. Updating with v7 results
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is still a relevant issue please comment on it to restart the discussion. Thank you for your contributions.
FWH @BeckettZhou @EduardoMolina Is this FWH implementation finished in SU2, or is it still under development? I would like to run it now for my LES jet-noise far-field analysis. I appreciate your comments!