prisae closed this issue 4 years ago
I'd skip the layered-earth model; we have all validated our codes against analytic solutions in previous papers. This probably saves a page, which would be beneficial given the concerns about the scope of this paper.
Meshing the salt model will be lots of fun, but I'll figure something out. What about the others?
@Octavio - can you handle land-based/airborne setups easily?
I would also skip the first simple case.
Yes, it would be very interesting to solve this problem and compare the results. Go for it!
@Moouuzi Yes, they can be easily handled. I have already done some experiments on it.
The reason I thought about another model is discretization. A model like the SEG/EAGE one mentioned above, which comes in rectangular blocks, is obviously easy to model for finite-difference codes. However, to also show the full power of finite-element codes it would probably be good to have a complex model, where FE codes can mesh the model nicely but the FD codes have to do some averaging to get the job done. Say some dipping horizons or similar.
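To illustrate the kind of averaging an FD code would need for such a non-blocky model, here is a minimal sketch (all geometry and conductivity values below are made up) of a volume-weighted average over a single 2-D cell cut by a dipping interface:

```python
import numpy as np

# Hypothetical example: a dipping interface z = z0 + slope*x separates
# sigma_top from sigma_bot (z positive down). FD codes on rectangular
# cells must average the two conductivities in any cell the interface cuts.
def averaged_sigma(x0, x1, z0_cell, z1_cell, sigma_top, sigma_bot,
                   z0=0.0, slope=0.1, n=50):
    """Volume-weighted arithmetic average over one 2-D cell via subsampling."""
    xs = np.linspace(x0, x1, n)
    zs = np.linspace(z0_cell, z1_cell, n)
    X, Z = np.meshgrid(xs, zs)
    below = Z > z0 + slope * X       # True where the sample lies below the interface
    frac = below.mean()              # volume fraction occupied by sigma_bot
    return frac * sigma_bot + (1 - frac) * sigma_top

# A cell entirely above the interface simply keeps sigma_top
print(averaged_sigma(0, 10, -5, -1, 1.0, 0.01))  # -> 1.0
```

A harmonic or geometric mean might be more appropriate depending on the field component; this sketch only shows the geometric bookkeeping, not the physically best averaging rule.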
@prisae I agree with your last comment. So, yes I vote to include a model like SEG/EAGE.
I agree with Lindsey that three models will take a lot of, or rather too much, space. Considering all the comments, what about:
Anyway, I tried to get an overview of this salt model, but so far I haven't figured out which of the files in the zip directory are relevant. I assume the files contain something like a regular grid with an ordered list of markers and conductivities? As the last time I worked with SEG-Y files was five years ago, I could use some help getting a first overview of the model.
@Dieter: Perhaps you can export the grid/conductivities to numpy arrays, or maybe you already have a VTK file available?
I did some work together with Bane from PyVista, see here for an interactive display of the SEG/EAGE model: https://nbviewer.jupyter.org/github/pyvista/show-room/blob/master/seg-eage-3d-salt-model.ipynb
Thank you Dieter. I will check the data.
Input from Kerry Key:
One thing that the community could really use is more test models for various scenarios, and having them be easily accessible. A few years ago Alan Jones and others led a 3D modeling workshop in Dublin for MT, and they created a suite of simple 3D block models to be used for code comparisons. Even though they were quite simple 3D models, it was a good exercise to see how the various codes worked. And of course how each code works changes over time as improvements and new features are added, so one of the lasting things from the paper could be the establishment of some really good test models and some baseline responses for them. The idea being these could be community test models that are used as a baseline for future code comparisons for, say, the next decade or more. I would encourage thinking about including relatively simple models that play to the strengths and weaknesses of the various codes (e.g., blocky models good for fast structured-grid codes, and slanted structure or topography that is better handled by unstructured meshes, etc.). The simple models are good for assuring that the codes and meshing discretization are pretty accurate. Super complicated industry-style models would be interesting too, and will bring up discretization, accuracy, and efficiency issues.
I really suffered recently from the accuracy of 3D modelling. And the problem is that we can only check results by comparing them to other 1D/2D/3D modellers. With 2D/3D modellers we can only guess which one is better; with a 1D modeller we know. So maybe it would be good to shift the focus a bit towards benchmarking. Because, let's be honest, there are dozens of 3D EM/CSEM modellers out there; judging from publications, almost every research group has one. So we should focus on open/Python, and on reproducibility.
As such I vote again for including a 1D model, but not only that: a 1D model as the background for a block model. That way you can compare either just the background or the whole model. We don't have to put the 1D result in the paper; it's enough if we provide it online. I created such a model by taking the "Dublin Test Model 1" from the MT benchmark paper (Miensopust et al.) and adjusting the distances to move it from an MT problem to a CSEM problem.
=> The model is in the notebooks directory, have a look.
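The background-plus-block idea can be sketched like this (all dimensions and conductivities below are illustrative placeholders, not the actual Dublin-derived model in the notebooks directory):

```python
import numpy as np

# Illustrative sketch: a 1-D layered background with an embedded 3-D block.
nx, ny, nz = 40, 40, 20
z = np.linspace(0, 4000, nz)             # depth of cell centres (m), made up

# 1-D background: 0.3 S/m down to 1 km, 0.01 S/m below (made-up values)
sigma_1d = np.where(z < 1000, 0.3, 0.01)

# Background-only model: broadcast the 1-D profile over the full grid,
# so every code can first be checked against a semi-analytic 1-D solution.
background = np.broadcast_to(sigma_1d, (nx, ny, nz))

# Full model: same background plus a resistive block (indices arbitrary)
model = background.copy()
model[15:25, 15:25, 8:12] = 0.001
```

The point is that `background` has a cheap, trustworthy 1-D reference solution, while `model` exercises the full 3-D machinery of each code.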
Maybe we can replace the SEG/EAGE salt model with this recently open-sourced Marlim R3D CSEM model; what do you think?
Here we'll need ideas from custEM/PETGEM; I am sure you have a nice model with dipping layers and topography (land CSEM?) that Lindsey and I will have to squeeze into blocks.
Just a few ideas.
In general, I believe that the approach and order of models are adequate. My comments are:
I agree to include the Dublin Test model. In this way, we validate all codes for a simple case. I have reviewed the model and it seems a good option.
Yes, this model is interesting and we shouldn't have much trouble reproducing it. Comparing our codes with industrial applications will enrich the paper.
This is a complicated issue (same as the model). We currently have a small number of models, and it seems to me that none would meet the proposed requirements. However, we could define a synthetic case, although then we would not have a reference to compare the results against, which could be a weakness if we do not justify it correctly. I will talk with some colleagues and maybe we can propose a realistic model for which we have a solution (fingers crossed).
More ideas?
first model is fine!
Maybe Octavio has other solutions for meshing such models, but I don't know if the MARLIM model is better suited than the SEG/EAGE salt model for our tetrahedral codes. Following the links, I haven't found any information about the mesh/conductivity distribution, layer boundaries, etc. needed for our own mesh generation. Anyway, same issue as for the salt model: generating a realistic tetrahedral mesh comparable to the "gridded" meshes is barely feasible.
Realizing good tetrahedral meshes for such complex scenarios with several 3D intersections requires, at least for me, too much time, as I'm not familiar with professional CAD and meshing software so far. The only "simple" option I see is discretizing the domain in tetrahedra of comparable size and using nearest-neighbour interpolation. At greater depths this effectively averages the conductivities, although close to the surface such an approach is not always applicable, as some of our modellings showed.
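A minimal sketch of that nearest-neighbour mapping (brute force, with purely illustrative names and shapes; for real model sizes one would use a KD-tree, e.g. scipy.spatial.cKDTree, instead of the dense distance matrix):

```python
import numpy as np

# Assign each tetrahedron the conductivity of the closest cell centre
# of the original rectilinear model (nearest-neighbour interpolation).
def map_to_tets(grid_centres, grid_sigma, tet_centroids):
    """grid_centres: (N, 3); grid_sigma: (N,); tet_centroids: (M, 3)."""
    # Squared distances from every tet centroid to every grid centre
    d2 = ((tet_centroids[:, None, :] - grid_centres[None, :, :]) ** 2).sum(-1)
    return grid_sigma[d2.argmin(axis=1)]
```

With tetrahedra of a size comparable to the grid cells this behaves like the averaging described above at depth, while near the surface, where the fields vary quickly, it can misplace conductivity contrasts.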
I'd be glad if we decide on one model or the other soon. We/I can only start with the tough exercise of mesh design once consistent information about the geometry is available to all of us. Running the simulations afterwards is probably the easy exercise for all of us.
As an alternative, do you know about other mining areas and models which are of global interest or do you prefer other scenarios?
I agree with Raphael. First, we must define the geometry of the model to know if we are all capable of generating the input for our codes. For complex scenarios, I use the technique that Raphael mentions (nearest interpolation), but its effectiveness will depend on the challenge of the model.
Unfortunately, I have no knowledge about mining models.
How do you guys (@Moouuzi , @ocastilloreyes) then usually create a model? What is your normal workflow?
@prisae, in general terms, my workflow is as follows:
Thus, the input data that I need are:
I remember MARE2DEM has some automatic meshing - is there something similar in 3D?
I agree; I got quite desperate lately with the meshing part (on tensor meshes). The 3D results depend so much on it. So much that I have come to believe that most published 3D CSEM results more likely have a relative error of 3-10% than the 1% or lower you would aim for...
But I think this should also be a focus. There are so many 3D CSEM codes in the world; look at all the publications, many research groups have some sort of 3D CSEM code at hand, because the solver itself is, in the end, not too difficult. How you actually create a model and discretize it is a much more difficult task, and the reason why we need benchmarks...
I agree, too. In my case, the meshing strategy depends on the frequency, the conductivity values, and the polynomial basis order to be used (my FEM code supports polynomial orders 1 to 6). And yes, my modelling results have verified that the quality of the mesh positively/negatively impacts the electric field results.
I think that we would have to define the basic inputs for each code and from that, try to build a "mini" benchmark that is compatible with each tool. Although I am afraid that this is not possible ... other ideas?
What's the final decision about models? Are we done with
or do we want to model something else (Figure 5 of Marlim, something non-marine :-))? I think, for the paper, these examples with proper code and model descriptions are already enough content, and have already consumed enough time.
From my point of view it is enough, at least for now. Comparing results is not the only point of the paper, IMHO.
My opinion: for now it is enough. I shall draft a paper and pass it around one by one (fingers crossed, by end of February). After that we will see its length and can still decide whether or not we need another example. But I guess it is already going to be lengthy as it is.
Does anyone have a strong, differing opinion (@Moouuzi @ocastilloreyes @lheagy)?
Thanks @Moouuzi for all the help with the FE models.
(At some point [after the paper is written], we should streamline all data formats; I suspect we'll put them into NetCDF4 (HDF5) format, probably in an EMGS-style file format, given they are CSEM-like data. I'll look into that, unless one of you already has experience with it.)
I think that the content is enough to meet the main objective of the paper, especially if we can highlight the points mentioned by @prisae.
Next week I will be at a workshop and project meeting in Mexico. Still, I will try to catch up with the models, especially with Marlim3D (which scared me :-)).
Good holiday Luis
Which numerical models should we show?
I suggest:
(2) will probably favour regular meshes, as the model is defined on a regular mesh. To balance that, (3) should be a complex model, where the regular meshes will have to do some interpolation.
I don't think this paper has to show the latest and greatest of modelling, but rather things which ALL modellers can do, while mentioning where some modellers can do more in one direction or another.
For the above reason I would also limit it to the frequency-domain at the moment, but mention which codes have time-domain built in as well etc.
Raphael: