sampotter / fluxpy

Fast thermal modeling and radiosity in Python.

Time-dependent problem at de Gerlache crater #55

Open sampotter opened 2 years ago

sampotter commented 2 years ago

@steo85it

What has been done with this so far? I would like to push this along.

sampotter commented 2 years ago

Added start of example to ./examples/gerlache in a15604778b379111e80331820cbb2c0d2066de99.

This will download Mike's DEM from PGDA (here) and convert it to a .npy file which can be easily memory-mapped.

Next to do: make a Delaunay mesh of de Gerlache.

steo85it commented 2 years ago

I'm running these tests on my side, making meshes at different resolutions, etc. Here is a first plot at 500 m/px of the max values over one month (it seems that eps is not optimal for this resolution, and I had colorbars optimized for Mercury ^^). Otherwise it seems to work (also including occultations from the outer topography). [image]

I'm using https://github.com/steo85it/raster2mesh to generate the meshes; I can check whether it can be easily called from your example. In any case, it's great that you set up a more structured example/test: we should run the example at different resolutions and compare the results (which ones? fluxes? T?) against some ground truth (the non-compressed FF). Also, I should update raster2mesh to take (xmin, xmax, ymin, ymax) as input instead of selecting them manually on a plot (it's useful, but not great for repeating tests consistently).

sampotter commented 2 years ago

Gotcha. I like the look of these plots; they are interesting. How big are these meshes? And where are the scripts you're currently using? It would be helpful if you could share the code you use to call SPICE to generate sun positions.

steo85it commented 2 years ago

I run it with this script from the "applications repo" I shared some time ago. I have to figure out how to unpack this (especially the large proc_objfil.py which it calls) and include it in a structure such as the one you set up in ./examples/gerlache (which is quite useful to have as a basis).

steo85it commented 2 years ago

About the code for SPICE, these lines should be useful. These could be useful, too. (By the way, I made a few adjustments to paths, etc. today; I'll push them if useful, but it doesn't change much.) Also, if you have trouble understanding what I'm doing in that mess, feel free to ask (even though it would maybe be easier for me to unpack it).
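In case it helps as a reference, here is a minimal sketch of the kind of call involved (using spiceypy; the metakernel name, time span, frame, and aberration correction below are placeholders rather than exactly what my script uses):

    import numpy as np
    import spiceypy as spice

    # Load a metakernel listing the needed LSK/SPK/PCK/FK kernels (placeholder name).
    spice.furnsh('simple.furnsh')

    # One month of epochs at one-hour steps (placeholder start date).
    et0 = spice.str2et('2022 JAN 01 00:00:00.00')
    et = et0 + 3600.0*np.arange(30*24)

    # Sun position relative to the Moon in a lunar body-fixed frame,
    # corrected for light time and stellar aberration.
    possun = np.array([spice.spkpos('SUN', t, 'MOON_ME', 'LT+S', 'MOON')[0]
                       for t in et])

    # Unit vectors toward the sun, one per epoch, to feed the illumination step.
    sun_dirs = possun/np.linalg.norm(possun, axis=1, keepdims=True)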

steo85it commented 2 years ago

How big are these meshes?

This one was a small test: barely 2500 triangles at a resolution of 500 mpp. Here is another one at 250 mpp (87000 triangles), but I should start saving some statistics (also, there are some imperfections; not sure if they're related to eps, Embree vs. CGAL, or what). Let me check if it's easy to integrate things into your cleaner structure. [image]

Also, I noticed that I run into trouble loading point clouds from Mike's 5 mpp DEM (even before meshing my selection, at least on my laptop) ... it's probably best to use a version downsampled to 20 or 50 mpp (in any case, we won't be able to use meshes down to 5 mpp for a whole crater). I'll make one with gdal and load it.

steo85it commented 2 years ago

For a start, I'd replace the contents of convert_geotiff_to_npy.py and make_mesh.py with these lines (in one or two files, whichever is clearer). I'll do it after dinner, if that's OK for you (we can simplify by skipping the outer/inner mesh if we don't plan to focus on occultations from faraway topography - not sure that's relevant for the paper - and also take the coordinates to cut as input parameters). Then I'll be happy to have some help cleaning it up a bit. ;)

steo85it commented 2 years ago

See the updates in the gerlache branch of github/python-flux for the steps you mentioned (generate the mesh - the resolution is an argument - and convert to Cartesian). Next steps (besides cleaning): generate the FF and illuminate using tools from the other examples.

sampotter commented 2 years ago

I see the imperfections. Since there are speckles in the first plot, I guess it has to do with using Embree?

So that I understand: to make your mesh, you load Mike's DEM as a point cloud and then use a tool to convert a point cloud into a 3D mesh?

I was planning on using meshpy with a user refinement function which checks the area of each 3D triangle. This way it should be possible to use one threshold inside an ROI containing the crater, and another outside for coarse-grained occlusion.
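Roughly what I have in mind (just a sketch: the area thresholds, the in_roi test, the stereo_to_xyz helper, and the boundary arrays are placeholders, not actual code in the repo):

    import numpy as np
    import meshpy.triangle as triangle

    max_area_inner, max_area_outer = 0.4, 3.0   # km^2 -- placeholder thresholds

    def should_refine(vertices, area):
        # vertices: the 2D vertices of the candidate triangle; area: its 2D area.
        # stereo_to_xyz is a placeholder for a function that looks up the DEM
        # elevation and maps a 2D point to Cartesian coordinates.
        p0, p1, p2 = (stereo_to_xyz(np.array(v)) for v in vertices)
        area_3d = 0.5*np.linalg.norm(np.cross(p1 - p0, p2 - p0))
        centroid = np.mean(np.array(vertices), axis=0)
        in_roi = np.linalg.norm(centroid) < 25.0    # placeholder ROI test
        return bool(area_3d > (max_area_inner if in_roi else max_area_outer))

    info = triangle.MeshInfo()
    info.set_points(boundary_points)    # boundary of the region to mesh (placeholder)
    info.set_facets(boundary_facets)
    mesh = triangle.build(info, refinement_func=should_refine)
    V2, F = np.array(mesh.points), np.array(mesh.elements)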

I have to run some errands until early afternoon today (going to get ultrasound #2 of our baby :o) but will be back later.

steo85it commented 2 years ago

So that I understand: to make your mesh, you load Mike's DEM as a point cloud and then use a tool to convert a point cloud into a 3D mesh?

Almost! :) I load a version of Mike's DEM downsampled to 50 mpp (with gdal_translate ldem_87s_5mpp.tif -tr 50 50 ldem_87s_50mpp.tif), cut out the part that I need (interactively or not), convert it to a point cloud as you guessed, then use Open3D (with Poisson's method) to get the mesh at whatever resolution (>= 50 mpp, in this case).
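The Open3D step is essentially this (a sketch rather than the exact raster2mesh code; xyz is a placeholder for the (N, 3) array of DEM points, and the depth value is just an example):

    import numpy as np
    import open3d as o3d

    # xyz: placeholder for the (N, 3) array of (x, y, elevation) points taken
    # from the clipped, downsampled raster.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.estimate_normals()    # Poisson reconstruction needs normals
    pcd.orient_normals_to_align_with_direction(np.array([0., 0., 1.]))  # consistent orientation

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)         # depth controls the effective resolution of the mesh
    V = np.asarray(mesh.vertices)
    F = np.asarray(mesh.triangles)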

I was planning on using meshpy with a user refinement function which checks the area of each 3D triangle. This way it should be possible to use one threshold inside an ROI containing the crater, and another outside for coarse-grained occlusion.

I got a bit annoyed with meshpy, dmesh, etc., since they are really slow at high resolution. Open3D is very fast: 73 seconds for 6924777 faces of a 60 mpp mesh of the lsp, 9 seconds for 394033 faces of the 250 mpp mesh (I haven't checked in detail how good/regular/watertight the mesh is, but the result usually shows no artifacts or imperfections). Also, you might run into trouble if you do what you are planning for the outer mesh, because of "holes" at the transitions between the outer and inner mesh. I solved it (after a while of trying different solutions, including the one you mentioned) by meshing the whole DEM at the target resolution (since it's fast), then cutting out the inner DEM (it's in the code I linked yesterday; I didn't copy it to the example because I don't think these occlusions are really relevant for the paper).

I have to run some errands until early afternoon today (going to get ultrasound #2 of our baby :o) but will be back later.

That's super cool, enjoy and congrats! (great that they let you in, too, in these covid times) :)

steo85it commented 2 years ago

In any case, I would rather import the GTiff/raster to an xarray, e.g.,

    import xarray as xr
    # the "rasterio" engine is provided by the rioxarray package
    ds = xr.open_dataset(raster_path, engine="rasterio")

instead of several plain .npy files, to preserve useful information such as reference frames, etc., which one later needs for coordinate transformations.

nschorgh commented 2 years ago

To avoid any interpolation from occurring I simply do this to generate a mesh:

    import numpy as np
    import scipy.spatial

    points_mesh = a[:,:2]   # mesh based on (x,y)
    triangulation = scipy.spatial.Delaunay(points_mesh)
    V, F = triangulation.points, triangulation.simplices
    V = np.row_stack([V.T, a[:,2]]).T   # add z to vertex coordinates

where I might create (x,y) from a raster. It's surprisingly fast (I've used it for millions of points), avoids loss of accuracy due to interpolation, and works for all quad-tree situations. We want to triangulate an existing point cloud, not create any other type of mesh. gdal_translate is presumably good at downsampling a raster.

steo85it commented 2 years ago

Yes, that was the other option: using the gdal libraries to read and downsample the raster within the code

    from osgeo import gdal
    ds = gdal.Open('input.tif')
    ds = gdal.Translate('output.tif', ds, xRes=50, yRes=50)   # e.g. 50 m target resolution

or

    import rioxarray                       # registers the .rio accessor
    import xarray as xr
    from rasterio.enums import Resampling

    rds = xr.open_dataset(raster_path, engine="rasterio")
    downsampled = rds.rio.reproject(rds.rio.crs, resolution=(500., 500.),
                                    resampling=Resampling.cubic_spline)

and then do what @nschorgh suggests. For my applications I still think my approach is more flexible, but maybe here we can cut out the part of Mike's GTiff we need and process it in this way w/o importing many libraries. I can re-implement it like this if you prefer; I'm doing something similar for another project anyway.

sampotter commented 2 years ago

I am not convinced by either of your approaches to meshing. Sorry... :-(

Please see the script make_mesh_sfp.py in the gerlache branch. It creates a 40K-triangle mesh of the region [-25, 25] x [-25, 25] (stereo x/y) in 42 seconds on my computer. This uses Shewchuk's Triangle via MeshPy with a refinement function, should_refine, which bounds the max area (in km^2) of each 3D triangle.

This seems like a reasonable amount of time to wait for a triangle mesh since we will spend much more time on the backend actually solving the problem. Maybe it would make sense to compare the amount of time it takes to construct the mesh with the compressed FF matrix assembly time.

Some details:

1. It loads the original 5 mpp DEM as a memory-mapped file, so the whole DEM doesn't need to be loaded into memory.
2. It uses linear interpolation to look up subpixel values from the original DEM.
3. It builds the Delaunay mesh in the stereographic x/y plane.
4. For refinement, it maps each stereographic triangle to Cartesian coordinates and computes its area.
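For steps 1 and 2, the DEM access looks roughly like this (a sketch rather than the exact code in make_mesh_sfp.py; the file name, grid origin/spacing, and row/column convention are placeholders):

    import numpy as np

    # Memory-map the converted DEM so the full 5 mpp grid never has to be read into RAM.
    dem = np.load('ldem_87s_5mpp.npy', mmap_mode='r')

    def z_interp(x, y, x0, y0, h):
        # Bilinear lookup of the elevation at stereographic (x, y), assuming the
        # grid starts at (x0, y0) with spacing h and rows indexed by y.
        i, j = (y - y0)/h, (x - x0)/h
        i0, j0 = int(np.floor(i)), int(np.floor(j))
        di, dj = i - i0, j - j0
        return ((1 - di)*(1 - dj)*dem[i0, j0] + (1 - di)*dj*dem[i0, j0 + 1]
                + di*(1 - dj)*dem[i0 + 1, j0] + di*dj*dem[i0 + 1, j0 + 1])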

Here is a screenshot of the resulting mesh:

image

That's super cool, enjoy and congrats! (great that they let you in, too, in these covid times) :)

Thanks. We are very excited. :-)

steo85it commented 2 years ago

Not a problem, and whatever you like to use is clearly fine (and usually a better idea). :) Still, for what it's worth, I think steps 1 and 2 should be improved. I would definitely use the xarray package to load the map, clip it, resample it (we said that we don't want interpolation, and I guess even less a linear interpolation) and eventually reproject it. It's optimized for that, and it would make the code cleaner (it's a really basic package for mapping and such, plus it has some really cool options for analyzing and visualizing multidimensional datasets). Once you have the DEM portion you want in memory (steps 1-2), I'm fine with meshing it up the way you are doing. (I'm merging a quick example to discuss.)

sampotter commented 2 years ago

No problem! Of course if your methods end up being better, we should use those instead! I am not familiar with xarray, rasterio, or Open3D. It takes me a while to pick up new libraries, so I like to stick with what I know if I can. Sorry, some inertia on my part...

I'd like to understand your objection to using interpolation a bit better. I made some plots using a script I just pushed:

image image

The first plot is just the DEM for a small 100x100 patch. The next two plots show the first differences along each axis in centimeters. The last plot shows the histograms of the first and second differences, with the DEM's error bars (as I understand them) included. From the DEM's website, "this 87-90 deg mosaic product has a typical RMS residual of 30-55 cm with the individual fields".

When I look at these plots, my instinct is: "Yes, it's 100% fine to use linear interpolation to access subgrid values at the original grid resolution. You will incur some error, but it is very unlikely to exceed the existing DEM error, due to the regularity of the field being sampled. Using anything higher order than linear interpolation is a waste, because DEM error dominates and the surface itself is not meaningfully smooth (but it is continuous)."

On the other hand, if you first upsample the grid (say, by a factor of 10) and then linearly interpolate to access subgrid values, you will certainly incur an error much larger than the stated 30-55 cm. Of course, this will also happen at non-vertex points if you generate a very coarse mesh. But by linearly interpolating at the original 5 mpp resolution, you can at least have some confidence in the vertices, which means that if you refine, you can expect the generated triangle mesh to conform to the DEM if you go fine enough. If you linearly interpolate from a subgrid, this will not happen. If you cubically interpolate from a subgrid, this is also unlikely to happen, because the DEM itself is not smooth, and hence higher-order polynomial interpolation is a bit off the mark.

So in my mind it makes sense to just access the (very high quality!) DEM directly and use linear interpolation to look up subgrid values. This is cheap, easy, and straightforward.
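To make this concrete, here's a sketch of the kind of check I mean (the file name, the patch, the factor of 10, and the use of scipy.ndimage.zoom as a stand-in for bilinear upsampling are all just for illustration):

    import numpy as np
    from scipy.ndimage import zoom

    dem = np.load('ldem_87s_5mpp.npy', mmap_mode='r')    # placeholder file name
    patch = np.array(dem[:1000, :1000])                  # some 5 mpp patch

    # RMS of the first differences at the native resolution (same units as the
    # DEM) -- this is what the histograms above show.
    rms_diff = np.sqrt(np.mean(np.diff(patch, axis=0)**2))

    # Take every 10th sample (~50 mpp), then interpolate back to 5 mpp
    # (order=1 is bilinear).
    coarse = patch[::10, ::10]
    fine = zoom(coarse, 10, order=1)[:patch.shape[0], :patch.shape[1]]

    # RMS of the round-trip residual: compare this (after converting units to cm)
    # with the 30-55 cm RMS quoted for the DEM itself.
    rms_resid = np.sqrt(np.mean((fine - patch)**2))
    print(rms_diff, rms_resid)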

sampotter commented 2 years ago

Here are a couple more screenshots showing what I'm after:

image image

This uses a refinement function with a constant target triangle area "inside" Gerlache and a linear ramp outside of the crater. This mesh has 10k faces (20 s to make). There's probably a smarter choice of refinement function, but this gets the idea across.

This is all one triangle mesh. Only a very small number of triangles are used to extend the mesh for occlusion.

Another one with 22k faces. Took 59s to make.

image

steo85it commented 2 years ago

Wow, this looks great, thanks. And if there aren't irregularities at the boundary between the "inner" and "outer" regions, it should work well, too (I always had trouble with artifacts appearing because of imperfections at the boundaries when I tried a similar approach... but it was probably some mistake on my side)! Also, your plots do seem to show that there is no problem saving an interpolated version of the data. OK, if everything already works fine w/o importing other libraries or playing too much with reference frames, I'm also fine with it (especially since we are looking at a specific single case).

nschorgh commented 2 years ago

@sampotter Linear interpolation will dull a ridge, unless the points on the ridge are kept. For our application that would be bad indeed.

Lots of labor often goes into the construction of DEMs. Construction of this particular DEM was a paper by itself. Some spend many hours constructing DEMs from shadows and altimeter footprints. For one project I'm involved in, someone will spend 1400 hours to improve the DEMs of a handful of craters on Ceres.

So, DEM points can contain precious information. To demonstrate the computational power of a method this doesn't matter, but if one wants accurate shadows and temperatures, giving up original grid points would be an unwanted and unnecessary loss of information.

sampotter commented 2 years ago

I agree with that.

My point is that in terms of dulling a ridge, linearly interpolating the DEM at the original resolution (5 mpp) is much less likely to dull a ridge than linearly interpolating a downsampled DEM (50 mpp). I also don't think there is any reason to use higher order interpolation at the 5 mpp resolution because of noise.

If necessary a list of vertices to fix in the triangulation can be inserted using MeshPy, I think. Probably not worth including for the current paper.

sampotter commented 2 years ago

OK, I put together the start of a simple "spoofed" time-dependent example at Gerlache. Here's a movie of the steady state temperature for some fake sun positions:

https://user-images.githubusercontent.com/6035363/150584376-152e1a08-f159-4106-aedd-8eecb4e18c6d.mp4

This is generated using try_time_dependent_problem.py in the gerlache branch.

Right now I'm using fake sun positions and computing the steady state temperature for the compressed form factor matrix.

To-do list:

For the last point, we will need to come up with a list of ways we can compare. One simple thing to check would be the RMS error vs. time between the solutions computed with the true FF matrix and with the compressed FF matrix, for different z values in the subsurface heat conduction. This will let us check how much drift over time is introduced by using the compressed FF matrix.
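Something along these lines (a sketch; the array names and shapes are assumptions, e.g. temperature arrays of shape (n_times, n_faces, n_layers) from the true-FF and compressed-FF runs):

    import numpy as np

    # T_true, T_comp: placeholder names for the temperature arrays computed with
    # the true and compressed FF matrices, assumed shape (n_times, n_faces, n_layers).
    def rms_vs_time(T_true, T_comp, layer=0):
        diff = T_comp[:, :, layer] - T_true[:, :, layer]
        return np.sqrt(np.mean(diff**2, axis=1))   # one RMS value per time step

    # e.g. plot rms_vs_time(T_true, T_comp, layer=0) against time to see how much
    # drift the compressed FF matrix introduces at the surface.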

steo85it commented 2 years ago

Is it normal to get this issue when running try_time_dependent_problem.py with the FF.bin produced by make_form_factor_matrices.py?

    Traceback (most recent call last):
      File "/home/sberton2/Lavoro/code/python-flux/examples/gerlache/try_time_dependent_problem.py", line 71, in <module>
        hit = shape_model.intersect1(x, d)
      File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 133, in intersect1
        return self._intersect1(x, d)
      File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 257, in _intersect1
        return self.aabb.intersect1(x, d)
    AttributeError: 'flux.cgal.aabb.AABB' object has no attribute 'intersect1'

sampotter commented 2 years ago

The start of the Gerlache crater example now uses flux.model.ThermalModel. See try_time_dependent_problem.py.

I just picked any old physical parameters I could steal from other scripts to get this to work (mostly from examples-old/lsp_illuminate.py). I would like to massage this script into something that will generate the data we need for the paper.

I am going to ignore whether this model makes any sense for now and move on to finishing the setup of the example:

At the same time we should:

Here are some videos from layers 0 through 3. Skin depths are 0.1 mm apart. Color range is from 99 K to 128 K.

https://user-images.githubusercontent.com/6035363/151057841-0e3f8fb9-1adb-4b6e-9724-5ebe10cda5c9.mp4

https://user-images.githubusercontent.com/6035363/151057936-78ab245e-849d-433a-8ee3-4038bf2073e2.mp4

https://user-images.githubusercontent.com/6035363/151058007-939d34a6-9892-4f04-8297-513cf6d0b61c.mp4

https://user-images.githubusercontent.com/6035363/151058041-be07bfb5-3a8c-4ee5-8406-1f1a4173ea8b.mp4

sampotter commented 2 years ago

Is it normal to get this issue when running try_time_dependent_problem.py with the FF.bin produced by make_form_factor_matrices.py?

    Traceback (most recent call last):
      File "/home/sberton2/Lavoro/code/python-flux/examples/gerlache/try_time_dependent_problem.py", line 71, in <module>
        hit = shape_model.intersect1(x, d)
      File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 133, in intersect1
        return self._intersect1(x, d)
      File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 257, in _intersect1
        return self.aabb.intersect1(x, d)
    AttributeError: 'flux.cgal.aabb.AABB' object has no attribute 'intersect1'

I added a new function intersect to that class... Make sure to pull all changes and rebuild everything. Also double check that the AABB class does indeed have intersect, otherwise something got out of sync...

nschorgh commented 2 years ago

I would do the error/convergence test for de Gerlache with the equilibrium temperature - that's much faster and to the point.

steo85it commented 2 years ago

Are those at equilibrium? Probably not, and do we want to comment on getting there, in the paper?

sampotter commented 2 years ago

No, definitely not at equilibrium. Getting to equilibrium is outside the scope of the paper.

I would do the error/convergence test for de Gerlache with the equilibrium temperature - that's much faster and to the point.

I'll make a note to do these, too.

Since the claim is that we can use the compressed FF matrix to solve the time-dependent problem more quickly, I think it would be useful to get a rough idea of the error incurred by doing so. I think it should be quite small.

nschorgh commented 2 years ago

With equilibrium I simply mean εσT⁴ = B (the surface temperature is in equilibrium with the incoming radiance, no subsurface heat flux). Convergence with spatial resolution.

sampotter commented 2 years ago

Yes of course --- we are talking about two different things. You were suggesting plots made w/o subsurface flux and Stefano was just asking whether the simulation had run long enough for the temperature to reach equilibrium in the videos I attached.

steo85it commented 2 years ago

a2504bc should address using real sun positions from SPICE when this option is True (it's not yet thoroughly tested, and feel free to change things you don't like, but the results look reasonable and if vertices are given in cartesian coordinates, it should be fine)

sampotter commented 2 years ago

Here's a quick error test without subsurface heat conduction... Need to figure out what the most sensible way to do this is.

To make these plots, I sampled the temperature values on a regular grid by setting up an orthographic camera pointing at the crater and tracing a ray for each pixel. I then look up the temperature value based on which triangle was hit. I am not sure this is the best way to do this, but it was the first idea I had.


Test 1

Here is T with the max inner area set to 0.4 and the max outer area set to 3.0, compared between the compressed FF matrix with tol = 1e-2 and the true FF matrix (percentage error, i.e. 100*|\hat{T} - T|/|T|):

image

The same thing but with tol = 1e-1 instead of 1e-2:

image

In both cases, the max % error is set to 10 on the colorbar, so actually there is quite an improvement here when using a lower tolerance.


Test 2

The same as the first plot from Test 1, except that for the compressed FF matrix we instead set the inner area to 0.75, so that a different mesh is used (again, % error clamped to 10%):

image

You can see that wherever there is a ridge, there is quite a lot of error in the solution. This is understandable: on a ridge, the orientation of the triangles changes significantly, which alters the solution there dramatically. Meshes of two different finenesses only roughly agree in space. Comparing solutions on two different meshes isn't straightforward, I don't think. Even if you use a mesh taken from a regular grid with a different grid spacing, you will have the same problem.

To address this, we should probably use curved elements and higher-order basis functions (e.g., linear at least) instead of flat triangles + constant basis functions. This is a substantive amount of work which could be addressed in another paper, I think. In such a paper we would also want to decide what the correct way to "conservatively interpolate" solutions between meshes is.


I think it makes sense to include Test 1 but not Test 2. That is, we should check the agreement between the true FF matrix and compressed FF matrix for different tolerances, but doing a convergence study at this point may be premature.

What do you guys think?

sampotter commented 2 years ago

There are two separate orthogonal questions here:

  1. Whether triangles + constant basis functions + our view factor approximation (midpoint rule) is a reasonable discretization of the radiosity integral equation for the thermal modeling task at hand
  2. Whether our algorithm accurately approximates the true system matrix for the discretization in (1)

The answer to (2) is "yes" based on numerical evidence we are able to provide.

The answer to (1) is outside the scope of the paper, IMO.


Improving on (1) is a separate line of research.

In general, if the orders of accuracy of the ingredients in (1) are improved, our approach is still relevant and useful and should provide an affirmative answer to (2).

In this way, IMO, the focus of the current paper should be a proof of concept demonstrating that the answer to (2) is "yes".

sampotter commented 2 years ago

a2504bc should address using real sun positions from SPICE when this option is True (it's not yet thoroughly tested, and feel free to change things you don't like, but the results look reasonable and if vertices are given in cartesian coordinates, it should be fine)

This is perfect, thanks!

steo85it commented 2 years ago

I think it makes sense to include Test 1 but not Test 2. That is, we should check the agreement between the true FF matrix and compressed FF matrix for different tolerances, but doing a convergence study at this point may be premature.

I would agree that our goal is to assess the performance of the FF compression (1), rather than of different kinds/choices of meshing (that would be 2, I think, and I agree that's not in the scope of this paper). Also, why can't one "simply" compare the T(FF) and T(compressed FF) by getting the norm of the differences of the resulting arrays? We should have them for both as output of the model, right? I'm probably missing something.

sampotter commented 2 years ago

Also, why can't one "simply" compare the T(FF) and T(compressed FF) by getting the norm of the differences of the resulting arrays? We should have them for both as output of the model, right? I'm probably missing something.

Yes, we can do this, and we will do this. We could make a plot of this value for varying compression tolerance and mesh size (there are some plots in the Overleaf doc already along these lines); or maybe better, just a table of values, since the paper is going to become overburdened with plots.

The plots above in the case where the meshes are the same size and T is directly comparable are just for visualization purposes (unlike the third plot, where I try to compare directly the fields obtained from different meshes). I think it's helpful to get some visual sense of the error; where it is largest, whether there is some spatial correlation, etc.

Does that answer your question?

steo85it commented 2 years ago

Mmm, not really... what I meant is that we have T(FF,x,y,z,t) and T(cFF,x,y,z,t), and we can compare and plot those, with the associated coordinates, to get the same result that you plotted above (I think). What you did is fine, by the way, but since you were wondering if there were "simpler" alternatives...

sampotter commented 2 years ago

How would you go about making that plot?

steo85it commented 2 years ago

Scratch that, I've been messing up examples... different config, sorry... let me think a bit more about this. We are starting from a mesh here, and it's not obvious how to consistently get back to a raster (e.g., https://github.com/jeremybutlermaptek/mesh_to_geotiff). Ok, got the issue, pardon. On the other hand, we should be able to convert a point cloud x,y,elev --> x,y,T and then make the comparisons we need on that level. But ok, the issue is to which x,y do we associate T (vertices? centers of facets?).

steo85it commented 2 years ago

Also, are your plots in stereo projection, or what are the axes coordinates? It would be good to have those in stereo for the paper (either by keeping the mesh itself in stereo and just converting to Cartesian for the illumination computations, or by reprojecting before plotting).

sampotter commented 2 years ago

But ok, the issue is to which x,y do we associate T (vertices? centers of facets?).

This has a clear answer. In our discretization of the radiosity integral equation, we use constant basis functions. So, we think of T as being a piecewise constant function defined over the triangulation (ditto the insolation and the radiance). This is not well-defined over edges and vertices (which incident face should we take the T value from?). The way I resolve this is simple: just use whatever triangle the raytracer tells me I hit. In practice, CGAL never says, "oh, you hit an edge---which triangle do you want?". I don't know how it resolves this ambiguity and I don't really care.

Also, are your plots in stereo projection, or what are the axes coordinates?

The axes are currently Cartesian (x, y). I set up a grid of rays beneath the crater with ray directions all equal to (0, 0, 1). The origin of each ray is of the form (x + h*i, y + h*j, z), for fixed x, y, and z values. (To avoid confusion --- by "beneath" I mean outside of the moon, pointing towards the north pole. So, "beneath" as if I were looking at the moon with the north pole oriented upward, i.e. the ray direction points from the south pole to the north pole.)

Let me think about how best to do a plot like this in stereo projection.

sampotter commented 2 years ago

OK, plotting in stereo was actually pretty easy:

image

Simple way to do this:

For this plot I just made a CgalTrimeshShapeModel to look up hit points for a grid of rays. But there's probably some scipy or matplotlib convenience function that will allow you to make this same plot...
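Roughly, the idea is this (just a sketch, not the exact script --- take the constructor call and the intersect1 return convention here with a grain of salt):

    import numpy as np
    from flux.shape import CgalTrimeshShapeModel

    # V_st: the same vertices but with stereographic (x, y) and elevation as z;
    # F: the same faces, so facet indices (and hence T) carry over unchanged.
    shape_model_st = CgalTrimeshShapeModel(V_st, F)

    xgrid = np.linspace(-25, 25, 512)
    ygrid = np.linspace(-25, 25, 512)
    z0 = V_st[:, 2].min() - 1.0          # start the rays below the surface patch
    d = np.array([0.0, 0.0, 1.0])        # every ray points straight "up"

    img = np.full((ygrid.size, xgrid.size), np.nan)
    for i, y in enumerate(ygrid):
        for j, x in enumerate(xgrid):
            # assume intersect1 gives the hit face index, or a negative value on a miss
            hit = shape_model_st.intersect1(np.array([x, y, z0]), d)
            if hit is not None and hit >= 0:
                img[i, j] = T[hit]       # T: per-facet temperature array

    # plt.imshow(img, extent=(-25, 25, -25, 25)) then gives the stereo map above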

steo85it commented 2 years ago

Yes, that's the "inverse" of what I was suggesting or usually doing (I usually keep V in stereo and convert them to Cartesian just for the illumination computation, but it works all the same): one just associates different vertex coordinates with the same indices (and hence facets). Do we then get the same result if we associate T to the center P of each triangle? In that case, we can easily do what I was suggesting yesterday (producing GTiffs and then read and compare them with whatever tools, or directly have raster xyT in memory and make computations among them, differences or normed differences or whatever we want). If we always use the same mesh, we don't even need to interpolate, else we might need to. Or do you think that with different meshes (do we really need to compare among them? Or just results of different FF based on the same mesh) your approach would still be better/faster/more accurate?

sampotter commented 2 years ago

Do we then get the same result if we associate T to the center P of each triangle?

You can think of associating T to the center of each triangle if you like.

I find it helpful to think of T as a piecewise constant function defined on the facets of a piecewise linear surface in 3D.

The two can be converted back and forth in the obvious way.

In that case, we can easily do what I was suggesting yesterday (producing GTiffs and then read and compare them with whatever tools, or directly have raster xyT in memory and make computations among them, differences or normed differences or whatever we want). If we always use the same mesh, we don't even need to interpolate, else we might need to. Or do you think that with different meshes (do we really need to compare among them? Or just results of different FF based on the same mesh) your approach would still be better/faster/more accurate?

There are a few things.

I find the GeoTIFFs hard to work with because I haven't learned the format. They slow me down a lot. I am also not really sure what they bring to the table for the purposes of the current paper.

If we want to think of T as a list of point values defined at the centroid of each triangle, then we can just work with these values directly (i.e. with the numpy array that is the result of e.g. compute_steady_state_temp). Indeed, if we always use the same mesh, no interpolation is necessary. And the thing I'm doing to make the plots above isn't necessary, either.

What I'm doing above does let us compare between different meshes. I am not at all sure this is a sensible thing to do. I think we should probably just leave it alone for now.

That said, I think it is helpful to make image plots like the above to give a qualitative sense of the solution. I like this approach because it computes a separate value for each pixel in an image. If we treat T as a bunch of point values, it isn't clear to me how we make such a plot. One way to go would be to make a scatter plot, but I think this is a bit confusing and unpleasant to look at. The pictures I made above are clear: they are just the result of taking a picture of T (in its "piecewise constant function on triangle mesh in 3D" mode).

Please let me know if I'm missing something in your previous response.

steo85it commented 2 years ago

Great, thanks for the clarifications! Storing the stereo coordinates and the associated T in arrays (with more or less complexity), we should be able to plot maps such as this one (it shows T differences between epochs/cycles, but it's along the same lines):

[image]

I need to try it with the gerlache output, but it should work, I think... (the white areas are meant to be there, not an artifact)

sampotter commented 2 years ago

Can you explain the plot a bit and how it differs from the plot I'm making? I need some more context.

Why is there "shot noise" (white pixels) throughout?

steo85it commented 2 years ago

Sure, sorry. It's quite close to what you are doing (which is good), but based on those T(t,z,x,y) arrays I was mentioning, which might make it easier to compare and compute differences between different outputs and configurations. Those arrays can be exported to GTiffs (if useful for storage, avoiding pickle issues), but they can also simply be used as arrays within the script. Now, if you are satisfied with your solution, that's perfectly fine (as you pointed out, the results are similar). If not, this might be an alternative approach.

Why is there "shot noise" (white pixels) throughout?

Those are PSRs which I intentionally removed from the whole computation. They should be like this, not an artifact. :)

sampotter commented 2 years ago

OK. How do you make a raster for that plot? I'm still confused about that part.

steo85it commented 2 years ago

ok, let me try with an output of gerlache and see if/how it works, and we can better compare and discuss. :)

steo85it commented 2 years ago

Nope, it's more difficult than I expected: it would need regridding and such. A pity; it would have been convenient for I/O and computations.

sampotter commented 2 years ago

OK, no problem. We have a way of making the necessary plots for now, which is enough for the current paper. The GeoTIFF format definitely seems useful for the future.