Closed alwinm closed 1 year ago
I like this but have 2 questions
> `global.h` instead of grid since it probably needs to be accessed globally

Yes--the enum can be used in device code since it's essentially a macro with a namespace scope and automatic assignment of values.
I'm definitely okay with moving the enum to /global. I wasn't exactly sure where the best place would be. But currently it is being included in global.h so it's already global. My current reasoning was just that it's primarily used to interface with the grid object and finding things in global can be counter-intuitive.
I think the `global.h` vs. `grid3d.h` debate might be more a symptom of how the `Grid3D` class doesn't just do grid stuff, and the actual grid stuff is spread across `Grid3D`, `Conserved`, and `Header`. Having it where it currently is is fine, though I think we should have a discussion about a major refactor of the grid vs. integrator/simulation code (or replace the grid with an ECP library??).
I made the changes to my dust code, but I need to be given collaborator access in your fork of Cholla, Alwin, so I can push my changes to your branch! Let me know if you're able to figure that out and I'll go ahead and push the changes!
Question/idea.
With this enum method we would index fields like `[idx + enumValue * n_cells]`. Is there a way to just multiply all the enum values by `n_cells` so we don't have to include that too?
Also, the magnetic fluxes, interface states, and CT electric fields don't follow the same rules as everything else to save memory. Can there be items in the enum with duplicate values or should I just have my own method for dealing with those oddities? Maybe a magnetic enum?
Actually, I had a question for you about that, Bob. Are the values currently contained in the three magnetic field arrays the cell-averaged values, or the face-centered values? Because if it's the latter, then I think multiplying by `n_cells` is not correct, since there is an extra face, right?
They're face-centered. Multiplying by `n_cells` is still correct: since there's already an extra grid cell for other purposes, the arrays are all the same size. The staggeredness of the grid comes in when there's an occasional `+0.5*dx` floating around.
Right now I think none of the temporary fields are in the grid_enum header, right? So maybe that's something we can discuss tomorrow and deal with in a future PR?
As I understand it the grid_enum just provides a bunch of integers for us to index off of. So they should work fine for all the temporary fields since they have the same ordering.
Follow-up question: why is the grid one giant array, and not a bunch of individual arrays for each field or a class with a bunch of pointers? I.e., why isn't the `C` class our primary way of interacting with the grid? That might be better than using enums.
I think this will be easier to talk out / explain in person. Let's put it on the agenda for tomorrow.
> Is there a way to just multiply all the enum values by `n_cells` so we don't have to include that too?
As I understand it, `n_cells` is set at run time while enums are set at compile time and are const, which is helpful in multiple ways, so I don't know how to accomplish that. It's unclear to me whether accessing a run-time-computed variable would hurt performance or be a pain to pass to the kernel.
> Also, the magnetic fluxes, interface states, and CT electric fields don't follow the same rules as everything else to save memory. Can there be items in the enum with duplicate values or should I just have my own method for dealing with those oddities? Maybe a magnetic enum?
If those are not held in `C.host`/`C.device`, it could make sense for them to live in a magnetic enum. There can be items in the enum with duplicate values, which you could set like this:

```cpp
a,
b,
b2 = b,
c,
c2 = c,
d,
```
Or, if it's more readable for you:

```cpp
a,
b,
c,
d,
// Aliases
b2 = b,
c2 = c,
```
My understanding is that the giant array ensures memory is allocated in a single contiguous block.
Provides an enum object to help with accessing different combinations of scalar fields and with adding new scalar/non-scalar fields in the future. The enum automatically calculates NSCALARS based on what is enrolled, and `grid_enum::nfields` should be the number of fields in the grid.

The short-term goal is to allow scalars to be numbered more flexibly.
`#ifdef SCALAR` will now activate scalars in general. In order to use what used to be `scalar0`, use `#ifdef BASIC_SCALAR`.
Currently many areas of the code are consistent with this approach but NOT explicitly moved over. E.g., accessing `momentum_x` is currently written as `C.device[1*n_cells]` but would be the same as `C.device[n_cells*grid_enum::momentum_x]`. The same holds for magnetic fields, gas energy, etc. Currently, `nfields` throughout the code is NOT using `grid_enum::nfields`.
@helenarichie please bring grid_enum into DUST, and complain to me if you are unable to commit directly to the branch or to make a pull request into https://github.com/alwinm/cholla/tree/enum-dev. We should not merge this PR until we make sure DUST is consistent; the other scalars have been converted already.