Open dlg0 opened 9 years ago
@dlg0
The nubeam component in fastran-ips was designed for steady-state solutions, i.e., it calls nubeam many times until the beam ions saturate, then applies a time average to reduce MC noise. It differs from the SWIM nubeam component, which calls nubeam once over the interval ps%t0 to ps%t1 and is designed for time-dependent simulation. So it is not a surprise that you have problems when you call the fastran nubeam component in a time-dependent TSC simulation.
And your error resulted from your plasma state having zero beam power.
@parkjm Of course there were going to be problems in the application of the fastran nubeam wrapper, but that's OK. I can work on those (or just give up and use the CSWIM nubeam wrapper).
Can you explain how / why zero beam power causes this error?
Also, how does the ips-fastran nubeam wrapper work for time-dependent fastran?
@dlg0 I thought the zero beam power caused the nubeam crash, but looking at it in more detail, the error actually seems to have happened while re-calculating the main ion density based on charge balance when the beam ion density is updated. This routine (znubeam.py) assumes that nspec_imp0 = nspec_impi, which does not appear to be the assumption made by TSC.
@dlg0 To test, nspec_imp0 was replaced by nspec_impi in the dlg branch. Not sure if it will work. Let me know when you have had a chance to test it. For time-dependent fastran, we can just use the SWIM nubeam wrapper. If the fastran nubeam wrapper is needed for some reason for a time-dependent run, set NSTEP = 1, NAVG = -1 in innubeam.
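For example, something like this could patch an existing innubeam (assuming innubeam is a Fortran-namelist-style file that f90nml can read; the group name nubeam_run here is only a guess, so use whatever group the fastran nubeam wrapper actually reads):

```python
# Sketch only: set NSTEP / NAVG in an existing innubeam file, assuming it is a
# Fortran-namelist-style file that f90nml can parse.  The group name
# 'nubeam_run' is a guess, not the real group name.
import f90nml

patch = {'nubeam_run': {'nstep': 1, 'navg': -1}}
f90nml.patch('innubeam', patch, 'innubeam.timedep')  # writes the patched copy
```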
@parkjm So I did some testing, and it looks like the problem was that nrho was not equal to nrho_nbi, while the code assumed they were equal. I've put in something to take the nbi density and interpolate it where required. It's a rather terrible hack (although somewhat similar to your cell2node function) that should later be replaced with the plasma state API interpolation routines. I put back nspec_imp0 since that seems to work fine. See the latest commit to the dlg branch of ips-fastran.
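Something like this, in spirit (the grid and variable names below are made up for the example, and the real fix should use the plasma state API routines once they are available):

```python
# Rough illustration only (not the actual znubeam.py change): map the beam ion
# density from the nrho_nbi grid onto the nrho grid before the charge-balance
# update.  Grid and variable names here are hypothetical.
import numpy as np

def map_to_rho_grid(rho, rho_nbi, nbeami_nbi):
    """Linearly interpolate a profile from the NBI rho grid onto the state rho grid."""
    return np.interp(rho, rho_nbi, nbeami_nbi)

rho = np.linspace(0.0, 1.0, 101)       # nrho = 101
rho_nbi = np.linspace(0.0, 1.0, 51)    # nrho_nbi = 51
nbeami_nbi = 1.0e18 * np.exp(-4.0 * rho_nbi**2)   # fake beam density profile
nbeami = map_to_rho_grid(rho, rho_nbi, nbeami_nbi)
```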
The one thing I did notice is that the beam_density is all zeros, and I've yet to figure out which component or input file sets that.
@parkjm Oh, I forgot: the reason to use this nubeam wrapper for the time-dependent run is so that we can use the same input files (fastran-style in files) for both the TSC and FASTRAN runs in the benchmark. Matching sets of input files by hand, i.e., one style for a TSC-based run and the in files for the FASTRAN-based run, has been a source of difficulties, so if we can remove it, we should.
Also, I really don't see any reason why each component should ever have more than one wrapper. Your wrapper is clear, pretty close to generalizable for all cases, and easy to develop on, so with the Python plasma-state API imminently available, I think moving yours towards being the standard for plug-and-play wrappers is the simplest way to standardize and reduce errors associated with new IPS runs - especially as I move towards using OMFit to generate runs. Let me know if there is any reason you think this is not a good idea.
@dlg0
Good to know that znubeam.py does not work when nrho != nrho_beam. It should be fixed. Thanks.
For sure, I like the idea of moving the components in ips-fastran towards being the standard for plug-and-play wrappers :)
Just a general comment: I've noticed that people tend to think of an IPS component as a one-to-one mapping to the physics code. But I think that a component is an abstraction of a workflow, rather than of the executing code. For example, we use the TOQ equilibrium code both in core transport simulation and in EPED. These two TOQ components are totally different even though they use the same physics code. We could make the TOQ component cover both workflows in one implementation of the IPS wrapper, but then the many if-else, try-except, ... blocks would make the wrapper nothing more than a traditional TRANSP-type integrated modeling code: hard to manage and improve. Having many similar components for one physics code, however, is also an issue, as you pointed out. So we need a kind of trade-off. Generally, I'd like to encourage multiple components for one physics code to enrich our physics capabilities.
@parkjm
Here I think I'm going to have to disagree - albeit my disagreement may still be a naive one as I'm still somewhat new to IPS wrappers. I would think that there is a separation between what a wrapper script should do and what the driver script should do (here the driver being a component that defines how the components are used within any given workflow / simulation). I see the wrapper as only doing those things required to run the component code given a plasma state, i.e., init any variables it's going to write to the plasma state, create an input file from the plasma state and any input files, and run the code. I see the driver as the logic that determines how a component is used. In your TOQ example, there would be parts of the TOQ wrapper that are independent of the workflow. I'd claim that those belong in the wrapper. The parts that are workflow dependent belong in the driver. I've argued this point several times in the past with @batchelordb .
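Roughly, something like the following is what I have in mind for a workflow-independent wrapper. This is a sketch in the spirit of the SWIM/IPS component API (Component base class with init/step/finalize and the usual services calls); the exact names and config parameters are assumptions for illustration, not an actual wrapper.

```python
# Sketch of a workflow-independent wrapper (SWIM/IPS-style component).
# NPROC, BIN_PATH, INPUT_FILES, OUTPUT_FILES would come from the config file.
from component import Component

class MyCodeWrapper(Component):
    def init(self, timeStamp=0.0):
        # Initialize any plasma-state variables this component will write to.
        self.services.stage_plasma_state()
        # ... allocate / zero the fields this code owns ...
        self.services.update_plasma_state()

    def step(self, timeStamp=0.0):
        self.services.stage_plasma_state()
        self.services.stage_input_files(self.INPUT_FILES)
        # Build the code's native input from the plasma state plus the staged
        # template input files (workflow-independent translation only).
        cwd = self.services.get_working_dir()
        task_id = self.services.launch_task(self.NPROC, cwd, self.BIN_PATH,
                                            logfile='mycode.log')
        self.services.wait_task(task_id)
        # Read the code's output back into the plasma state.
        self.services.update_plasma_state()
        self.services.stage_output_files(timeStamp, self.OUTPUT_FILES)

    def finalize(self, timeStamp=0.0):
        pass
```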
The consequence of mixing workflow logic with the wrapper scripts is that we end up with many wrapper scripts. Many wrappers have translated to many unmaintained wrappers. With one of the primary goals of the IPS being a simple modularization of the component physics codes, and with a goal of AToM being to set up IPS simulations via OMFit, I think that having each component's workflow-independent logic be the domain of the component wrapper, and how that component is used be the domain of the simulation driver, is the way to go. In the OMFit modules there is IPS configuration data, which has to be workflow independent, so a config file could be generated from that. With re-use being paramount in the OMFit modules, you can't really add workflow-dependent data to the OMFit module for a specific code, and you wouldn't want to. I'd think that would be done when you build a workflow from those components, as should be the case for the IPS.
I'm envisioning a persistent set of IPS wrappers, one for each code, and then a template set of drivers that illustrate how various workflows can be constructed. I think this is a good approach to creating a maintainable set of IPS wrappers. It is straightforward to maintain one's own set of wrappers and have one for each case. But that does little for the larger picture of the next person being able to use those wrappers and construct a new workflow with them without having to re-write their own set of wrappers.
Thoughts?
I'm envisioning a persistent set of IPS wrappers, one for each code, and then a template set of drivers that illustrate how various workflows can be constructed. I think this is a good approach to creating a maintainable set of IPS wrappers. It is straightforward to maintain one's own set of wrappers and have one for each case. But that does little for the larger picture of the next person being able to use those wrappers and construct a new workflow with them without having to re-write their own set of wrappers.
I think I'll disagree here, and I also think I'll agree with @parkjm on this. Expecting a one-to-one mapping between wrappers and physics codes will lead to unnecessarily complex wrappers, since many (most?) of the physics codes we deal with can be used to do different things based on their data and control inputs. What we have in the IPS is the notion of a PORT as the abstraction for the kind of physics that the driver interacts with. This port could in theory be implemented using different codes (that assumption may not be very realistic considering the nature of the codes we have). A corollary of this design is that different PORTS can be implemented using the same underlying physics code (I think in particular Genray has been used like that in some of Francesca's simulations).
The problem may be in the way we talk about a "genray" component, instead of "genray_XYZ" and "genray_ABC" components where each component wrapper is geared towards a special use of the underlying physics code. I see no problem with having too many wrapper scripts, as long as it is clear they're not just duplicates of one another. I believe that's what has happened historically: a script originally designed for a particular use of the underlying code is "abandoned" when that particular use is no longer part of an active simulation campaign, and another wrapper is written that appears to also wrap the same code, but now targets different functionality.
@dlg0
For additional clarification:
I see the wrapper as only doing those things required to run the component code given a plasma state, i.e., init any variables it's going to write to the plasma state, create an input file from the plasma state and any input files, and run the code.
Yes. However, we need to "run the code" in various ways. So any wrapper script already represents a specific workflow. I don't think we can avoid "mixing workflow logic with the wrapper scripts". Moreover, for the TOQ example, two TOQ components work even with totally different plasma state files.
So how does this approach lend itself to the idea of modularity? e.g., someone comes along and wants to use genray in their workflow - all the wrappers for genray are then specialized, and the user would have to modify one for their own purpose?
Expecting a one-to-one mapping between wrappers and physics codes will lead to unnecessarily complex wrappers
What I'm proposing is reducing the complexity of the wrappers, not increasing them. Complexity is added to the component script that controls how the ports interact - what I'm calling the driver (generic_driver.py for example).
many (most?) of the physics codes we deal with can be used to do different things based on their data and control inputs.
Yes, different codes do different things based on their input files. Of course. Template input files for different cases to provide the inputs not in the plasma state, and / or having the wrapper set switches in those input files to control the code functionality seems reasonable - this would certainly add some complexity to the wrapper scripts, but not much.
A corollary of this design is that different PORTS can be implemented using the same underlying physics code (I think in particular Genray has been used like that in some of Francesca's simulations).
This capability is workflow independent and would cover the use case of using, say, genray for multiple ports, i.e., just set the IC or EC switch in the config.
two TOQ components work even with totally different plasma state files.
I'm not sure I understand how two different TOQ components use totally different plasma state files? I assume you mean the components are not using the actual PPPL plasma state, but some other file format that is listed in the plasma-state files list?
I see no problem with having too many wrapper scripts, as long as it is clear they're not just duplicates of one another.
Having too many wrapper codes has already led to difficulty for new workflow developers coming along and creating new workflows by re-using existing components. Not only do they not know which component wrapper to use, very few of them have anyone to support them. I'd think that re-use of small amounts of code is far easier to support than trying to support a different wrapper for every workflow.
What I'm looking for is a separation of logic and configuration, especially looking forward to creating IPS workflows from OMFit. It seems to me that they are mixed in the present wrappers, making it difficult to re-use code. How would building IPS workflows in OMFit work when there is a mixing of logic and config?
I'd think that the TSC / Fastran benchmark presents a clear use case. The approach has been to create, by hand, two different sets of input files for different wrappers of the same component codes that ideally do the same thing (if they didn't, we would be benchmarking the two workflows rather than the codes). Surely using the same input files and the same component wrappers is a better way to do that - it certainly would have avoided many headaches? If not, what is a better way?
Another use case might be the way the Fastran steady-state workflow averages over Nubeam results. I'd claim that the averaging operation belongs in the driver script for the Fastran workflow, rather than in the nubeam wrapper.
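As a rough sketch of what I mean (the port name, config parameter, and averaging step are made up for illustration; get_port/call are the services used by generic_driver.py as I understand it):

```python
# Illustrative driver fragment: the workflow logic, including how many times
# the NB port is stepped and any time averaging of the results, lives here
# rather than in the nubeam wrapper.  'NB' and 'NB_NCALLS' are hypothetical.
from component import Component

class SteadyStateDriver(Component):
    def step(self, timeStamp=0.0):
        nb = self.services.get_port('NB')        # the nubeam wrapper, via its port
        self.services.call(nb, 'init', timeStamp)

        n_calls = int(self.services.get_config_param('NB_NCALLS'))
        for i in range(n_calls):
            self.services.call(nb, 'step', timeStamp)
            # Driver-side: read the beam quantities back from the plasma state
            # after each call, accumulate them, and write the averaged profiles
            # back once the loop is done.

        self.services.call(nb, 'finalize', timeStamp)
```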
I think Wael made the salient point here. The wrapper should be designed to present a certain physics functionality (port). Any given physics code might be used in very different ways. I think those would be different ports, and I think it would be natural for them to have different wrappers.
However, from a code maintenance perspective, I can see that there might be some advantage to minimizing the number of wrappers that wrap a given physics code. The argument here is that when things change in the underlying physics code (i.e. they change their input or output formats), you're more likely to keep the wrapper(s) working if you have to make changes in a smaller number.
In the CCA world (from which the IPS component model is derived), it was certainly possible for a given chunk of code to expose multiple ports. Is it reasonable to work out how to do this in the IPS too, with the wrappers? Does it help the maintenance problem or hurt it?
So how does this approach lend itself to the idea of modularity? i.e., someone comes along and wants to use genray for example in their workflow - all the wrappers for genray are then specialized, and a user would have to modify it to their own purpose?
I imagine the set of possible ways genray (or any other code) can be used in a simulation is (a) finite and (b) not that large. If that assumption is not true, and pretty much each workflow could use genray in a way that is truly different from all other wrappers in existence, then indeed we have a problem.
What I'm proposing is reducing the complexity of the wrappers, not increasing them. Complexity is added to the component script that controls how the ports interact - what I'm calling the driver (generic_driver.py for example).
But doing this requires the driver to know that the PORT it is using actually maps onto genray, and also requires the driver to know how to manipulate genray-specific input files to make it do what the driver wants (since not everything is in the PS). This "hard wired cross-component" knowledge was something we explicitly tried to stay away from in the IPS design. The view of the driver as encompassing ALL the logic of the simulation is something we didn't set out to do.
Yes, different codes do different things based on their input files. Of course. Template input files for different cases to provide the inputs not in the plasma state, and / or having the wrapper set switches in those input files to control the code functionality seems reasonable - this would certainly add some complexity to the wrapper scripts, but not much.
So I think there's a judgement call to be made here as to when those changes can be kept to a single wrapper, and when they are better represented by more than one. There is no single correct answer for all codes, but I think insisting on ONLY one wrapper will inevitably produce a very complex, hard-to-maintain piece of code.
Having too many wrapper codes has already led to difficulty for new workflow developers coming along and creating new workflows by re-using existing components. Not only do they not know which component wrapper to use, very few of them have anyone to support them. I'd think that re-use of small amounts of code is far easier to support than trying to support a different wrapper for every workflow.
But how much of that trouble was due to lack of documentation as to what each of the wrappers actually did (or didn't do)? I don't think switching to a single super-wrapper will solve the problem, since you'd still need to know how to set all these config flags and/or manipulate input files in the driver to make it do what you want.
What I'm looking for is a separation of logic and configuration, especially looking forward to creating IPS workflows from OMFit. It seems to me that they are mixed in the present wrappers, making it difficult to re-use code. How would building IPS workflows in OMFit work when there is a mixing of logic and config?
I don't think this separation is possible in the IPS, especially for those developing new components and/or workflows. To truly understand what's going on (if you want to) you'll always have to understand what the driver is doing, what the config file entries mean, and what the wrappers do. I actually think that using multiple wrappers that implement different PORTS (and as a result probably not called from a generic driver) makes it easier to build IPS workflows, since you're less likely to connect the driver to the wrong component.
One thing we may consider is adding support for the component to specify which PORTS it actually implements (probably in the config file), and use that information to make sure a driver doesn't connect to the wrong component. This is something we did in the CCA, but in SWIM the connection is kind of one way.
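For illustration only (this is not an existing IPS feature, and all the names are made up), the kind of check I mean could look like:

```python
# Hypothetical illustration of the suggested check: each component declares the
# PORTS it implements (e.g. via an IMPLEMENTS entry in its config section), and
# the framework or driver verifies the binding before wiring things together.
def check_port_bindings(port_map, component_specs):
    """port_map: {port_name: component_name}; component_specs: {component_name: {'IMPLEMENTS': [...]}}"""
    errors = []
    for port_name, comp_name in port_map.items():
        implements = component_specs.get(comp_name, {}).get('IMPLEMENTS', [])
        if port_name not in implements:
            errors.append("component '%s' does not declare port '%s'" % (comp_name, port_name))
    return errors

# Example: the EC port is accidentally bound to a component that only declares
# the IC port, so the check reports a mismatch.
ports = {'RF_EC': 'genray_ic'}
comps = {'genray_ic': {'IMPLEMENTS': ['RF_IC']},
         'genray_ec': {'IMPLEMENTS': ['RF_EC']}}
print(check_port_bindings(ports, comps))
```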
When can we sit down and talk about this?
DBB
I'm available this week and next. My calendar should be reasonably current. We need to include JM in this discussion, obviously, so probably an afternoon time slot.
I'm available tomorrow (except 3:00 - 4:00), and off Friday and all of next week.
I'm available this week and next, except tomorrow morning (San Diego time).
@ORNL-Fusion/ips-support-team
I'm stuck. Any pointers here would be great. I'm doing a TSC-based run, but using the fastran genray and nubeam wrappers. The error is somehow related to the end of the nubeam step in the driver, but I can't seem to find how. Here is my run location ...
/project/projectdirs/atom/users/greendl1/diem_tsc_jm_error1
and the following errors ...