Closed: tbody-cfs closed this 2 months ago.
@IsaacSavona Could you please check if this merge request matches your original implementation?
I've marked a few points where I had questions with TODO: please clear these.
Also, the Jupyter notebook needs to be ported to the new style. Could you do this?
I am looking into this now. The only thing I immediately see missing from the commit is the algorithm files, but I guess you have converted these to the Algorithm.register_algorithm structure.
I looked at the TODOs; everything seems good.
I will look into porting the Jupyter notebook to the new style.
Hi guys,
Having some trouble porting the new changes to my notebook. I have tried running getting_started.ipynb, to no avail: running algorithm.update_dataset(dataset) raises an error.
The full traceback is too long to warrant pasting in its entirety, but I have attached the very end below. It seems to have something to do with the key-value pairs of the impurities...
```
File ~/Documents/CFS/cfspopcon24/cfspopcon/cfspopcon/algorithm_class.py:119, in Algorithm.update_dataset(self, dataset, allow_overwrite)
    114 sorted_default_keys = ", ".join(sorted(self.default_keys))
    115 raise KeyError(
    116     f"KeyError for {self._name}: Key '{key}' not in dataset keys [{sorted_dataset_keys}] or default values [{sorted_default_keys}]"
    117 )
--> 119 result = self._function(**input_values)
    120 return xr.Dataset(result).merge(dataset, join="left", compat=("override" if allow_overwrite else "no_conflicts"))
...
File ~/Documents/CFS/cfspopcon24/cfspopcon/cfspopcon/formulas/read_atomic_data.py:304
--> 304 max_temp, min_temp, max_density, min_density = self.grid_limits[species]
    305 electron_temp = np.minimum(electron_temp, max_temp)
    306 electron_temp = np.maximum(electron_temp, min_temp)
KeyError: <AtomicSpecies.Helium: 2>
```
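For context, the failing lookup is a plain dictionary access keyed by an enum member: if the atomic data for a species was never generated, that species is missing from the grid-limits mapping and the access raises KeyError before the clamping lines ever run. Here's a minimal, self-contained sketch of that pattern (the `grid_limits` dict and `clamp_to_grid` helper are illustrative stand-ins, not cfspopcon's actual implementation):

```python
from enum import Enum

import numpy as np


class AtomicSpecies(Enum):
    Helium = 2
    Neon = 10


# Hypothetical stand-in for the limits built from the radas atomic data files:
# (max_temp, min_temp, max_density, min_density) per loaded species.
grid_limits = {
    AtomicSpecies.Neon: (100.0, 0.1, 1e21, 1e17),
}


def clamp_to_grid(species, electron_temp):
    # Raises KeyError (as in the traceback) if this species' data was never
    # generated, before any clamping happens.
    max_temp, min_temp, max_density, min_density = grid_limits[species]
    electron_temp = np.minimum(electron_temp, max_temp)  # cap at grid maximum
    electron_temp = np.maximum(electron_temp, min_temp)  # floor at grid minimum
    return electron_temp


print(clamp_to_grid(AtomicSpecies.Neon, 250.0))  # clamped to 100.0
try:
    clamp_to_grid(AtomicSpecies.Helium, 250.0)
except KeyError as err:
    print(err)  # the missing enum member, as in the traceback above
```

So the KeyError points at missing atomic data for Helium rather than a bug in the clamping itself.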
Hi Isaac,
Can you run poetry install and then poetry run radas? It looks like you're missing the atomic data files.
Cheers,
Tom
Thomas Body
Scientist, Boundary and Divertor Physics
Commonwealth Fusion Systems https://cfs.energy/
he/him/his
Hi Tom, I have already done these things...
When I do poetry run radas, it freezes at:
...
```
CalledProcessError(1, ['python3', '-m', 'numpy.f2py', '-c', '/Users/isaacsavona/Documents/CFS/cfspopcon24/cfspopcon/radas_dir/readers/fortran_file_handling.f90', '-m', 'fortran_file_handling'])
> /Users/isaacsavona/anaconda3/lib/python3.9/subprocess.py(528)run()
    527 if check and retcode:
--> 528     raise CalledProcessError(retcode, process.args,
    529         output=stdout, stderr=stderr)
```
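The error shown here is standard library behaviour rather than anything radas-specific: `subprocess.run(..., check=True)` raises `CalledProcessError` whenever the child process exits non-zero, which is what happens when the f2py compilation step fails. A minimal stdlib illustration (using a deliberately failing child process as a stand-in for the f2py call):

```python
import subprocess
import sys

# Stand-in child process that exits non-zero, like the failing f2py step.
try:
    subprocess.run(
        [sys.executable, "-c", "import sys; sys.exit(1)"],
        check=True,            # raise instead of returning a failed result
        capture_output=True,
    )
except subprocess.CalledProcessError as err:
    # err.cmd and err.returncode identify the failing command, as in the
    # CalledProcessError line of the traceback above.
    print(err.returncode)  # 1
```

The real failure is therefore inside the f2py compilation of fortran_file_handling.f90; the stderr captured on the CalledProcessError is where the underlying compiler error would appear.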
Hey @tbody-cfs! I am trying to keep up with your changes. I have updated my notebook and have two questions (there is a preamble to each, but the main questions are bolded):

1. I think I need to regenerate my SPARC_PRD_results.nc file in order to pull the changes you have made here. When I run python generate_regression_results.py within the regression results directory, it gives me the error FileNotFoundError: atomic_data_directory (/Users/isaacsavona/Documents/CFS/cfspopcon24/cfspopcon/tests/regression_results/radas_dir) does not exist, which is true, because /radas_dir exists just under /cfspopcon. **How do I properly keep my SPARC_PRD_results.nc file up to date?**
2. When I modify the dataset object which comes from the input.yaml in my notebook, I do not want to have to worry about figuring out all the dependent parameters which I need to update with algorithms (because there are many). Instead, I just want to run all the algorithms to ensure the whole dataset is updated. When I run dataset = algorithm.update_dataset(dataset) after modifying the parameters of the dataset, it throws an error about the dataset not being hashable. **How can I update the whole dataset in one go once I have made changes to it?**

Hi @IsaacSavona,
To update the regression results, you've got the correct command: poetry run python tests/regression_results/generate_regression_results.py. It might be that you're running this from the wrong directory, however: you need to run it from the top-level directory (the one containing the tests folder).
However, for this merge request you shouldn't need to update the regression results yourself.
For updating the dataset yourself, I'm not sure I understand the error that you're encountering. The following shows how you'd calculate all of the terms to run your flux consumption model.
```python
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt

from cfspopcon import Algorithm, named_options
from cfspopcon.unit_handling import ureg

algorithms = [
    "calc_minor_radius_from_inverse_aspect_ratio",
    "calc_elongation_at_psi95_from_areal_elongation",
    "calc_average_ion_temp_from_temperature_ratio",
    "calc_f_shaping_for_qstar",
    "calc_q_star_from_plasma_current",
    "calc_beta_toroidal",
    "calc_beta_poloidal",
    "calc_effective_collisionality",
    "calc_ion_density_peaking",
    "calc_electron_density_peaking",
    "calc_bootstrap_fraction",
    "calc_inductive_plasma_current",
    "calc_Spitzer_loop_resistivity",
    "calc_resistivity_trapped_enhancement",
    "calc_neoclassical_loop_resistivity",
    "calc_loop_voltage",
    "calc_cylindrical_edge_safety_factor",
    "calc_internal_inductivity",
    "calc_internal_inductance_for_cylindrical",
    "calc_external_inductance",
    "calc_vertical_field_mutual_inductance",
    "calc_invmu_0_dLedR",
    "calc_vertical_magnetic_field",
    "calc_internal_flux",
    "calc_external_flux",
    "calc_resistive_flux",
    "calc_poloidal_field_flux",
    "calc_flux_needed_from_solenoid_over_rampup",
    "calc_max_flattop_duration",
    "calc_breakdown_flux_consumption",
]

dataset = xr.Dataset(
    data_vars=dict(
        major_radius=1.85 * ureg.m,
        areal_elongation=1.75,
        triangularity_psi95=0.3,
        magnetic_field_on_axis=12.2 * ureg.T,
        plasma_current=8.7 * ureg.MA,
        inverse_aspect_ratio=0.57 / 1.85,
        elongation_ratio_areal_to_psi95=1.025,
        average_electron_density=2.5 * ureg.n20,
        average_electron_temp=9.0 * ureg.keV,
        ion_to_electron_temp_ratio=1.0,
        surface_inductance_coefficients=named_options.SurfaceInductanceCoeffs.Barr,
        total_flux_available_from_CS=35.0 * ureg.Wb,
        ejima_coefficient=0.6,
        z_effective=1.5,
        electron_density_peaking_offset=-0.1,
        ion_density_peaking_offset=-0.2,
        temperature_peaking=2.5,
        dilution=0.85,
    )
)

for algorithm in algorithms:
    alg = Algorithm.get_algorithm(algorithm)
    dataset = alg.update_dataset(dataset)

print(dataset["max_flattop_duration"])
```
Could you pull the latest changes, update the demo notebook and then push your changes. Then, we'll merge this in.
I am getting on this now
Replaces #4.
Added inductance functionality and flux functionality to POPCON, for both time-dependent and time-independent calculations.
Compared to #4, this:
- uses wraps_ufunc
- uses the Algorithm.register_algorithm method
- keeps the same test suite as the original merge request.
N.b. All physics in this merge request was implemented by I. Savona.
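The register-and-look-up pattern behind Algorithm.register_algorithm and Algorithm.get_algorithm can be sketched generically. The following is a hypothetical registry decorator (the `_REGISTRY` dict and function names here are illustrative assumptions, not cfspopcon's actual implementation):

```python
from typing import Callable, Dict

# Hypothetical module-level registry mapping algorithm names to callables.
_REGISTRY: Dict[str, Callable] = {}


def register_algorithm(func: Callable) -> Callable:
    """Register a function under its own name so it can be looked up later."""
    _REGISTRY[func.__name__] = func
    return func


def get_algorithm(name: str) -> Callable:
    """Look up a registered algorithm; raises KeyError for unknown names."""
    return _REGISTRY[name]


@register_algorithm
def calc_minor_radius_from_inverse_aspect_ratio(major_radius, inverse_aspect_ratio):
    # minor radius a = R0 * epsilon
    return major_radius * inverse_aspect_ratio


alg = get_algorithm("calc_minor_radius_from_inverse_aspect_ratio")
print(alg(1.85, 0.57 / 1.85))  # recovers the minor radius, ~0.57 m
```

Registering by string name is what lets the loop in the example above iterate over a plain list of algorithm names and fetch each implementation on demand.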