The base part of the grid_tools refactor. This takes the existing functionality and repackages it for code cleanliness and organization. Important points:
1. Compatibility
All previous tools are still available using the syntax `from Starfish.grid_tools import *`.
I've also added a helper function for downloading the PHOENIX models. I added documentation for it, but its main purpose is for use in the test code.
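To make the intended test-suite use concrete, here is a rough sketch of calling the download helper; the name `download_PHOENIX_models` and its `path`/`ranges` parameters are assumptions, so check the docs for the actual signature:

```python
# Hypothetical usage sketch -- the helper name and parameters are assumptions;
# consult the grid_tools documentation for the real signature.
from Starfish.grid_tools import download_PHOENIX_models

# Fetch only a small corner of the PHOENIX grid so the test suite has real
# FITS files to exercise the interfaces against.
download_PHOENIX_models(
    path="tests/data/phoenix",                       # download destination
    ranges=[[6000, 6200], [4.0, 4.5], [0.0, 0.0]],   # Teff, logg, [Fe/H] bounds
)
```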
2. Code is refactored into folders like so:

```
Starfish/
    grid_tools/
        __init__.py         # nothing in here (except import magic; see the sketch below)
        utils.py            # chunking/wavelength ops
        base_interfaces.py  # the RawGridInterface, HDF5 interfaces, and MasterFITS interfaces
        interfaces.py       # the PHOENIX, Kurucz, etc. interfaces
        interpolators.py    # the index and linear interpolators
        instruments.py      # the base Instrument and all subclasses
```
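For reference, the "import magic" in `__init__.py` is just star re-exports from the new submodules, so the old flat namespace keeps working. A minimal sketch (not necessarily the exact file contents):

```python
# Starfish/grid_tools/__init__.py -- minimal sketch of the re-export "magic".
# The real file may differ; the point is that everything previously importable
# from Starfish.grid_tools is re-exported from the new submodules.
from .utils import *
from .base_interfaces import *
from .interfaces import *
from .interpolators import *
from .instruments import *
```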
3. Increased consistency of grid interfaces
For the most part, I tried to make the code as plug-and-play as possible in terms of the `load_flux` method across interfaces. The new standard is `def load_flux(params, header=False, norm=True)`. I have documented this change and looked for conflicts within the API code.
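As an illustration of the plug-and-play goal, calling code should not need to care which concrete interface it holds. A rough sketch (the constructor arguments and the (Teff, logg, [Fe/H]) parameter ordering are assumptions for illustration):

```python
# Rough sketch of the intended call pattern -- constructor arguments and the
# exact parameter layout are assumptions, not the definitive API.
from Starfish.grid_tools import PHOENIXGridInterface

grid = PHOENIXGridInterface(path="libraries/raw/PHOENIX/")  # hypothetical raw-grid path
params = (6000, 4.5, 0.0)  # Teff [K], logg, [Fe/H]

flux = grid.load_flux(params)                       # normalized flux only
flux, header = grid.load_flux(params, header=True)  # also return the FITS header
raw_flux = grid.load_flux(params, norm=False)       # skip normalization
```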
4. Interpolator
I have gone through the interpolator and made it work. I have a pretty good idea of what is going on, but I can't be 100% certain my code is accurate. I was able to do some simple regression tests, checking that interpolating exactly at a grid point returns the same values as loading that grid point's flux directly (sketched below). I changed the docs to say that it works (they previously said it was broken and would not be fixed), but I added a warning along the lines of "degradation of models... no forward-propagation of errors" and recommended using the spectral emulator instead.
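The regression check described above is essentially the following; the `HDF5Interface`/`Interpolator` construction and calling conventions here are assumptions, so see the test code for the real version:

```python
import numpy as np

# Sketch of the sanity check: interpolating exactly at a grid point should
# reproduce that grid point's flux. Construction and call details are
# assumptions for illustration only.
from Starfish.grid_tools import HDF5Interface, Interpolator

grid = HDF5Interface("libraries/PHOENIX_grid.hdf5")  # hypothetical HDF5 library path
interp = Interpolator(grid)

grid_point = (6100, 4.5, 0.0)          # a point that exists in the grid
expected = grid.load_flux(grid_point)  # flux stored at the grid point
actual = interp(grid_point)            # interpolated flux at the same point

assert np.allclose(actual, expected)
```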
5. Tests
I have written unit tests for as much of the grid_tools code as I could. Some of the methods were hard to test, but I found some neat regression checks that worked well, like testing `air_to_vacuum(vacuum_to_air(wavelengths)) == wavelengths` (sketched below). The tests pass on Travis (as you should see here) and take about 100 seconds to finish, depending on Travis caching.
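That roundtrip check looks roughly like this; the import location and tolerance are assumptions:

```python
import numpy as np

# Assumed import location: the wavelength conversions are re-exported from the
# grid_tools package; the tolerance below is illustrative.
from Starfish.grid_tools import air_to_vacuum, vacuum_to_air


def test_air_vacuum_roundtrip():
    wavelengths = np.linspace(3000, 10000, 1000)  # Angstroms
    roundtrip = air_to_vacuum(vacuum_to_air(wavelengths))
    # The conversions are not exact inverses to machine precision, so compare
    # with a loose relative tolerance rather than strict equality.
    np.testing.assert_allclose(roundtrip, wavelengths, rtol=1e-5)
```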
Question
Do we want to continue shipping the two FITS classes from grid_tools? I didn't write any unit tests for them, and I don't think they really fit into the use cases we want to deliver.
Moving Forward
After this merge, we could start working on some more API methods for convolution, etc. (à la #64).