Closed wtbarnes closed 5 years ago
Looking briefly through this notebook I'm pretty much amazed. I'd be curious to know what performance was like (I suspect that there are areas where we could improve).
That notebook is very thorough and looks lovely and of course rigorous.
From the Intake point of view, it's not immediately obvious to me what you would want to do here. The input is FITS, and I have published an intake FITS driver (in intake-astro), which produces arrays (with WCS as an attribute) or tables. However, the data structure is more like Xarrays, with multiple variables and reference coordinates - except that the coordinate system is more complex than a regular grid. Furthermore, the code uses sunpy classes and functionality rather than typical (x)array stuff.
In short, you certainly could write a FITS->sunpy classes driver that contains some of the code in the notebook, and that might be very convenient for several datasets along these lines, to be able to reference them in a public catalogue. You could also write a FITS->xarray driver that may be more generally useful outside of sun stuff, but then you also need to keep some ingest code separate. You could also have a combination of the two: a driver for FITS->(x)array and a second driver that takes those datasets as inputs and creates the more specialised data containers from them.
As an aside, we talked about the possibility of more general coordinate systems in Xarray, but I don't know where that discussion got to. Of course, there is always a trade-off between the wish to have shared code and data representation frameworks, and the flexibility of more domain-specialised classes.
Wow, this is awesome! Thanks @wtbarnes! I will definitely have something cool to demo at the heliophysics workshop. I especially appreciate the detailed comments and explanations.
A couple of points in terms of the notebook readability / usability. Note that these are just intended as nerdy tech-talk, not a criticism of your excellent contribution.
With intake we can do

ds = intake.cat.ECCOv4r3.to_dask()

and obtain a full xarray dataset with many different variables / coordinates (see this example notebook).

Why do you rechunk in your average routine? It seems like this would slow things down. 100x100 chunks are very small. Why not just use the full image? It still has to be read as a full image, so as far as I understand, you are not saving any I/O here (which usually is the main bottleneck).

cc also @SimonKrughoff
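To illustrate the chunking point above: rechunking an image stack into small tiles multiplies the task count without reducing I/O, since each underlying file must still be read whole. A minimal sketch with dask.array (the shapes here are made up for illustration, not taken from the notebook):

```python
import numpy as np
import dask.array as da

# Simulate a stack of 10 images, one 400x400 image per "file";
# the natural chunking is one chunk per image.
stack = da.from_array(np.ones((10, 400, 400)), chunks=(1, 400, 400))

# Rechunking to small 100x100 tiles turns each image chunk into 16 tiles,
# adding graph tasks while each source image is still read in full.
tiled = stack.rechunk((1, 100, 100))

# stack has 10 chunks; tiled has 10 * 4 * 4 = 160 chunks.
```

The result of any reduction is of course unchanged; only the task graph grows.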
@martindurant Thanks for all of the suggestions! Xarray would really be a game changer for dealing with solar data. As I think this notebook communicates, dealing with data one FITS file at a time is VERY limiting. Perhaps there should be some functionality in SunPy (or a SunPy affiliated package) for mapping many FITS files to an xarray dataset that would still have all of the functionality that the SunPy Map object has. Presumably this could leverage all of the work you've already done on intake-astro. There is the ndcube package, developed by @DanRyanIrish, which aims to solve a similar problem. However, I don't know how easily this could be integrated with the Dask/xarray ecosystem.
The following comment is admittedly very biased, since I am an xarray core developer. So take it with a grain of salt.
What I have observed over the years is that multidimensional labelled arrays + metadata are ubiquitous in physical sciences. They have been implemented over and over by different domain-specific packages. Refactoring those packages around xarray would allow those package developers to focus more on their domain-specific problems. Plus xarray comes with full dask integration for free.
A great example of how a more domain specific package leverages xarray under the hood can be found in the satpy docs.
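As a toy illustration of that pattern (not satpy's actual API), a domain-specific class can keep its data in an xarray.DataArray so that labelled indexing, alignment, and dask integration come for free, while the domain logic lives in the wrapper:

```python
import numpy as np
import xarray as xr

class SolarImage:
    """Hypothetical domain wrapper: the data lives in an xarray.DataArray,
    domain-specific methods live here. Names and methods are illustrative."""

    def __init__(self, data, wavelength):
        self.data = xr.DataArray(
            np.asarray(data),
            dims=("y", "x"),
            attrs={"wavelength": wavelength},  # domain metadata rides along
        )

    def peak_normalized(self):
        # Scale the image so its maximum is 1 (assumes positive data).
        return self.data / self.data.max()
```

The wrapper stays small because slicing, broadcasting, and (if the data were a dask array) lazy evaluation are all inherited from xarray.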
Refactoring is not trivial of course, but it can lead to big payoffs.
@rabernat I hope this is helpful to your talk. I'm happy to clarify anything that is not clear! I suspect many of the solar people in the audience will have at least heard of this timelag technique as it has become fairly popular in the field in the last few years. Of course, the general approach is useful for other analyses as well!
To address your other points:
It is possible to construct Map objects from a Dask array, but this is only available in the dev version currently (it will be available in the 1.0 release in a few months or so). Presumably, the SunPy master branch could just be pip-installed and this could be added to a conda env file. Where exactly would this environment be defined?

@mrocklin Thanks! Dask makes everything so easy! What specifically do you mean by performance?
Ideally, SunPy (or ndcube or some other package) would have a standard way to deal with mapping multiple FITS files to a single array object in a way that allows for easy integration with Dask.
Or, perhaps even more ideal, sunpy can piggy-back on existing interfaces in xarray. How feasible that is, I don't really know, but I think it's a lofty goal.
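One common pattern for mapping many files to a single array (a sketch only; read_fits_data is a stand-in for a real reader such as astropy.io.fits.getdata, and the paths and shapes are made up) is to lazily stack per-file arrays with dask.delayed:

```python
import dask
import dask.array as da
import numpy as np

def read_fits_data(path):
    # Stand-in for a real FITS reader; returns one image of known shape.
    return np.zeros((4, 4))

paths = ["a.fits", "b.fits", "c.fits"]  # hypothetical file list

# Wrap each file read in a delayed call, declaring shape/dtype up front
# so dask can build the array without touching the files.
arrays = [
    da.from_delayed(dask.delayed(read_fits_data)(p), shape=(4, 4), dtype=float)
    for p in paths
]
cube = da.stack(arrays)  # lazy (n_files, ny, nx) array, one chunk per file
```

From there the stacked array could be wrapped in an xarray.DataArray with time and pixel coordinates, which is roughly the integration being discussed.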
(Very quick response to this)
The main blocker the last time sunpy evaluated xarray was the need for an object that calculates the two spatial coordinates together. This is primarily because utilising existing libraries for the coupled map-projection spatial dimensions is practically the only way to go.

It's been a few years since I looked at this, but conversations with people at SciPy suggested that this might be possible given enough work.
So is this PR good to go? If so, someone from @pangeo-data/pangeo-astro should merge.
@rabernat I'm happy to merge this, but I'm not sure how this gets pulled into the pangeo-astro deploy. Is that automatic?
That depends on how you have your cluster set up. I don't know the answer.
If @Cadair is happy with this then I am as well.
@rabernat I'm happy to make any needed changes or clarifications before your talk/demo as well.
This lgtm from a quick read. I don't have the time to go over it in detail.
I'm happy to make any needed changes or clarifications before your talk/demo as well.
Right now the notebook is very long and complex for a demo, mostly because of the package management and the classes you define. A totally optional thing you could do to streamline it would be to move the class / function definitions into a utils.py module that lives right next to the notebook, and then import them as needed. This is purely to reduce the length / complexity.

@rabernat OK I'll try to make those fixes ASAP. The talk is Wednesday correct?
Correct.
p.s. THANK YOU so much for doing this!
OK, so maybe a question for @NicWayand or @dsludwig. Apologies, but I haven't had a chance to wrap my head around how all this hangs together yet.
If a CircleCI job is set up for astro.pangeo.io (per @dsludwig's guide), then you need to create a new PR to https://github.com/pangeo-data/astro.pangeo.io-deploy (staging branch likely), and a new image will automatically be created that astro.pangeo.io will load.
Yes. That all works. What I didn't know was if these examples get pulled into that deployment.
Ah ok, sorry I don't know that.
These examples are not setup to be pulled into that deployment. I can look into doing that, or you could copy this example into the image directory here: https://github.com/pangeo-data/astro.pangeo.io-deploy/tree/staging/deployments/astro.pangeo.io/image That's where the current examples are loaded into the astro.pangeo.io image.
OK. Thanks a lot. I guess I'd also need to diff the environments to see if any new packages need to be added.
Where is the best place to specify the packages to be installed when astro.pangeo.io spins up? Or is it best to just create a new environment?
You can specify new packages here: https://github.com/pangeo-data/astro.pangeo.io-deploy/blob/staging/deployments/astro.pangeo.io/image/binder/environment.yml
In the short term, I would focus on making sure the binder works, rather than astro.pangeo.io.
I will have to fork this in order to get it ready for tomorrow.
Thanks @dsludwig !
EDIT: looks like that is already there
@rabernat Ok so I should just be able to add
- git+https://github.com/sunpy/sunpy
under the pip section of pangeo-astro-examples/binder/environment.yml?
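For reference, that edit would look something like the following fragment of a binder environment.yml (a sketch; the surrounding keys and other dependencies are assumed, not copied from the actual file):

```yaml
dependencies:
  - pip
  - pip:
      - git+https://github.com/sunpy/sunpy
```

The git URL under the pip block makes conda install the development version of sunpy straight from GitHub when the environment is built.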
I've now pulled out the AIA cube code from the notebook into a separate file, util.py. However, when I try to read in the metadata from many files using client.map, I get a "No module named 'util'" exception.

openfiles = dask.bytes.open_files('gcs://pangeo-data/SDO_AIA_Images/diffrot/171/*.fits')
futures = client.map(get_header, openfiles)

where get_header is a function that I've imported into the notebook from util.py.
I don't know whether this would be a problem when launching with binder as well.
The cwd of the dask workers is not necessarily the same as your ipython kernel's; it would be best to set a PYTHONPATH or otherwise "install" the file. You should be able to check the workers' working directory with client.run(os.getcwd).
Hmm, client.run(os.getcwd) just shows the home directory, but even after moving util.py into home I still get a module not found error.
You would need util in both maybe, or explicitly set the working directory? Otherwise, I don't have any other immediate suggestion for you :|
@rabernat I'm having some difficulties with pulling out the data structures into a separate file and properly resolving the paths on the workers. In the interest of time, I'm just including all of the code inline. This could either be placed at the top or the bottom of the notebook to "hide" it during the demo. Sorry this is not ideal.
No worries. This is great as is.
I went through this as is and was able to get it working by putting the class / function definitions back into the notebook. What @martindurant was missing is the fact that the dask workers can't see your home directory. There is no shared filesystem here. So the only way to get the module to the workers is to actually install it into their environment using pip / conda. Since there is no package for your util module, this is not possible.
@wtbarnes were you planning to make a new commit? I screwed up and merged this before you actually updated. Any chance you could push the corrected, working notebook?
@rabernat OK. At some point, this functionality will hopefully live in an installable package.
I can push a fix now. Sorry this is coming so late! I'm at another meeting right now and on mountain time!
@rabernat , of course! Must have been late in the day...
Regarding pangeo-data/astro.pangeo.io-deploy#15
@Cadair can you take a look at this and let me know if you have any suggestions?
@rabernat @martindurant I'd also be interested to see how any of the data viz could be improved with holoviews or the data import simplified with intake.