dominic-chang / Krang.jl

Fast analytic raytracing around Kerr black holes
https://dominic-chang.com/Krang.jl/
MIT License

docs/examples suggestions #20

Closed ExpandingMan closed 2 weeks ago

ExpandingMan commented 1 month ago

I've been taking a look at this package as part of #7273 and have a few suggestions for things that I feel could benefit the documentation and examples.

For the most part, I think the source code is very clear (especially when compared to the referenced paper that gives the geodesic solutions) and not in need of a lot of extra exposition, but here are a few minor things I felt could use some improvement.

Type System

I think it would be nice to add a brief section, for example somewhere in here, describing how the various types defined in the package are used. For example, it looks a little strange at first that in some cases raytracing is done using the CoordinatePoint material, which apparently indicates that the trace should return a coordinate instead of an intensity (and it is a bit surprising that CoordinatePoint is an AbstractMaterial at all). Perhaps also a brief mention of the various cameras, specifically what they can be used to compute and that they are always located on the surface at infinity, would be useful.

Mesh Embeddings

It may be partially a result of my own ignorance concerning typical astrophysics applications, but at least from a general theoretical GR standpoint it is not at all obvious how you propose to embed imported meshes into the Kerr spacetime. From looking at the code, it seems to me that you do this by identifying spherical polar coordinates in Euclidean space with the corresponding Boyer-Lindquist coordinates, which (again, from an abstract geometric point of view) is not a particularly obvious thing to do (though I suppose it only starts to get real weird inside the horizon). I think it would be good to explain this in some detail in the docs.
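
In other words, if I'm reading the code right, the identification is just the following (the function name here is mine for illustration, not something defined in Krang):

function cartesian_to_boyer_lindquist(x, y, z)
    r = sqrt(x^2 + y^2 + z^2)  # Euclidean radius, reused as the Boyer-Lindquist radius
    θ = acos(z / r)            # polar angle, reused as Boyer-Lindquist θ
    φ = atan(y, x)             # azimuthal angle, reused as Boyer-Lindquist φ
    return (r, θ, φ)
end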

Example Docs

In some cases I think it would be nice to be a little more verbose about what exactly is happening in the examples, particularly when there are large blocks of code uninterrupted by an explanation. Some of this relates to the things I just mentioned above; in other cases I think it would be good to link to the theory sections of your own docs or to the original references (for example, when ElectronSynchrotronPowerLawPolarization is introduced it might be nice to reference that paper right in the example so anyone who's interested can immediately find out exactly what it is). Another example is in the neural network example, where you are fitting to a random field generated by the randomly initialized network: in retrospect it's obvious what's being done, but it looked strange at first glance, since it'd only really make sense to do that for pedagogical reasons.

GPU Example

I think if you are going to mention GPUs right on the landing page it would be a really good idea to show another short example that uses a GPU explicitly. From what I can tell so far, it does indeed seem like the vast majority of this package should have no trouble running in GPU kernels, but we all know that GPUs are a gigantic pain in the ass and there will inevitably be a lot of "gotchas" and "you're holding it wrong" moments trying to get something to run on a GPU. This is doubly true for anyone who is not already an experienced Julia user (and an experienced Julia GPU user, at that). I realize that an example will only be able to cover one or two very narrow use cases, but, again, I feel there should be one if GPU support is advertised.


Anyway, thanks for your work on this package!

dominic-chang commented 1 month ago

Thanks so much. These are actually great suggestions!

Type System I think it would be nice to add a brief section, for example somewhere in here, describing how the various types defined in the package are used. For example, it looks a little strange at first that in some cases raytracing is done using the CoordinatePoint material, which apparently indicates that the trace should return a coordinate instead of an intensity (and it is a bit surprising that CoordinatePoint is an AbstractMaterial at all). Perhaps also a brief mention of the various cameras, specifically what they can be used to compute and that they are always located on the surface at infinity, would be useful.

Yes, I think this is a good point, especially since some of those types carry different optimizations even when they can be used interchangeably. The CoordinatePoint material is admittedly kind of strange. I was trying to figure out how to plot coordinates while keeping to the philosophy of the code, which is to 'raytrace meshes made from materials that are painted onto geometries'. Perhaps I don't have to be that strict about this, and I could emphasize to the reader that these are internal functions that return coordinate locations.
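
For reference, that philosophy looks roughly like this in the (v0.2.0-era) examples, using the θs, rmin, and rmax variables that appear in the docs; treat Krang.Mesh here as a sketch of the geometry/material pairing rather than a stable API:

geometry = Krang.ConeGeometry(θs, (0, rmin, rmax))  # a cone at inclination θs
material = Krang.CoordinatePoint()                  # 'paint' that returns coordinates rather than intensities
mesh = Krang.Mesh(geometry, material)               # the object that gets raytraced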

Mesh Embeddings It may be partially a result of my own ignorance concerning typical astrophysics applications, but at least from a general theoretical GR standpoint it is not at all obvious how you propose to embed imported meshes into the Kerr spacetime. From looking at the code, it seems to me that you do this by identifying spherical polar coordinates in Euclidean space with the corresponding Boyer-Lindquist coordinates, which (again, from an abstract geometric point of view) is not a particularly obvious thing to do (though I suppose it only starts to get real weird inside the horizon). I think it would be good to explain this in some detail in the docs.

Yes, I would agree that the embedding method I have chosen is probably not the most natural choice, since it is coordinate dependent. The most natural choice would probably be to embed the mesh such that its Gaussian curvature is preserved. This feature was added because it was something I was often using for benchmarking and testing, and I thought it might be useful to someone else in the future. Maybe I can add some text explaining what is done and the caveats around it.

Example Docs In some cases I think it would be nice to be a little more verbose about what exactly is happening in the examples, particularly when there are large blocks of code uninterrupted by an explanation. Some of this relates to the things I just mentioned above; in other cases I think it would be good to link to the theory sections of your own docs or to the original references (for example, when ElectronSynchrotronPowerLawPolarization is introduced it might be nice to reference that paper right in the example so anyone who's interested can immediately find out exactly what it is). Another example is in the neural network example, where you are fitting to a random field generated by the randomly initialized network: in retrospect it's obvious what's being done, but it looked strange at first glance, since it'd only really make sense to do that for pedagogical reasons.

Yes, this is a great suggestion. I did want to add a more realistic neural network fitting example to the docs, but it would have required extra dependencies for handling actual data. I chose instead to keep it pedagogical. I'll highlight this.

GPU Example I think if you are going to mention GPUs right on the landing page it would be a really good idea to show another short example that uses a GPU explicitly.

I think this is a good idea but am not sure how to handle it. I never added any GPU examples because I didn't have access to a GPU machine with GitHub CI. I think I can likely still add an example, but the docs for it will not be generated dynamically. Maybe that's the best approach for now?

ExpandingMan commented 1 month ago

For the mesh embeddings, like I said, I'm more a theory person and my practical astrophysics knowledge is sketchy, so I was wondering if perhaps this choice is motivated by it? I guess it's probably a convenient way to talk about accretion disks and jets and stuff, so I figure it's more practically useful than some fancy embedding that preserves some abstract quantity. Any embedding you choose is going to be arbitrary. Again, nothing wrong with it; it's just in need of an explanation.

As for the GPU, I think it's fine if you don't test it in CI/CD if it's in an example or the docs; that's typically what people do. If you can come up with an example I can help you verify it (at least for CUDA). In particular, I think it would be great to highlight whether there are any functions here that can immediately benefit from being run on a GPUArray, and if not, what people can do instead. I'm confident most of the stuff in this package will work fine in principle, but GPUs are so fiddly and annoying that even with a fair amount of experience with GPUs in Julia I'm not even a priori sure what the right approach for this package would be.
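
To be concrete about the kind of thing I mean: a plain broadcast over GPU arrays like the following is usually the first pattern to try (render_pixel here is a stand-in for a real per-pixel raytrace, not a Krang function):

using CUDA

render_pixel(α, β) = α^2 + β^2  # stand-in for the actual per-pixel computation

αs = CuArray(collect(range(-15.0, 15.0; length = 400)))
βs = CuArray(collect(range(-15.0, 15.0; length = 400)))
img = render_pixel.(αs, βs')    # the broadcast compiles to a single GPU kernel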

dominic-chang commented 1 month ago

I think I should have addressed all of these in the most recent version of the docs.

alejandroc137 commented 1 month ago

I just started to go over the code. For now, just a couple of comments on the examples/documentation:

I ran https://dominic-chang.com/Krang.jl/v0.2.0/examples/coordinate_example and received this error when running

material = Krang.CoordinatePoint();

It returns this:

UndefVarError: `CoordinatePoint` not defined

Stacktrace:
 [1] getproperty(x::Module, f::Symbol)
   @ Base ./Base.jl:31
 [2] top-level scope
   @ In[5]:2

Is this fixed in the latest version (I installed it from Pkg mode)?

dominic-chang commented 1 month ago

Thanks for these comments

  • Since the examples make heavy use of CairoMakie, I think it should be added as a dependency.

CairoMakie is quite a heavy package and is really only used in the examples for plotting. Adding it as a dependency would increase the package pre-compilation time by quite a bit, for not much benefit, especially since different users may use different plotting packages. I do think it's a good idea to add plotting recipes for Makie and Plots, but this will likely be done through a package extension, meaning that users would still need to install those packages separately.
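
In the meantime, the expectation is that users install the plotting package alongside Krang in their own environment, e.g.:

using Pkg
Pkg.add(["Krang", "CairoMakie"])  # install both into the active environment

using Krang, CairoMakie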

  • I think it is worth adding some notebooks instead of only scripts, at least for the most basic examples. This will allow for more comments and embedded figures that give a sense of what is going on without even running the code.

Sure, this is a good idea. I can try to transcribe the examples into notebooks. I think this also implies that I should add more text explaining what's going on in the examples on the page. Please feel free to point out anything that you think is particularly obtuse.

  • The documentation has several typos (e.g., blackhole -> black hole, assymptotic-> asymptotic, etc). It may be worth putting the text on a word processor to check it.

Thanks for pointing this out. I ran the site through a spell check and I believe I've gotten everything now.

I ran https://dominic-chang.com/Krang.jl/v0.2.0/examples/coordinate_example and received this error when running material = Krang.CoordinatePoint();

I removed this material in the last version. This should be fixed in the latest docs (https://dominic-chang.com/Krang.jl/stable/examples/coordinate-example). I would also run `up` in Pkg mode to make sure you're on the latest version.
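
That is, from the Julia REPL (press ] to enter Pkg mode):

pkg> up Krang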

ExpandingMan commented 1 month ago

Just want to chime in and say that it is definitely not standard practice in Julia to add plotting packages like CairoMakie as dependencies. Indeed as Dominic says, they are quite "heavy", not to mention they can potentially add version caps in the environment. It's very easy to install as a separate package and use in an example.

alejandroc137 commented 1 month ago

Thank you both (@dominic-chang and @ExpandingMan) for the clarification. I think it is still worth clearly mentioning the dependencies all the examples need (e.g., CairoMakie or GLMakie), as that is indeed a standard practice.

sze = 400; # Will the screen always be a square of size sqrt(sze)?
rmin = Krang.horizon(metric)
rmax = 10.0;
ρmax = 15.0; # Is this something related to the observer's screen? How does this relate to sze?

Since these appear in most examples and are very common, you may want to explain them somewhere in a very general way. For example, in the "getting started" section, it would be nice to add the most general parameters and their meanings.

(From: examples/mino-time-example)

geometries = (Krang.ConeGeometry(θs, (i, rmin, rmax)) for i in 0:2)

as opposed to

emission_coordinates!(coordinates, camera, τ)

time = coordinates[1,:,:]
radius = coordinates[2,:,:]
inclination = coordinates[3,:,:]
azimuth = mod2pi.(coordinates[4,:,:])

I understand the quantities are different here, but it would be beneficial to make them more explicit or even comment on them more.

dominic-chang commented 1 month ago

Thank you both (@dominic-chang and @ExpandingMan) for the clarification. I think it is still worth clearly mentioning the dependencies all the examples need (e.g., CairoMakie or GLMakie), as that is indeed a standard practice.

I see your point, but I don't think this is standard practice. Including heavy packages as dependencies is generally highly discouraged by the community; I believe this is the reason why 'Base' packages and package extensions exist in the Julia ecosystem. Here's an example thread on the Julia discourse where it is strongly advised not to include plotting packages as dependencies. Admittedly, this was a few years ago, but I think the guidelines are still relevant.

  • I indeed think you may want to comment a little bit more on the examples. Going back to the coordinate example, even small things like commenting on these variables would be helpful (which you are currently doing only in the second example):

Thanks for that suggestion. I have added some extra comments to the examples and additional doc strings. The docstrings of the constructors should mention the bounds for which the types are defined.

  • I think it would be very useful if you also provided an explicit example of how to export the results of the raytracing and other outputs. Users may want to export, for example, the redshift factors or even the geometry. I mention this because, from the examples, it was not clear to me how to get the photon ring contributions and how the data is stored when doing so.

I'm sorry. I'm not sure I understand this point. Do you mean to include an example that saves ray-traced information to disk?

  • I could not completely understand the example "Raytracing a polygon mesh." Could you please add a few sentences to introduce that example and comment it more? What do you mean by "Raytrace the mesh embedded in the Kerr space time with additive blending"?

I've updated the example with text and added an animation that will hopefully better explain what's happening.

Changes have been made to the dev docs

alejandroc137 commented 1 month ago

I do not have more comments.

dominic-chang commented 1 month ago

Apologies for the misunderstanding. Those things I can definitely do. Your comments have been greatly appreciated 😀

dominic-chang commented 1 month ago

@alejandroc137 I'm still working on implementing all of your suggestions. Do you have any specific ideas in mind for interfacing Krang with external packages? If not, I can just write an example that exports images to some reasonably space-efficient format.

alejandroc137 commented 1 month ago

I was not thinking about images. My comment was more about the idea that, since Krang is a raytracing code, there should be a simple built-in routine to get/store the raytracing results. So I was thinking of the following, which would be a minimal extension of what appears in the coordinate_example: instead of plotting the information, it saves the output.

So, for a given geometry, say a double cone, ConeGeometry, one can get the raytracing information for each pixel in the image plane. That is, for each pixel location (α, β, in Bardeen coordinates), one gets the information of where the rays landed in that specific geometry, e.g.:

(α, β) -> (r_n, t_n, θ_n, φ_n)

where the subscript n denotes the photon ring index. With that information, people can then produce images, study geometries, describe a fluid on that geometry, get redshifts, etc. As I mentioned before, it was not very clear to me what the best way to do this is with the current code, as it seems to be different when using:

geometries = (Krang.ConeGeometry(θs, (i, rmin, rmax)) for i in 0:2)

or

emission_coordinates!(coordinates, camera, τ)
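
A minimal sketch of what I have in mind, reusing the coordinates array from the Mino time example above (DelimitedFiles and the file layout here are just for illustration):

using DelimitedFiles

time = coordinates[1, :, :]
radius = coordinates[2, :, :]
inclination = coordinates[3, :, :]
azimuth = mod2pi.(coordinates[4, :, :])

writedlm("radius.csv", radius, ',')  # one plain-text file per quantity, for example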

dominic-chang commented 1 month ago

I'm not really understanding what your intent was with the code examples you provided. If I understand you correctly, you're saying that I should, for example, save an array of coordinates to disk for the sub-images generated from raytracing some geometry?

I think, separately, I should also add an example of defining custom materials to be painted onto the geometries, and of exporting that information as well. For example, a power-law emission profile that returns pixel-based intensities and redshifts, which can then be exported to disk.
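
Roughly along these lines; the functor signature below is only a sketch of the idea, not the actual material interface:

struct PowerLawEmission{T} <: Krang.AbstractMaterial
    index::T  # power-law index of the emission profile
end

# intensity assigned to a geodesic intersection at radius r (illustrative)
(material::PowerLawEmission)(r) = r^(-material.index)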

dominic-chang commented 1 month ago
  • I think it is worth adding some notebooks instead of only scripts, at least for the most basic examples. This will allow for more comments and embedded figures that give a sense of what is going on without even running the code.

I added some notebook examples of the scripts. Some of the media files are apparently too large to be rendered in the notebook preview on GitHub, so I'm a bit worried about the extra overhead added to the git history. The information should match the markdown that's generated in the docs.