astroferreira opened this issue 4 years ago

Is it possible to set custom smoothing lengths without changing how YT loads the Arepo dataset? By default Arepo does not have smoothing lengths, so YT generates them for gas cells when loading the data. However, some simulations like IllustrisTNG provide a smoothing length for SPH visualizations of stellar particles in the `StellarHsml` field. It is generally used when generating images with SKIRT (like in this paper), which accepts it as a column in the stellar sources file.

Exploring the Hyperion docs I found that the `construct_octree` method accepts a sigma for a Gaussian kernel (here), but I couldn't find anything within the PD code that could be used to set a custom Hsml. I understand that it might be fundamentally wrong to treat Arepo datasets as SPH-like, but it would be nice to be able to customize this without changing YT. Any suggestions on how to accomplish it within PD?
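A minimal sketch of the kind of override being asked about, assuming a TNG-style snapshot loaded with `yt` (the file name is hypothetical, and the unit handling of `StellarHsml` may need adjustment):

```python
import yt

ds = yt.load("tng_cutout_99.hdf5")  # hypothetical IllustrisTNG cutout

# Alias the TNG StellarHsml field onto the smoothing_length slot for
# star particles so downstream machinery picks it up.
def _stellar_hsml(field, data):
    # may need data.ds.arr(..., "code_length") if yt loads the raw
    # field as dimensionless
    return data["PartType4", "StellarHsml"]

ds.add_field(
    ("PartType4", "smoothing_length"),
    function=_stellar_hsml,
    sampling_type="particle",
    units="code_length",
    force_override=True,  # in case yt already generated one
)
```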
hmm good question! what are the smoothing lengths used for? is it mainly in imaging? (like, in sph i think it has to do with the deposition into the octree)
I guess it is used for sampling the launch positions of the photons for each particle instead of having just a point source (see Smoothed Particles here). I'm not sure how it translates to the grid generation.

This is especially important for generating images of halos with few particles. It is possible to achieve something similar by convolving the result with a kernel after the RT, but it is not easy to apply a custom kernel to each source in the resulting image. This is basically the main difference I've been seeing between my SKIRT and PD results.
sorry about the delay!
so, is it the case that `skirt` is smoothing the source of emission for an individual star over the region occupied by the smoothing length of the particle (weighted by the kernel shape)? if so, i don't think this is in `hyperion`, though we could add it in `pd` pretty straightforwardly i think. what we would do is basically just increase the number of sources by a factor N and distribute them randomly within the specified smoothing length.

the downside is that this would be very costly for simulations that have a ton of particles (since each of these would represent a new source to shoot photons from). this said, we could implement a flag so that this is only done if explicitly asked for in the `parameters_master` file.
is this the sort of thing you were thinking, or am i misunderstanding the nature of the issue/question?
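A minimal sketch of that sub-source sampling as a standalone helper (hypothetical, not existing `pd` code; it draws positions uniformly within the smoothing sphere rather than weighting by a proper SPH kernel):

```python
import numpy as np

def oversample_positions(pos, hsml, n_sub, rng=None):
    """Replace each source at pos (N, 3) with n_sub sub-sources drawn
    randomly inside its smoothing length hsml (N,)."""
    rng = np.random.default_rng() if rng is None else rng
    n = pos.shape[0] * n_sub
    # isotropic directions from normalized Gaussian draws
    vec = rng.normal(size=(n, 3))
    vec /= np.linalg.norm(vec, axis=1)[:, None]
    # r = h * u**(1/3) makes the points uniform in the sphere's volume
    r = np.repeat(hsml, n_sub) * rng.random(n) ** (1.0 / 3.0)
    return np.repeat(pos, n_sub, axis=0) + r[:, None] * vec
```

Each sub-source would then carry 1/n_sub of its parent particle's luminosity so the total energy is unchanged.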
Yes, you are correct @dnarayanan. What you suggest sounds like a good solution given that this functionality is not present within Hyperion. This would work great for cutouts from EAGLE, Illustris, and IllustrisTNG, since each object has a couple of thousand particles (sometimes less than a thousand). And in the case of IllustrisTNG, the smoothing length used in SKIRT is directly available in the HDF5 files.

N should be proportional to the smoothing length, right? Maybe setting a parameter for the particle density within the smoothing-length volume. Also, would energy be conserved by only increasing the number of particles, or is it necessary to change something else? This might be something that I could try implementing when I get more comfortable with the `pd` code.
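One way to realize a "particle density within the smoothing-length volume" parameter (the name `sub_source_density` is made up for illustration):

```python
import numpy as np

def n_sub_for(hsml, sub_source_density):
    # hypothetical: scale n_sub with the kernel volume so the
    # sub-source number density is roughly constant across particles
    vol = (4.0 / 3.0) * np.pi * hsml ** 3
    return np.maximum(1, np.rint(sub_source_density * vol).astype(int))
```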
okay great! i'd be happy to help walk you through implementing this.
i think what you might want to poke around in is `powderday/source_creation/add_binned_seds`. in there, it bins the sources in terms of metallicity and age, and then adds point source collections via:
```python
source = m.add_point_source_collection()
```
a bit further down, you assign both the positions and the luminosity:
```python
pos = np.zeros([len(stars_in_bin[(wz,wa,wm)]),3])
#for i in range(len(stars_in_bin[(wz,wa,wm)])): pos[i,:] = stars_list[i].positions
for i in range(len(stars_in_bin[(wz,wa,wm)])):
    pos[i,:] = stars_list[stars_in_bin[(wz,wa,wm)][i]].positions
source.position = pos

#source spectrum
source.spectrum = (nu,fnu)

totallum += np.sum(source.luminosity)
```
i think what you'd be looking at is adjusting this to make it possible to increase the number of sources and place them randomly within the kernel. ideally, if this were a separate function that gets called (i.e. not written directly into `add_binned_seds`), then it could also be used for other types of sources (i.e. not only binned SEDs, but also directly added stars).
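For concreteness, a hedged sketch of how such a helper might slot into the snippet above, reusing the hypothetical `oversample_positions` from earlier in this thread (the `hsml` lookup and the `n_sub` flag are assumptions, not existing `pd` parameters):

```python
# hypothetical wiring inside add_binned_seds
n_sub = 10  # sub-sources per particle; imagined parameters_master option
hsml = np.array([stars_list[j].hsml for j in stars_in_bin[(wz,wa,wm)]])  # assumed attribute
pos = oversample_positions(pos, hsml, n_sub)
source.position = pos
# the per-source luminosity array would likewise be repeated n_sub
# times and divided by n_sub so the total energy is conserved
```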
Sorry for the delay and thank you for the directions @dnarayanan.
I will check that out. While I was experimenting I found a problem that can maybe be sorted out in the file you mention.

In `star_list_gen` in `powderday/SED_gen.py`, `reg["stellarages"]` shows more entries than `reg["starmasses"]` and `reg["starcoordinates"]`. Since nstars is calculated from stellarages, iterating over nstars breaks at lines 98/101 of the same file.

Up to now I have no clue why that happens, but I will try to look into `add_binned_seds`.
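A quick check along these lines would surface the mismatch (field names as in the comment above; purely illustrative, not existing `pd` code):

```python
nstars = len(reg["stellarages"])
assert len(reg["starmasses"]) == nstars and len(reg["starcoordinates"]) == nstars, \
    "star particle fields have inconsistent lengths"
```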
hey - very sorry about this! this is actually an issue we came across last week too, and i forgot to post an update about it. i'm sort of bogged down in a grant this week but hope to get to it next week. in the meantime, i've moved the issue here:

https://github.com/dnarayanan/powderday/issues/93

as well as pointed to a `yt` and `pd` hash that are both known to work. i think this is a `yt` issue because it smells a lot like an issue we had with octrees a month or two ago that has since been fixed, though i need to develop a test problem to show it off more clearly. i think fundamentally something errant is happening when doing a `ds.region` in `yt`.
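For reference, `ds.region` is `yt`'s box selector, used along these lines to cut out the galaxy before gridding (a generic illustration, not the exact `pd` call):

```python
import yt

ds = yt.load("snapshot.hdf5")  # hypothetical dataset
center = ds.domain_center
width = ds.quan(100.0, "kpc")
# box selection centered on `center`
reg = ds.region(center, center - width / 2, center + width / 2)
```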