lonanni / iMaNGA

A repository of the codes used in Nanni et al. (2022) & Nanni et al. (2023) to produce iMaNGA galaxies

Some possible errors in the TNG50_GalaxyFiles.ipynb notebook and the variable that corresponds to the synthetic data cube #1

Open akshattripathi opened 5 months ago

akshattripathi commented 5 months ago

Hello,

I'm running the code in the iMaStar_Example.ipynb notebook (which, as far as I understand, takes a TNG50 galaxy, identified by its snapshot and subhalo numbers, and produces a synthetic data cube). After changing a few things (unzipping the Mappings.tar.gz file and putting it in the main directory, changing the version number from 0.2 to 1.1 in the iMaStar_functions.py file, putting the MaStar_SSP_v1.1.fits file in the correct directory, and upgrading ppxf on my computer), I was able to run the code successfully. I have a few questions and I would greatly appreciate it if you could answer them for me.

It seems that the file snap[snap_num]gal[gal_num].dat is necessary to run the iMaStar_Example.ipynb notebook all the way through successfully. This file appears to be available for only one galaxy on GitHub. For the rest of the galaxies, I will be using the TNG50_GalaxyFiles.ipynb notebook to create the snap[snap_num]gal[gal_num].dat file. With that notebook, I ran into some issues and wanted to check whether what I'm doing makes sense:

1) In the TNG50_GalaxyFiles.ipynb notebook, is the "a" in SFT just the scale factor? SFT is set where the different properties of the star particles are defined, in this block of code:

coordinate = data[0][:,:]*scale_factor/h # stellar particle position, converted from comoving kpc/h to physical kpc
mass = data[1][:]*10**10/h # [M_sun]
metallicity = data[2][:] # Z
SFT = data[3][:] # a at the time of the stellar particle formation
vel = data[4][:,:]/np.sqrt(scale_factor) # stellar particle velocity
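
For context, this is how I am interpreting SFT downstream of that block; just a minimal sketch reusing the arrays above, where the choice of the Planck 2015 cosmology and the wind-particle cut are my own assumptions:

from astropy.cosmology import Planck15  # assuming a Planck-2015-like cosmology here

star = SFT > 0                # SFT <= 0 flags wind-phase gas cells rather than real stars
a_form = SFT[star]            # scale factor a at the formation of each real star

z_form = 1.0 / a_form - 1.0              # formation redshift, from a = 1/(1+z)
t_form = Planck15.lookback_time(z_form)  # lookback time to formation (astropy Quantity, Gyr)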

2) In the TNG50_GalaxyFiles.ipynb notebook, when defining the galaxy, one also needs to specify its redshift. Is that the TNG_snap_redshift or the obs_redshift?

3) Also, is arr_down_miii+arr_down the variable that is stored in the data-cube FITS files at https://www.tng-project.org/api/TNG50-1/files/imanga/?

4) There seems to be a small error in the line eta = cosmo.lookback_time(RedStar). Since RedStar is sometimes negative, this raised an error a few times. To fix it, I changed the line to eta = cosmo.lookback_time(RedStar[RedStar >= 0]). Is that OK? Could that cause problems down the line?

5) In the same notebook, there seems to be a missing scale_factor multiplication that makes the code return blank data. I fixed it by changing the line that defines the center of mass to SHP = API.getSubhaloField('SubhaloPos', simulation='TNG50-1', snapshot=snap)[gal]*scale_factor/h, and that seems to fix the problem!
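
For reference, this is how I understand that conversion (group-catalogue positions are stored in comoving kpc/h, so multiplying by a/h gives physical kpc); a minimal sketch reusing the names defined above, with the recentring step added purely as my own illustration:

# group-catalogue positions are in comoving kpc/h; a/h converts them to physical kpc,
# the same frame the particle coordinates above were converted into
SHP = API.getSubhaloField('SubhaloPos', simulation='TNG50-1', snapshot=snap)[gal] * scale_factor / h

# with both in physical kpc, particle positions can be recentred on the galaxy
coordinate_rel = coordinate - SHP   # physical kpc, centred on SubhaloPos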

Thanks,
Akshat Tripathi

akshattripathi commented 5 months ago

I guess I also had to add the lines:

SFT = SFT[np.where(RedStar>=0)]
RadiusStar = RadiusStar[np.where(RedStar>=0)]

below the line Eta = np.asarray(eta) - eta_galaxy.
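
To keep everything consistent, I am now building the mask once and applying it to every per-particle array in one place; a minimal sketch using the array names from the notebook (applying the cut before any derived quantities are computed is my own choice):

keep = RedStar >= 0            # drop entries with negative redshift (wind-phase cells rather than real stars)

RedStar     = RedStar[keep]
SFT         = SFT[keep]
RadiusStar  = RadiusStar[keep]
coordinate  = coordinate[keep]
mass        = mass[keep]
metallicity = metallicity[keep]
vel         = vel[keep]

eta = cosmo.lookback_time(RedStar)   # now safe: no negative redshifts reach lookback_time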

akshattripathi commented 5 months ago

I also had to add the scale factor multiplication to the line: HMSR = API.getSubhaloField('SubhaloHalfmassRadType',snapshot=snap,simulation='TNG50-1')[gal, 4]*scale_factor/h

This makes the results look more reasonable, but still not as good as the original file. I'm testing it with the following code:

import matplotlib.pyplot as plt

# particle positions from the .dat file provided in the repository
x_orig, y_orig, z_orig = [], [], []
with open('Data/snap96gal3/snap96gal3.dat') as orig:
    for line in orig:
        cols = line.split()
        x_orig.append(float(cols[0]))
        y_orig.append(float(cols[1]))
        z_orig.append(float(cols[2]))

# particle positions from the file regenerated with TNG50_GalaxyFiles.ipynb
x, y, z = [], [], []
with open('snap96gal3/snap96gal3.dat') as new:
    for line in new:
        cols = line.split()
        x.append(float(cols[0]))
        y.append(float(cols[1]))
        z.append(float(cols[2]))

plt.scatter(x_orig, y_orig, c='b', marker='.');  # original positions (blue)
plt.scatter(x, y, c='r', marker='.');            # regenerated positions (red)
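
To check whether the remaining difference is an overall scaling (e.g. a missing scale_factor or 1/h) rather than a centring offset, I am also comparing simple summary statistics of the two point clouds; a minimal sketch, assuming the .dat files are whitespace-separated with x, y, z in the first three columns, as in the snippet above:

import numpy as np

orig_xyz = np.loadtxt('Data/snap96gal3/snap96gal3.dat', usecols=(0, 1, 2))
new_xyz = np.loadtxt('snap96gal3/snap96gal3.dat', usecols=(0, 1, 2))

for label, xyz in (('original', orig_xyz), ('regenerated', new_xyz)):
    r = np.linalg.norm(xyz - np.median(xyz, axis=0), axis=1)   # distance from the median centre
    print(f'{label}: median |r| = {np.median(r):.3f}, max |r| = {r.max():.3f}')

# a roughly constant ratio between the two median radii would point to a missing
# scale_factor or 1/h; a constant shift would point to a centring issue instead
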
lonanni commented 5 months ago

Yes, the file is provided for only one galaxy, to run the example code. Files for other galaxies can be generated with TNG50_GalaxyFiles.ipynb. Please bear in mind, as stated, that these are example codes meant to be adapted to your scientific case. These examples help you get started on a project involving either Illustris or EAGLE.

  1. Yes, it is the scale factor a = 1/(1+z), as noted in the comments.
  2. As per the definition given on the VAC page on the Illustris website, TNG_snap_redshift is the redshift the simulated galaxy is at, while obs_redshift is the redshift at which we place the galaxy during a mock MaNGA observation (refer to Nanni et al. 2023a for an explanation of why this is done and how).
  3. Yes, arr_down_miii+arr_down is what is stored in the datacubes provided (see the sketch at the end of this comment for a quick way to inspect them).
  4. Refer to the Illustris Specifications page to understand the SFT values (https://www.illustris-project.org/data/docs/specifications/). Quoting the Illustris page: "Note: The only differentiation between a real star (>0) and a wind phase gas cell (<=0)". If you want to keep only stellar particles, that cut is fine.
  5. API.getSubhaloField('SubhaloPos', simulation='TNG50-1', snapshot=snap)[gal] returns data, which you can then put in the frame of reference you want with a conversion. Multiplying by some factor cannot resolve a blank-data problem; maybe the API wasn't working at that time. Always look at the Specifications page to know the units and do the conversions appropriate for your scientific application.
  6. Check which scale factors you are including and whether, at that point, you are also applying them to the individual stellar-particle values: you might be applying them to the general galaxy properties but not to the individual stellar particles.
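
Regarding point 3: if it helps, the quickest way to see how those arrays are arranged in the released files is to open one datacube and list its HDUs. A minimal sketch with astropy, where the file name is a placeholder and no particular HDU layout is assumed:

from astropy.io import fits

# one of the iMaNGA datacubes downloaded from
# https://www.tng-project.org/api/TNG50-1/files/imanga/
# (placeholder file name)
with fits.open('imanga_datacube_example.fits') as hdul:
    hdul.info()                    # list HDU names, dimensions and data types
    print(repr(hdul[0].header))    # primary header with the cube metadata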