astrocatalogs/tidaldisruptions

MIT License

Open Tidal Disruption Event Catalog


This is the primary repository for the Open Tidal Disruption Event Catalog (OTC), which contains a Python script (import.py) that generates the event JSON files, and additional scripts that process those files. This repository also contains the primary JSON catalog file generated by the make-catalog.py script, which is used to drive the web-based table available on tde.space. Because GitHub repositories are limited to 1 GB each, the individual event JSON files are stored in a separate data repository, which contains the entirety of the data collected by the OTC:

https://github.com/astrocatalogs/tde-1980-2025 – All discovered tidal disruptions.

Contributing Data

To contribute data to the repository, please read our contribution guide.

Format of Data Files

The data files are in JSON format; a detailed description of the particular structure we have chosen is available here.
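
As a quick illustration of working with that structure, the snippet below loads a single event file with Python; the filename is only an example, and the photometry and sources fields are assumptions based on the schema description linked above:

import json

# Load a single event file; the filename here is only an example.
with open('ASASSN-14li.json') as f:
    data = json.load(f)

# Each file is assumed to hold one top-level object keyed by the event name.
event = next(iter(data.values()))
print(event['name'])
print(len(event.get('photometry', [])), 'photometry points')
print(len(event.get('sources', [])), 'sources')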

Installing and Running the OTC Import Script

If you are interested in reproducing the catalog on your own machine, installation can be done by following the install: subsection of the .travis.yml file, which installs all required modules via pip and clones all required input repositories. The OTC software runs under Python 3.5 and is not tested for compatibility with Python 2.
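
Since Python 2 compatibility is untested, a small guard like the following (a sketch, not part of the repository) can abort early if the wrong interpreter is used:

import sys

# The OTC scripts target Python 3.5; abort early under older interpreters.
if sys.version_info < (3, 5):
    sys.exit('Python 3.5 or newer is required to run the OTC scripts.')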

After installing, navigate to the scripts folder and execute the import script,

cd scripts
./import.py

and the import process will begin. The first time you run the import, it may take over a day, as the caches of many sources need to be built from scratch (particularly the host images). A typical import run takes a few hours, and in update mode (./import.py -u) the typical runtime is less than an hour. Runtime can be reduced significantly by commenting out tasks in the task array near the top of the import.py file; the slowest steps tend to be the spectra imports, so if you are just interested in testing, you may want to comment those tasks out first.
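
The sketch below only illustrates the idea of disabling a task by commenting out its entry; the task names and options here are hypothetical, and the real list in import.py may look different:

from collections import OrderedDict

# Hypothetical excerpt of the task list near the top of import.py;
# the actual task names and options in the repository may differ.
tasks = OrderedDict([
    ('internal',   {'update': True}),
    ('photometry', {'update': True}),
    # ('spectra',  {'update': True}),  # disabled: spectra imports are the slowest step
])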

Using the Collected OTC Data

There are several scripts in the scripts folder that use the produced data files to generate various data products, print out metrics, and so on. The first command you should probably run is repo-status.sh, which highlights changes in the output JSON files relative to the last pushed versions of those files. If the import ran successfully, these changes should be minimal and consist only of the most recent tidal disruption events. Changes to the output repositories can be reverted by running the reset-repos.sh script. Don't be afraid to play around with the data!
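
For example, a few lines of Python are enough to tally how many events the data repository holds; the relative path below is an assumption about where tde-1980-2025 was cloned:

import glob
import json

# Count every event file in the cloned data repository; adjust the
# path to match where tde-1980-2025 lives on your machine.
events = []
for path in glob.glob('../tde-1980-2025/*.json'):
    with open(path) as f:
        events.append(next(iter(json.load(f).values())))
print(len(events), 'events loaded')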