teuben closed this issue 3 years ago
I just tried it out and it seems to work. It would take a while with a larger volume of data, but it runs in ~15 sec with the sample data. Some of the json files are empty because no satellites were detected in that 30 min observation. Most of the files do have some content; for example, 2019-10-10-00:00.json
begins with
[
{
"sat_id": [
"33063"
],
"time_array": [
1570636800.0,
1570636801.0,
1570636802.0,
1570636803.0,
1570636804.0,
1570636805.0,
1570636806.0,
1570636807.0,
1570636808.0,
1570636809.0,
1570636810.0,
1570636811.0,
1570636812.0,
The log files also seem to contain the correct info from my run:
Satellite 25417 in 2019-10-10-03:30
Satellite 25417 in 2019-10-10-04:00
Satellite 25417 in 2019-10-10-05:30
Satellite 25417 in 2019-10-10-07:00
..
..
Did you run ephem_batch before running ephem_chrono? ephem_chrono requires data products created by ephem_batch.
Yes, I ran ephem_batch. In fact, I was a bit annoyed it stole all my cores; I'm looking at a load of 21 right now! I'm running it again, just in case... Is there no way to limit the number of threads via the command line? It took a long time to run, about 9 minutes on my first run, judging from the timestamps on the 71 plots I saw.
Looking at "top" I see some of the instances with a load > 1, which means it's over-using the cores. If it's using numpy, it may already go parallel internally to some degree, but then by grabbing all my 8 cores (in fact, 4 cores with 2 threads each, which usually collide anyway), I've got a CPU thrasher. As I'm writing this, my load is 23.
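If the >100% per-process usage really does come from numpy's BLAS backend spawning its own threads, one general workaround (standard numpy/BLAS behaviour, not anything EMBERS-specific) is to cap the BLAS thread count with environment variables before numpy is first imported, e.g.:

# Cap the threads that numpy's BLAS backend (OpenBLAS / MKL) may spawn.
# These variables must be set before numpy is imported for the first time.
import os

os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np  # imported only after the limits are in place

Exporting the same variables in the shell before launching a tool has the same effect.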
Okay, I'll try to add an option to manually limit the number of cores used. I'll ping you when it's done.
It seems tile_maps has the same behavior, but its processes are not going over 100% usage, so my load is only 12 now :-)
Good catch, I'll add a --max_cores option to both of those tools.
In my example run (taken from your examples verbatim) I also have an empty embers_out/sat_utils/ephem_chrono/2019-10-10-00:00.json
And to note: I just realized that the colon in the filename is what triggers an error message on vfat. I'll annotate it on the other issue, if you care.
Okay, I've updated all the parallelized cli-tools - align_batch, ephem_batch, sat_channels, rfe_calibration, tile_maps and compare_beams - to have a --max_cores option. It is set to None by default, which uses all available cores. With this option users can limit the resources available to these scripts. I've updated the documentation to explicitly point out that this option is available.
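For reference, here is a minimal sketch of how an option like this is typically wired; only the option name --max_cores and the None-means-all-cores default come from this thread, while the argument parsing, pool setup and function names below are illustrative, not EMBERS's actual code.

# Illustrative only: cap the worker count of a multiprocessing pool via --max_cores.
import argparse
from multiprocessing import Pool, cpu_count

def process_obs(obs_name):
    # stand-in for the per-observation work a batch tool would do
    return obs_name

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--max_cores", type=int, default=None,
                        help="limit the number of worker processes (default: all cores)")
    args = parser.parse_args()

    workers = args.max_cores if args.max_cores is not None else cpu_count()
    observations = ["2019-10-10-00:00", "2019-10-10-00:30"]  # placeholder inputs
    with Pool(processes=workers) as pool:
        print(pool.map(process_obs, observations))

Invoked with --max_cores 4 this would leave half of the eight logical cores free; without the flag it falls back to every available core.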
Please update to the latest version - EMBERS 0.8.1 - to try them out.
ephem_chrono told me I could get some coffee... but it was done in a few secs. Indeed all the json files in embers_out/sat_utils/ephem_chrono/ were "empty", just containing '[]'. The log file has 0 length.