aria-jpl / standard_product

Apache License 2.0

scihub acq ingesters blocked by bad creds #17

Open niarenaw opened 3 years ago

niarenaw commented 3 years ago

The following jobs are blocked by what appears to be a credentials misconfiguration:

Error message:

Traceback (most recent call last):
  File "/home/ops/verdi/ops/scihub_acquisition_scraper/acquisition_ingest/scrape_apihub_opensearch.py", line 557, in <module>
    ESA_USER, ESA_SECRET = read_creds()
TypeError: cannot unpack non-iterable NoneType object
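The traceback above is the classic symptom of a function returning None where a tuple was expected. A minimal sketch of the failure mode, assuming read_creds() returns None when the credentials are absent from its settings source (the function body and key names here are stand-ins, not the actual implementation):

```python
# Stand-in for the real read_creds in scrape_apihub_opensearch.py, which
# presumably returns None when the expected entries are not configured.
def read_creds(settings):
    """Return (user, secret) when configured, else None (assumed behavior)."""
    user = settings.get("ESA_USER")
    secret = settings.get("ESA_SECRET")
    if user is None or secret is None:
        return None
    return user, secret

def get_esa_creds(settings):
    """Fail-fast wrapper: raise a descriptive error instead of letting the
    None propagate into a tuple unpack at the call site."""
    creds = read_creds(settings)
    if creds is None:
        raise RuntimeError("ESA credentials not found in settings")
    return creds
```

A guard like get_esa_creds would turn the opaque "cannot unpack non-iterable NoneType" into an error message that points directly at the missing configuration.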
niarenaw commented 3 years ago

Need to check the endpoints and configuration in the read_creds function.

niarenaw commented 2 years ago

The read_creds function is pulling from /export/home/hysdsops/verdi/etc/settings.yaml on factotum. There is no mention of anything ESA in that file.
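If the fix is to add the missing entries, the settings file would need something along these lines. The key names here are an assumption; the actual keys must match whatever read_creds looks up, which should be confirmed against the function:

```yaml
# Hypothetical additions to settings.yaml -- confirm the exact key names
# against the read_creds implementation before deploying.
ESA_USER: <scihub-username>
ESA_SECRET: <scihub-password>
```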

niarenaw commented 2 years ago

Updated settings.yaml on factotum and mozart. The daily scihub scrapers are now failing due to a missing entry in datasets.json; need to add the entry on both factotum and mozart.
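For reference, a HySDS datasets.json entry generally pairs an index path with a match_pattern regex that recognizes the dataset directory name. The sketch below follows the general shape of that schema; the ipath, pattern, and field values are placeholders, not the actual entry that was added:

```json
{
  "ipath": "hysds::data/acquisition-S1-IW_SLC",
  "match_pattern": "/(?P<id>acquisition-.+)$",
  "alt_match_pattern": null,
  "extractor": null,
  "level": "l0",
  "type": "acquisition"
}
```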

niarenaw commented 2 years ago

Tweaked the regex in datasets.json for the daily scrapers.
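Regex tweaks like this are easy to verify offline before touching factotum or mozart: run the match_pattern against a sample dataset directory name with Python's re module. The pattern and sample ID below are illustrative, not the deployed values:

```python
import re

# Hypothetical match_pattern and dataset directory name -- substitute the
# real values from datasets.json and an actual scraper output when testing.
MATCH_PATTERN = r"/(?P<id>acquisition-.+)$"
sample = "/data/work/jobs/acquisition-S1A_IW_SLC__20200101"

m = re.search(MATCH_PATTERN, sample)
if m:
    print("matched id:", m.group("id"))
else:
    print("no match -- regex needs tweaking")
```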

niarenaw commented 2 years ago

The PGE is now running (and doing all the work correctly), but it is failing with a 500 error when trying to publish its dataset to GRQ.
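A 500 from GRQ is a server-side failure, so the next step is usually to log the response body and retry transient errors with backoff while the root cause is investigated. A minimal sketch of that retry logic, with the actual HTTP POST to GRQ abstracted behind a callable so nothing here models the real endpoint or payload:

```python
import time

# Transient server-side statuses worth retrying; 4xx errors are not,
# since they indicate a problem with the request itself.
RETRYABLE = {500, 502, 503, 504}

def publish_with_retry(publish_fn, attempts=3, backoff=1.0, sleep=time.sleep):
    """Call publish_fn() until it returns a non-retryable status code.

    publish_fn is assumed to perform the publish and return an HTTP status
    code. Retries with exponential backoff on retryable statuses and
    returns the final status observed.
    """
    status = None
    for i in range(attempts):
        status = publish_fn()
        if status not in RETRYABLE:
            return status
        sleep(backoff * (2 ** i))  # exponential backoff between attempts
    return status
```

The injectable sleep and publish_fn make the loop testable without a live cluster; in the PGE, publish_fn would wrap the real GRQ publish call.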