MobilityData / mobility-feed-api


feat: validate downloaded files as ZIP before processing dataset #501

Closed cka-y closed 3 months ago

cka-y commented 3 months ago

Summary:

This update enhances batch processing by validating that the producer URL returns a valid ZIP file before saving the dataset to GCP and updating the database. As part of this improvement, I also removed from GCP and the database all datasets and related entities that were not valid ZIP files and were not linked to other records. Below are the details of all invalid datasets per environment:

Expected Behavior:

The system should check if the producer URL returns a zipped file. If the URL does not return a zipped file, the dataset should not be stored in GCP or persisted in the database.
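A minimal sketch of that check, assuming a plain requests download and the standard-library zipfile module; the function name and calling convention here are illustrative, not the actual batch-process-dataset implementation:

import io
import zipfile
from typing import Optional

import requests


def download_if_zip(producer_url: str, timeout: int = 60) -> Optional[bytes]:
    """Return the downloaded bytes only if the payload is a valid ZIP archive.

    Returns None when the response is not a ZIP (for example an HTML error page),
    so the caller can skip both the GCP upload and the database insert.
    """
    response = requests.get(producer_url, timeout=timeout)
    response.raise_for_status()
    content = response.content
    # zipfile.is_zipfile looks for the ZIP end-of-central-directory record,
    # which an HTML payload will never contain.
    if not zipfile.is_zipfile(io.BytesIO(content)):
        return None
    return content

In the batch flow, a None return would short-circuit the step before any blob is written to GCP or any row is persisted.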

Testing Tips:

Please make sure these boxes are checked before submitting your pull request - thanks!
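One quick way to exercise the new behavior locally is to run the ZIP check against an HTML payload like the ones observed in PROD and against a well-formed archive built in memory; the tests below are a sketch, not the repository's actual test suite:

import io
import zipfile


def test_html_payload_is_rejected():
    # An HTML error page (like the 410 responses seen from some producers) must fail the ZIP check.
    html = b"<html><head><title>410 Gone</title></head><body>410 Gone</body></html>"
    assert not zipfile.is_zipfile(io.BytesIO(html))


def test_valid_zip_is_accepted():
    # A well-formed ZIP archive built in memory should pass the same check.
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as archive:
        archive.writestr("stops.txt", "stop_id,stop_name\n1,Main St\n")
    assert zipfile.is_zipfile(io.BytesIO(buffer.getvalue()))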

cka-y commented 3 months ago

Most skipped datasets are not valid ZIP files. However, there are a few cases where accessing the producer_url in a web browser downloads a valid ZIP file, while fetching the same URL programmatically returns an HTML file instead. For example, mdb-683 returns:

<html>
<head><title>410 Gone</title></head>
<body>
<center><h1>410 Gone</h1></center>
<hr><center>openresty</center>
</body>
</html>
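For these cases, inspecting the first bytes of the response makes the failure mode explicit; the browser-like User-Agent below is only an experiment in reproducing what the browser sees, not a confirmed workaround:

import requests

# Every ZIP payload starts with one of the PK signatures (regular, empty, or spanned archive).
ZIP_MAGIC_PREFIXES = (b"PK\x03\x04", b"PK\x05\x06", b"PK\x07\x08")


def probe(producer_url: str) -> None:
    # Some producers appear to respond differently to non-browser clients; a browser-like
    # User-Agent is one variable worth testing when the browser download succeeds.
    headers = {"User-Agent": "Mozilla/5.0 (compatible; dataset-probe)"}
    response = requests.get(producer_url, headers=headers, timeout=60)
    is_zip = response.content[:4].startswith(ZIP_MAGIC_PREFIXES)
    print(response.status_code, "ZIP" if is_zip else "not a ZIP (likely HTML)")

Running this against the producer URLs listed below would help separate genuine 410s from responses that depend on the client.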

Here is a list of stable IDs and producer URLs where this issue is known to occur (note that there may be others):

cka-y commented 3 months ago

Here is the list of feeds in PROD with no datasets. Note that this list might shrink if we run the batch-datasets function. However, we should not run it before updating batch-process-dataset to ignore invalid ZIP files; otherwise, I'll need to restart the process. (cc: @emmambd)

prod-feeds-w-no-dataset.csv

cka-y commented 3 months ago

To avoid manually deleting files and database entities again, we should consider the following options:

  • Option A: Pause the batch-datasets scheduler in PROD to prevent storing new datasets, some of which might be invalid ZIP files.
  • Option B: Update the batch-process-datasets function in PROD by running the datasets-batch-deployer-prod GitHub Action without releasing (since we are not ready for a full release).

I recommend Option A, depending on how long we need before releasing. However, if the scheduler is paused for too long, we might miss a significant number of historical datasets. Thoughts? cc: @emmambd @davidgamez

davidgamez commented 3 months ago

I prefer Option A. We can connect tomorrow at stand-up to make a decision.

emmambd commented 3 months ago

Option A worries me significantly if we're waiting a full month. Let's discuss.

emmambd commented 3 months ago

@cka-y based on your CSV, it looks like there are about 200 feeds that don't provide any real value (they don't return a dataset). From a quick glance, I think about half of these are probably legitimately "deprecated" (meaning I can find a replacement feed for them), but there will also be some where we have no idea whether the feed has truly been replaced or is just down.

We might need an additional issue to return some kind of "non-fetchable" message in the API response and on the website for this use case.

emmambd commented 3 months ago

Created #506 and #505 to address what's out of scope for this PR.