Closed mmcfarland closed 4 years ago
Copied datasets from staging to prod:
aws s3 sync s3://beekeepers-staging-data-us-east-1/5km/2019/ s3://beekeepers-production-data-us-east-1/5km/2019/
aws s3 sync s3://beekeepers-staging-data-us-east-1/3km/2019/ s3://beekeepers-production-data-us-east-1/3km/2019/
I then cut the application over to those datasets by syncing the VRT files for the 3km and 5km paths:
aws s3 sync s3://beekeepers-staging-data-us-east-1/3km/ s3://beekeepers-production-data-us-east-1/3km/ --exclude "*" --include "*.vrt"
aws s3 sync s3://beekeepers-staging-data-us-east-1/5km/ s3://beekeepers-production-data-us-east-1/5km/ --exclude "*" --include "*.vrt"
After updating the data, I received the same 3km exceptions for all new states that we saw on staging after the same steps were taken there (where they disappeared after a day). Because the old states still worked, even with the new rasters, I suspected that GDAL was caching metadata from the original VRTs. I restarted the app server process on production (`sudo service icp-app restart`) and it now fetches correctly (@TaiWilkin this explains our inconsistent problems).
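The failure mode described above can be illustrated with a small Python sketch. This is a stand-in, not GDAL's actual caching machinery: a per-process cache keeps serving the metadata it read from the original VRT even after the file in the bucket has been replaced, until the process is restarted (here, the cache is explicitly cleared). The function name `read_vrt_metadata` is hypothetical.

```python
import functools
import os
import tempfile

# Hypothetical per-process metadata cache. GDAL's real cache is internal,
# but the symptom is the same: stale entries survive until the process
# that holds them is restarted.
@functools.lru_cache(maxsize=None)
def read_vrt_metadata(vrt_path: str) -> str:
    with open(vrt_path) as f:
        return f.read()

fd, vrt = tempfile.mkstemp(suffix=".vrt")
with os.fdopen(fd, "w") as f:
    f.write("old rasters")

first = read_vrt_metadata(vrt)   # caches "old rasters"

with open(vrt, "w") as f:        # the s3 sync replaces the VRT...
    f.write("new rasters")

stale = read_vrt_metadata(vrt)   # ...but the cached copy is still returned

read_vrt_metadata.cache_clear()  # analogous to restarting the app server
fresh = read_vrt_metadata(vrt)   # now reads the new file

os.unlink(vrt)
```

After the cache is cleared, the next read picks up the new file, which matches the behavior seen after `sudo service icp-app restart`.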
@mmcfarland Wow, great catch and troubleshooting!
Following work completed in #586 and approval by the client, copy the new tif files to the production beekeepers-production-data-us-east-1 bucket. Perform a cut-over of the data by locally backing up the existing VRTs and uploading the new VRTs. Verify the application is functioning nationally. After signoff from the client, delete the old tif files.
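The backup-then-replace step of the cut-over can be sketched as follows. This is a minimal local-filesystem illustration, assuming directories stand in for the S3 bucket; in practice the backup would be an `aws s3 cp` download and the replacement an `aws s3 sync` upload. The helper name `cut_over_vrts` is hypothetical.

```python
import shutil
from pathlib import Path

def cut_over_vrts(bucket_dir: Path, new_vrt_dir: Path, backup_dir: Path) -> None:
    """Back up the current VRTs locally, then copy the new VRTs in.

    Local directories model the S3 bucket here; only *.vrt files are
    touched, mirroring the --exclude "*" --include "*.vrt" sync.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    for vrt in sorted(bucket_dir.rglob("*.vrt")):
        shutil.copy2(vrt, backup_dir / vrt.name)   # local backup first
    for vrt in sorted(new_vrt_dir.rglob("*.vrt")):
        shutil.copy2(vrt, bucket_dir / vrt.name)   # then the cut-over
```

Backing up before uploading keeps a rollback path if the national verification step fails.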