nextstrain / zika

Nextstrain build for Zika virus
https://nextstrain.org/zika

Automation cache not working as expected #53

Closed: joverlee521 closed this issue 3 months ago

joverlee521 commented 3 months ago

I was following up on the automated workflows and noticed that this morning's run did not work as expected. The ingest job did upload new files to S3. However, the cache key was unchanged, so the phylogenetic workflow did not run.

This made me think that the S3 hash was falling back to the default `no_hash` value instead of using the actual file hashes, which I verified in my own testing workflow. When I tried removing the default hash, I saw that the request returned:

```
<botocore.awsrequest.AWSRequest object at 0x7fe3b47d2e10>
```

Based on Stack Overflow, this is the result of not setting a region for the request, so setting `AWS_DEFAULT_REGION` should fix this issue in our cache.
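To make the failure mode concrete, here's a minimal sketch (hypothetical names, not the actual workflow code): if the hash lookup for an S3 object fails silently, the `no_hash` fallback makes the cache key constant, so the cache appears valid even after new files are uploaded.

```shell
# Hypothetical sketch of the failure mode described above.
get_s3_hash() {
    # Stand-in for a real lookup, e.g. querying the object's ETag via the
    # aws CLI. Without AWS_DEFAULT_REGION set, the real call can fail and
    # produce no usable hash.
    echo ""   # simulate the failed lookup
}

hash="$(get_s3_hash "s3://example-bucket/sequences.fasta.zst")"

# The fallback default: if no hash was retrieved, use the literal "no_hash".
cache_key="ingest-${hash:-no_hash}"

echo "$cache_key"   # always "ingest-no_hash" while the lookup fails
```

Because the key never changes, a cache hit is reported on every run and the downstream workflow is skipped.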

jameshadfield commented 3 months ago

> this is the result of not setting a region for the request. So setting the `AWS_DEFAULT_REGION` will fix this issue in our cache.

Over the years I have run into a few different GitHub Actions bugs because we weren't explicitly setting the region. I think it'd be good practice for us to habitually set it (just as we set the two secrets).
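For reference, this is roughly what that looks like in a workflow file. This is a hypothetical fragment, the secret names and region are examples and assume the usual AWS credential variables:

```yaml
# Hypothetical GitHub Actions fragment: set the region at the workflow level
# alongside the credential secrets so every step inherits it.
env:
  AWS_DEFAULT_REGION: us-east-1   # example region; use the bucket's region
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```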

joverlee521 commented 3 months ago

> Over the years I have run into a few different GitHub Actions bugs because we weren't explicitly setting the region. I think it'd be good practice for us to habitually set it (just as we set the two secrets).

Yeah, I forget this all the time because the Nextstrain CLI does it automatically. I need to remember to provide it when running `aws` commands outside of a managed runtime...

tsibley commented 2 months ago

Yeah, I forget this all the time because the Nextstrain CLI does it automatically.

It should probably stop doing that. The behaviour made some sense originally when Nextstrain CLI was only used by us and only in much more limited use cases, but it's probably time to let go of that default.