atlassian / nucleus

A configurable and versatile update server for all your Electron apps

fix(file-store): store lock file non-publicly where possible #64

Closed · leofidus closed this 5 years ago

leofidus commented 5 years ago

This mainly benefits S3-compatible storage providers that make generous assumptions about the cacheability of public files (for example Google Cloud Storage). The lock file doesn't need to be publicly readable, and marking it as private sidesteps caching issues.

The newly introduced isPublic property could also be useful for future storage-provider integrations, and could be used to avoid unnecessary CDN updates. For that purpose it might be useful to mark prerelease files the same way, though that isn't part of this patch.
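For illustration, a minimal sketch of how such an isPublic flag could map to an S3-style ACL at upload time. The aws-sdk v2 calls are real, but the putFile shape and bucket name are illustrative, not nucleus's actual interface:

```typescript
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Hypothetical file-store helper: public artifacts keep 'public-read',
// while internal files such as the lock file are written as 'private'.
async function putFile(key: string, body: Buffer, isPublic: boolean): Promise<void> {
  await s3
    .putObject({
      Bucket: 'my-nucleus-bucket', // illustrative bucket name
      Key: key,
      Body: body,
      ACL: isPublic ? 'public-read' : 'private',
    })
    .promise();
}

// The lock file never needs to be world-readable:
// await putFile('.lock', Buffer.from(lockId), false);
```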

Let me know if you think some other ACL on the spectrum between public-read and private would be more reasonable.

Even though I would be mildly surprised if anyone actually depends on the lock file being publicly visible, this is technically a breaking change.

MarshallOfSound commented 5 years ago

@leofidus For clarification here, this needs to be added because GCP has a default cache on the lock file that causes an incorrect locked/unlocked status to be reported?

Having that lock file public makes debugging infinitely easier when your infra is locked down and you don't have direct access to the S3 bucket. Would be good to find a middle ground here: can we override GCP's caching behavior somehow, or would that require a whole new storage provider?

leofidus commented 5 years ago

GCP by default caches public files for one hour. This isn't an issue when using the GCP API, but for some reason the S3-interop API sits behind the caching layer. So if you publish a release, the lock file gets created, cached, and deleted, but getFile still reports it as existing for an entire hour.
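To make that failure mode concrete, here is a hedged sketch of an existence check going through the cached interop endpoint. isLocked is a hypothetical helper (nucleus's actual getFile may work differently), and the timeline in the comment follows the sequence described above:

```typescript
import * as AWS from 'aws-sdk';

// Hypothetical existence check, as a file store's getFile might perform it.
// Against GCS's S3-interop endpoint the response is served from the cache:
//   t=0      lock file created, publish starts
//   t+30s    publish finishes, lock file deleted
//   t+5min   isLocked() still returns true from the cached response
//   t+1h     cache expires, isLocked() finally returns false
async function isLocked(s3: AWS.S3, bucket: string): Promise<boolean> {
  try {
    await s3.headObject({ Bucket: bucket, Key: '.lock' }).promise();
    return true;
  } catch (err) {
    return false; // 404: no lock file visible
  }
}
```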

Setting the file to private is the easy fix. Another approach is to set an appropriate Cache-Control header, which should also fix the problem. In fact I already do this locally, but in a very crude way: everything gets a max-age of 60s, when optimally the lock file should have no-cache, metadata like RELEASES should have a low cache time, and versioned release artifacts shouldn't ever expire. Setting proper cache-control headers for everything would probably be useful in general; it would also benefit the AWS use case and make CDNs more plug-and-play instead of requiring custom invalidation code.
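A sketch of what per-file cache-control headers could look like at upload time, following the three tiers described above. The key patterns and helper names are assumptions, not nucleus's actual file layout:

```typescript
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Pick a Cache-Control value by file role (key patterns are assumptions):
function cacheControlFor(key: string): string {
  if (key.endsWith('.lock')) {
    return 'no-cache, no-store'; // lock state must never be served stale
  }
  if (key.endsWith('RELEASES') || key.endsWith('.json') || key.endsWith('.yml')) {
    return 'public, max-age=60'; // update metadata: short TTL
  }
  return 'public, max-age=31536000, immutable'; // versioned artifacts never change
}

async function putFileWithCaching(key: string, body: Buffer): Promise<void> {
  await s3
    .putObject({
      Bucket: 'my-nucleus-bucket', // illustrative bucket name
      Key: key,
      Body: body,
      CacheControl: cacheControlFor(key),
    })
    .promise();
}
```

With headers like these, a CDN in front of the bucket behaves correctly without custom invalidation code: lock checks always hit the origin, metadata goes stale within a minute, and immutable artifacts are cached indefinitely.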

GCP over the S3 API is giving me enough problems that I will likely write a proper backend for it once I get around to it, which would solve the locking problem anyways.

MarshallOfSound commented 5 years ago

I think this is superseded by #65; if not, ping me and I'll re-open.