Closed: vrongmeal closed this 1 year ago
Not really keen on using git lfs after reading this: https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-storage-and-bandwidth-usage#tracking-storage-and-bandwidth-use
Does this mean every CI run will download this file via git-lfs?
> Not really keen on using git lfs after reading this: https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-storage-and-bandwidth-usage#tracking-storage-and-bandwidth-use
It's unlikely that we'll be making a lot of changes to the test files, so I don't see this being a concern for us (unless this also counts towards CI usage).
Our current limit is pretty reasonable, and upgrading limits is pretty cheap.
Pretty sure it's on every CI run. The CI fails because it hasn't downloaded the file yet...
Also, found this: https://github.com/orgs/community/discussions/26775#discussioncomment-3253352
I'd say let's add a `prep-testdata` script that either downloads the data from GCS or extracts it from an archive.
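A minimal sketch of what that script could look like, assuming Python with the `google-cloud-storage` client library. The bucket name comes from this thread, but the object name, archive path, and output directory are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""prep-testdata: fetch test data from GCS, or fall back to a local archive.

Sketch only: the object name, archive path, and output directory below
are placeholders, not confirmed values.
"""
import pathlib
import tarfile

from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "glaredb-testdata"             # bucket from this thread
OBJECT = "testdata.tar.gz"              # hypothetical object name
ARCHIVE = pathlib.Path("testdata.tar.gz")
OUT_DIR = pathlib.Path("testdata")      # hypothetical output directory


def download_from_gcs() -> None:
    # Uses Application Default Credentials; in CI this should resolve to
    # the service account already configured for GitHub Actions.
    client = storage.Client()
    client.bucket(BUCKET).blob(OBJECT).download_to_filename(str(ARCHIVE))


def extract_archive() -> None:
    with tarfile.open(ARCHIVE) as tar:
        tar.extractall(OUT_DIR)


if __name__ == "__main__":
    # Skip the download if the archive is already present locally.
    if not ARCHIVE.exists():
        download_from_gcs()
    extract_archive()
```

CI could then run this script once before the test step instead of pulling the file through git-lfs on every run.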
Well that's interesting. Agree with using a bucket. Went ahead and created a glaredb-testdata bucket here: https://github.com/GlareDB/cloud/pull/616. Also added you to the glaredb-artifacts repo on Google Cloud, so you should be able to push stuff to that bucket. GitHub Actions is already set up with a service account that has access to that project, so it should just be able to pull objects without issue.
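Pushing test data to the bucket could then look something like the sketch below, using the same client library. The project ID and file names here are assumptions, not confirmed values, and it presumes local Application Default Credentials are set up (e.g. via `gcloud auth application-default login`):

```python
from google.cloud import storage

# Assumed project ID; replace with the actual Google Cloud project.
client = storage.Client(project="glaredb-artifacts")
bucket = client.bucket("glaredb-testdata")
# Hypothetical archive name; replace with the real test data tarball.
bucket.blob("testdata.tar.gz").upload_from_filename("testdata.tar.gz")
```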
Updated with GCS data!
Signed-off-by: Vaibhav vrongmeal@gmail.com