tpmai22 opened 2 years ago
@manekenpix might know if we're running the latest autodeployment server in prod; we might not be.
I think production is running the latest version of the autodeployment server. The issue is that, at least once a week, we suffer a power outage in our "server room" and, when the machines come back up and the autodeployment server is relaunched, it has no build record to serve. I think that's why the dashboard displays the error message.
"Server room" :)
A perfect opportunity to use Supabase and the Storage API! We can push build logs and GitHub data about the build into a Supabase storage bucket, and key the name on the git sha. So maybe, for a given sha:
// Upload the build log
const { data: logData, error: logError } = await supabase.storage
  .from('builds')
  .upload(`public/${sha}/log`, log);

// Upload the GitHub build info
const { data: infoData, error: infoError } = await supabase.storage
  .from('builds')
  .upload(`public/${sha}/json`, buildData);
Now we can access these with public URLs.
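Roughly something like this (a minimal sketch, assuming supabase-js v1, where getPublicUrl returns publicURL directly; later versions change the return shape):

// Build a public URL for a stored log; `sha` is the git sha used as the key above
const { publicURL, error } = supabase.storage
  .from('builds')
  .getPublicUrl(`public/${sha}/log`);
if (error) throw error;
console.log(publicURL); // fetchable without auth, since the object is under public/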
I'd love to see someone other than @DukeManh or me tackle this one, and we can mentor/review. How about it, @joelazwar, @rclee91, @TueeNguyen, @sirinoks, @menghif (since you were on the Supabase call today)?
There is another part to this: using these logs in the Dashboard front-end. We might also want to create a table that keeps track of every build we did: git sha, result (did it build?), links to objects in buckets.
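Something like this, maybe (the table name and columns here are assumptions for illustration, not an existing schema):

// Hypothetical sketch: record each build in a `builds` table.
// `sha`, `success`, `logUrl`, and `infoUrl` are assumed to be in scope.
const { error } = await supabase.from('builds').insert([
  {
    sha,             // git sha of the commit that was built
    success,         // result: did it build?
    log_url: logUrl,   // public URL of the log object in the bucket
    info_url: infoUrl, // public URL of the GitHub build data object
  },
]);
if (error) throw error;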
O_O what even is sha
I'm happy to take this.
@sirinoks, it's the id of the commit, hashed using SHA (Secure Hash Algorithm). We usually just call it the sha.
I totally agree, Supabase needs to be maintained by more people. Also, the best way to understand Supabase is to do something with it.
@humphd, why use the Storage API rather than the database?
They are just structureless blobs of data; they don't really belong in the db, imho. This is perfect for object storage.
I guess we're going to upload a .txt file to storage and read it?
Log files are text/plain and the data from GitHub is application/json. I wouldn't add .txt, though.
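A minimal sketch of setting those types explicitly (the contentType option is part of supabase-js's upload file options; `log` and `sha` are assumed to be in scope as above):

// Set the MIME type explicitly when uploading the log
const { error } = await supabase.storage
  .from('builds')
  .upload(`public/${sha}/log`, log, { contentType: 'text/plain' });
if (error) throw error;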
If we create a JS Blob object to upload, its type would still be 'application/json'. We could store the JSON directly in Postgres for simplicity.
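For instance, a sketch of wrapping the GitHub data in a typed Blob (assumes a browser or a Node version with a global Blob; `info` is a hypothetical object holding the GitHub data about the build):

// Wrap the GitHub build data in a Blob with an explicit MIME type
const buildData = new Blob([JSON.stringify(info)], { type: 'application/json' });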
This is always true. You can technically put anything in Postgres. But the value of object storage is that arbitrary data you aren't indexing is just a blob you can request later by URL.
Managing more tables, doing backups, etc. seems like overkill for logs.
Gotta look into what type of file we're going to upload to storage.
What happened: Build log not showing on production
What should have happened:
How to reproduce it (as precise as possible):
Anything else we need to know?: https://telescope.cdot.systems/deploy/status returns null on all builds
Environment: