kxyne opened this issue 7 years ago
@kxyne This is not an RDS error but a Redshift one, and yes, we may lack disk space there, as we currently run the default (smallest) node type.
Capacity Details
Current Node Type - dc1.large
CPU - 7 EC2 Compute Units (2 virtual cores) per node
Memory - 15 GiB per node
Storage - 160 GB SSD per node
I/O Performance - Moderate
Platform - 64-bit
This is the screenshot after today's run:
Definitely the cause; however, it seems to sit at 45% full all the time.
Is there another DB on it? I'll spin up a new Redshift cluster for this run, but we need to clean up any old datasets on it too.
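For reference, one way to confirm how full the cluster actually is (rather than eyeballing the console graph) is to query Redshift's `stv_partitions` system table, which reports per-node usage in 1 MB blocks. This is a generic check, not anything specific to this project:

```sql
-- Per-node disk usage on the cluster (stv_partitions reports 1 MB blocks).
SELECT
    owner                                              AS node,
    SUM(used)                                          AS used_mb,
    SUM(capacity)                                      AS capacity_mb,
    ROUND(SUM(used)::decimal / SUM(capacity) * 100, 1) AS pct_used
FROM stv_partitions
GROUP BY owner
ORDER BY owner;
```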
@kxyne My guess here is that the first query executed fine and loaded the scanned data into the `logentry` table, then the second query ran out of space while executing and exited with an error. That left the first table full. I checked the dev database just now, and the `logentry` table is indeed full.
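If it helps to see which tables are actually holding the space, the standard `svv_table_info` system view lists per-table size in 1 MB blocks; nothing project-specific is assumed here:

```sql
-- Largest tables on the cluster; size is in 1 MB blocks and
-- pct_used is the share of total cluster space each table occupies.
SELECT "table", size AS size_mb, pct_used, tbl_rows
FROM svv_table_info
ORDER BY size DESC
LIMIT 20;
```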
The aggregator script drops all tables before it starts the load, and again after the aggregated data has been unloaded to S3 successfully.
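For anyone reading later, here is a minimal sketch of the drop → load → unload → drop sequence described above. The column list, S3 paths, and IAM role ARN are placeholders, not the project's actual values:

```sql
-- Hypothetical sketch only: table columns, paths, and role ARN are placeholders.
DROP TABLE IF EXISTS logentry;

CREATE TABLE logentry (
    ts         TIMESTAMP,
    request    VARCHAR(2048),
    bytes_sent BIGINT
);

-- Load the scanned data from S3 (this is the step that fills the disk).
COPY logentry
FROM 's3://example-bucket/raw/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
GZIP
FORMAT AS JSON 'auto';

-- Write the aggregated result back to S3 ...
UNLOAD ('SELECT request, SUM(bytes_sent) FROM logentry GROUP BY request')
TO 's3://example-bucket/aggregated/part_'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
ALLOWOVERWRITE;

-- ... and drop the staging table so the space is released.
DROP TABLE IF EXISTS logentry;
```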
I dropped the `logentry` table manually and the used disk space went back towards 0.
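Concretely, that manual cleanup amounts to the statement below, followed by re-running the `stv_partitions` check from earlier to confirm the space was released:

```sql
-- Remove the leftover staging table from the failed run;
-- the freed 1 MB blocks show up in stv_partitions almost immediately.
DROP TABLE IF EXISTS logentry;
```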
Ah, so it's full because of the previous run. That makes sense; sorry, a bit tired :)
It looks like we're short on space in the current RDS instance. Are there other DBs in it, @zelima, or do we need to re-instantiate it with more disk?