jonny-rimek closed this issue 3 years ago.
A 537.6 MB file results in Max Memory Used: 2726 MB.
The same file can sometimes consume at least 300 MB more than usual, so I definitely need to lower the baseline to account for this variance.
I'm not going to do much more optimization at this point.
~450 MB uncompressed, 40 MB compressed.
At the moment a 26 MB single-dungeon M+ log takes 140 MB of memory to process. The max supported log size should be at least 500 MB.
Log the file size and memory usage; IIRC the memory information is accessible in the context that is passed to the handler.
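A minimal sketch of that logging, assuming an S3-triggered Go handler with the aws-lambda-go runtime. In Go the configured memory limit is exposed through the `AWS_LAMBDA_FUNCTION_MEMORY_SIZE` environment variable rather than the context object, and actual usage can be read with `runtime.ReadMemStats`; `handleRequest` is a placeholder name:

```go
package main

import (
	"context"
	"log"
	"os"
	"runtime"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(ctx context.Context, evt events.S3Event) error {
	// The object size is included in the S3 event notification,
	// so the incoming log-file size can be logged up front.
	for _, rec := range evt.Records {
		log.Printf("key=%s size=%d bytes", rec.S3.Object.Key, rec.S3.Object.Size)
	}

	// ... parse the combat log and upload to Timestream here ...

	// Log heap usage next to the configured memory limit after processing.
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	log.Printf("heap in use: %d MB, configured memory: %s MB",
		m.HeapInuse/1024/1024, os.Getenv("AWS_LAMBDA_FUNCTION_MEMORY_SIZE"))
	return nil
}

func main() {
	lambda.Start(handleRequest)
}
```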
I'd be willing to double the memory if the current size is too small, but I need to test the performance first.
The problem is that most of the time is spent uploading to Timestream, and more memory doesn't help there, so it would be a huge waste of money.
I uploaded 15k single-dungeon files with 10 GB Lambdas and it cost ~160€ including the free tier, which is very expensive. Timestream was ~60€.
Solution 1:
A valid approach would be to upload to Timestream multiple times in parallel using goroutines. This is my preferred solution (see the sketch below). Check back with the Learn Go with Tests book for info about goroutines.
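A minimal sketch of solution 1, assuming the AWS SDK for Go v1 `timestreamwrite` client: split the parsed records into batches of 100 (the `WriteRecords` limit) and push the batches concurrently with goroutines instead of a single sequential loop. The database and table names and the record slice are placeholders:

```go
package main

import (
	"log"
	"sync"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/timestreamwrite"
)

const maxBatchSize = 100 // Timestream accepts at most 100 records per WriteRecords call

func writeParallel(svc *timestreamwrite.TimestreamWrite, records []*timestreamwrite.Record) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)

	for start := 0; start < len(records); start += maxBatchSize {
		end := start + maxBatchSize
		if end > len(records) {
			end = len(records)
		}
		batch := records[start:end]

		wg.Add(1)
		go func(batch []*timestreamwrite.Record) {
			defer wg.Done()
			_, err := svc.WriteRecords(&timestreamwrite.WriteRecordsInput{
				DatabaseName: aws.String("wowmate"),    // placeholder database
				TableName:    aws.String("combatlogs"), // placeholder table
				Records:      batch,
			})
			if err != nil {
				// Remember the first error without aborting the other uploads.
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(batch)
	}

	wg.Wait()
	return firstErr
}

func main() {
	svc := timestreamwrite.New(session.Must(session.NewSession()))
	// records would come from the combat-log parser.
	var records []*timestreamwrite.Record
	if err := writeParallel(svc, records); err != nil {
		log.Fatal(err)
	}
}
```

In practice the number of in-flight goroutines would probably need a cap (e.g. a semaphore channel) so a very large log doesn't run into Timestream throttling.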
Solution 2:
Upload the parsed logs to S3, download them with a smaller Lambda, and upload to Timestream from there (sketch below). It's a lot of extra complexity, but the parsed file would be relatively small, so it shouldn't be a problem.
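A minimal sketch of the hand-off in solution 2, assuming the AWS SDK for Go v1 `s3manager` uploader: the large parsing Lambda serializes the parsed records and writes them to an intermediate S3 bucket, and a second, smaller Lambda triggered by that object would do the Timestream upload. The struct, bucket, and key names are placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// ParsedEvent is a stand-in for whatever the combat-log parser produces.
type ParsedEvent struct {
	Timestamp int64  `json:"timestamp"`
	Player    string `json:"player"`
	Damage    int64  `json:"damage"`
}

// uploadParsed writes the parsed records to an intermediate bucket,
// which would trigger the smaller Timestream-upload Lambda.
func uploadParsed(events []ParsedEvent) error {
	body, err := json.Marshal(events)
	if err != nil {
		return err
	}

	uploader := s3manager.NewUploader(session.Must(session.NewSession()))
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("wowmate-parsed-logs"),  // placeholder bucket
		Key:    aws.String("parsed/some-log.json"), // placeholder key
		Body:   bytes.NewReader(body),
	})
	return err
}

func main() {
	if err := uploadParsed(nil); err != nil {
		log.Fatal(err)
	}
}
```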
But I won't make either of those changes until it proves to be a problem for my users or my budget.