aws / aws-connected-device-framework


Assetlibrary History Lambda Cost Increase #191

Open · aaronatbissell opened this issue 8 months ago

AWS Connected Device Framework affected module(s): assetlibrary-history

Description:

It appears that the recent update to assetlibrary-history, which increased the Lambda function's memory from 128MB to 512MB, has increased our Lambda cost by about $80/day.

We are using about 12,400,000 Lambda-seconds per day. At a cost of $0.0000166667 per GB-second, the daily cost scales directly with the configured memory size (a rough calculation is sketched below):
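For reference, here is a minimal back-of-the-envelope sketch of that pricing, assuming the total execution time stays at roughly the 12.4M Lambda-seconds/day above regardless of the memory setting (which matches what we're seeing, since almost none of the time is CPU-bound work):

```typescript
// Rough daily-cost estimate for different memory settings, assuming the
// total execution time (~12.4M lambda-seconds/day) stays roughly constant.
const PRICE_PER_GB_SECOND = 0.0000166667; // USD
const LAMBDA_SECONDS_PER_DAY = 12_400_000;

function dailyCostUsd(memoryMb: number): number {
  const gbSeconds = LAMBDA_SECONDS_PER_DAY * (memoryMb / 1024);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

for (const memoryMb of [128, 256, 512, 1024]) {
  console.log(`${memoryMb}MB -> ~$${dailyCostUsd(memoryMb).toFixed(2)}/day`);
}
// 128MB  -> ~$25.83/day
// 256MB  -> ~$51.67/day
// 512MB  -> ~$103.33/day  (roughly the $80/day increase over 128MB)
// 1024MB -> ~$206.67/day
```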

It appears the increase in lambda memory size hasn't decreased the runtime of the lambda enough to bring the cost back to normal levels. I think this is because the history lambda processes single records, and 90+% of the runtime is spent just loading Node, dependencies, etc.; very little time is spent actually processing the request.

Current behavior:

Lambda costs increased

Expected behavior:

Lambda costs shouldn't increase this dramatically for a memory configuration change alone

Steps to reproduce:

Additional Information: The lambda is using ~240MB of memory per invocation, so bringing the memory back down to 128MB is not an option, and 256MB seems too close to the limit for comfort. I believe this is very closely tied to #87, which describes a similar problem with device monitoring, and #88 would greatly reduce the effect of this problem.

aaronatbissell commented 8 months ago

Wanted to keep this thread up to date. It looks like the provisioned capacity is another big reason why our costs are so high. The current read/write capacity is too low (currently 5). Increasing this to 10 helped significantly, but this has to be done manually because this value isn't exposed through the config. #192 should take care of that
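For anyone who needs the same workaround before #192 lands, here's a minimal sketch of the manual bump, assuming the history events land in a provisioned-capacity DynamoDB table and using aws-sdk v2; the table name below is a placeholder, not the actual resource name:

```typescript
// One-off manual bump of provisioned throughput (the 5 -> 10 change above).
// Assumes aws-sdk v2 and a provisioned-capacity DynamoDB table; the table
// name is a placeholder -- use whatever your deployment actually created.
import AWS from 'aws-sdk';

const dynamodb = new AWS.DynamoDB({ region: process.env.AWS_REGION });

async function bumpCapacity(tableName: string, rcu: number, wcu: number): Promise<void> {
  await dynamodb
    .updateTable({
      TableName: tableName,
      ProvisionedThroughput: {
        ReadCapacityUnits: rcu,
        WriteCapacityUnits: wcu,
      },
    })
    .promise();
}

bumpCapacity('cdf-assetlibrary-history-events', 10, 10).catch(console.error);
```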

anish-kunduru commented 8 months ago

@aaronatbissell Out of curiosity: have you tried testing with 768 or even 1024?

If the problem is that cold starts are taking so long that subsequent requests can cause additional lambdas to spin up, that might help reduce costs. If this isn't the case, and you're blocked by I/O, it'll actually cost more.

aaronatbissell commented 8 months ago

About 80% of this problem was due to the provisioned capacity issue I mentioned above. It appears that when we run into provisioned capacity limits, the reads/writes take a long time to fail, which increases the duration of the lambda. That increase in duration causes a major lambda cost increase when the function runs millions of times per day.
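As a stopgap while capacity is being tuned, the slow failures themselves can be capped: by default the SDK retries throttled calls with exponential backoff, and all of that waiting is billed Lambda duration. A minimal sketch, assuming the history lambda uses the aws-sdk v2 DocumentClient (the option names are the v2 ones):

```typescript
// Tighter retry/timeout settings so throttled DynamoDB calls fail fast
// instead of stretching every invocation (aws-sdk v2 option names).
import AWS from 'aws-sdk';

const documentClient = new AWS.DynamoDB.DocumentClient({
  maxRetries: 2,            // fewer backoff cycles before surfacing the error
  httpOptions: {
    connectTimeout: 1000,   // ms to establish the connection
    timeout: 2000,          // ms to wait for a response
  },
});
```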

The other 20% I think is due to cold starts, which I'm hoping to take care of with #88