This is a great point, @hotgazpacho. When I wrote that post I didn't think about the cold start penalty that starting up an ENI could cause. I think your proposed solution is a really good idea. The RedisFeatureStore could be (easily?) adapted to use DynamoDB for persistence instead.
The RedisFeatureStore does look fairly straightforward. The devil, of course, is in the details. I have a pretty hectic week ahead of me, but I’ll see if I can find some time to take a run at this later this week or early next.
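For context, a feature store in the SDKs is essentially a small CRUD-style persistence layer, so "adapting the RedisFeatureStore" mostly means re-implementing the same handful of operations against a different backend. The sketch below is hypothetical and only illustrates the kind of operations involved; the names, types, and signatures are not the SDK's actual interface.

```go
// Hypothetical sketch only: the operations a persistent feature store has to
// cover, loosely modeled on what a Redis-backed store does. The real Go SDK
// interface uses different types and method signatures.
package ldstore

type FeatureStore interface {
	Init(allData map[string]map[string]interface{}) error // overwrite the store with a full flag/segment data set
	Get(kind, key string) (interface{}, error)            // fetch a single flag or segment by key
	All(kind string) (map[string]interface{}, error)      // fetch all items of one kind
	Upsert(kind, key string, item interface{}) error      // insert or update, respecting version numbers
	Delete(kind, key string, version int) error           // mark an item as deleted
	Initialized() bool                                     // has Init ever been called?
}
```

Swapping Redis for DynamoDB then comes down to re-implementing these operations against a DynamoDB table instead of Redis hashes.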
That'd be amazing if you can find the time! Let me know if you hit any snags.
I actually ran into this myself yesterday. What a coincidence. 😃
I think the penalty with Redis is even worse because all Lambda functions using feature flags (cached in Redis) need VPC access, not just the one fetching the data.
To make a long story short, I've already started working on a DynamoDB store for the Go client.
That’s great @mlafeldt! Looking forward to seeing what you come up with!
Interesting, that saved me some time :) Looking forward to the DynamoDB implementation. FYI @Neko-Design
FYI: I've open sourced my DynamoDB feature store implementation:
https://github.com/mlafeldt/launchdarkly-dynamo-store
It's for the Go SDK, but it should be straightforward to port the code to other languages.
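To illustrate why a port would be straightforward: the core read path is just a DynamoDB `GetItem` call plus JSON decoding. The table name, key schema, and attribute names below are assumptions for illustration and not necessarily what launchdarkly-dynamo-store actually uses.

```go
// Sketch of the core read path of a DynamoDB-backed feature store:
// look up one item by its namespace ("features", "segments", ...) and key,
// then unmarshal the stored JSON payload.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func getItem(db *dynamodb.DynamoDB, table, namespace, key string) (map[string]interface{}, error) {
	out, err := db.GetItem(&dynamodb.GetItemInput{
		TableName:      aws.String(table),
		ConsistentRead: aws.Bool(true),
		Key: map[string]*dynamodb.AttributeValue{
			"namespace": {S: aws.String(namespace)}, // partition key (assumed schema)
			"key":       {S: aws.String(key)},       // sort key (assumed schema)
		},
	})
	if err != nil || out.Item == nil {
		return nil, err // error, or item not found
	}
	attr, ok := out.Item["item"] // assume the flag/segment is stored as a JSON blob in an "item" attribute
	if !ok || attr.S == nil {
		return nil, fmt.Errorf("item %s/%s has no JSON payload", namespace, key)
	}
	var item map[string]interface{}
	err = json.Unmarshal([]byte(*attr.S), &item)
	return item, err
}

func main() {
	db := dynamodb.New(session.Must(session.NewSession()))
	flag, err := getItem(db, "launchdarkly-flags", "features", "my-flag-key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(flag)
}
```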
@mlafeldt, great! I'll take a look at it ASAP. This is the first custom feature store that external devs have built, at least that we know of. I hope the process was relatively painless.
@eli-darkly Cool! The process was pretty painless, in part thanks to the feature store test suite. Also, it definitely showed that you put a lot of thought into designing the data structures and interfaces. 👍
I think it might make sense to add the DynamoDB store to the official client at some point - either now or after stabilizing it a bit more. Let me know what you think.
@mlafeldt I think we probably do not want to package it in the same module as the official client, just because that would bring in dependencies that people who don't use DynamoDB won't want. If we were to do it all over again, we probably would have put the Redis implementation in its own project as well for the same reason.
Whoops, sorry - the above comment was in reference to the original proposal, adding DynamoDB to the Node client. For the Go client, we could certainly include your code in a subpackage just as we've done for Redis so it wouldn't have to bring in unwanted dependencies if not used.
This got pushed back much later than we had expected, but we are now very close to ready: the Go and Node SDKs will be the first two to get DynamoDB support, and we expect to release those within a week. (Mathias, the Go version is based on your code—many thanks for that—but we did end up needing to make a few changes, so once that is released, you will probably want to either update your project or drop it.)
We will also be updating the LD relay proxy so that it can populate a DynamoDB table the same way you can currently use it to populate Redis. However, as Mathias noted in the examples in his repo, the relay may be a bit heavy-weight as a solution in Lambda, so it is also possible to just run a very minimal app that starts up a LaunchDarkly client as needed to populate the table. We're putting together some additional documentation to cover various serverless scenarios.
OK! Finally, we have released this for both Node and Go.
For Node, you will need to upgrade the SDK to v5.6.1 and add the package ldclient-node-dynamodb-store. For Go, you will need to upgrade the SDK to v5.5.1, where you will find lddynamodb as a subpackage.
API and usage details are in the code, and also on this new reference guide page.
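For orientation, a minimal sketch of wiring the Go SDK to the DynamoDB store could look roughly like the following. The import path, constructor name, and config fields shown here are assumptions that vary across SDK versions, so treat this as a sketch and check the reference guide for the exact API.

```go
package main

import (
	"log"
	"time"

	ld "gopkg.in/launchdarkly/go-client.v4"         // import path varies by SDK version (assumption)
	"gopkg.in/launchdarkly/go-client.v4/lddynamodb" // DynamoDB feature store subpackage
)

func main() {
	// Constructor name and options are assumptions; see the reference guide.
	store, err := lddynamodb.NewDynamoDBFeatureStore("my-ld-flags")
	if err != nil {
		log.Fatal(err)
	}

	config := ld.DefaultConfig
	config.FeatureStore = store // flag data is persisted to and read from DynamoDB

	client, err := ld.MakeCustomClient("YOUR_SDK_KEY", config, 10*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Flag evaluations now read through the DynamoDB-backed store.
	show, _ := client.BoolVariation("my-flag-key", ld.NewUser("user-123"), false)
	log.Println("my-flag-key =", show)
}
```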
Note that the LD relay proxy does not yet support DynamoDB, although we are adding that very soon. So if you want to create a setup where one process connects to LaunchDarkly and populates the database with flags, and other processes get the flags only from the database and do not connect to LaunchDarkly—equivalent to the "daemon mode" configuration that the relay proxy can provide—for now your best bet would be to do something like what @mlafeldt describes in the repo that was linked earlier: run a very minimal LD-enabled application, configured to use the same database table, that just starts an SDK client.
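A sketch of what that minimal "populator" process could look like, with the same caveats as the example above about exact names; the daemon-mode option mentioned for the Lambda side is likewise an assumption to verify in the docs.

```go
// Populator side: connects to LaunchDarkly and writes flag data into the
// shared DynamoDB table. The Lambda functions would use the same table but
// configure the SDK in daemon mode so they read flags only from DynamoDB
// and never open their own connection to LaunchDarkly.
package main

import (
	"log"
	"time"

	ld "gopkg.in/launchdarkly/go-client.v4"         // import path varies by SDK version (assumption)
	"gopkg.in/launchdarkly/go-client.v4/lddynamodb"
)

func main() {
	store, err := lddynamodb.NewDynamoDBFeatureStore("my-ld-flags")
	if err != nil {
		log.Fatal(err)
	}

	config := ld.DefaultConfig
	config.FeatureStore = store // the SDK writes every flag update it receives into the store

	// Initializing the client performs a full sync of flag data into DynamoDB.
	client, err := ld.MakeCustomClient("YOUR_SDK_KEY", config, 10*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Keep running so streaming updates keep flowing into the table.
	select {}
}
```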
Is the performance penalty mentioned here still relevant after https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/ ?
cc @eli-darkly
@bdwain If you mean "does AWS's change mean that using Redis from a Lambda will perform better than it used to"... it's possible; it does sound like the intention was to support such use cases better. But we haven't tried it. In general, we do not have much data on the various ways the SDKs can be used in Lambda or similar frameworks; we have just tried to provide what's necessary to make them usable, and we've relied on customers to say how well that is working for them.
Got it, thanks! I will look into it.
The blog post Go Serverless, not Flagless: Implementing Feature Flags in Serverless Environments gives a great overview of implementing feature flags in a serverless architecture. However, the solution proposed doesn't take into account the complexities of using Redis with Lambda, nor the performance hit it incurs. When you spin up a Redis ElastiCache instance, it must be provisioned inside a VPC, which means the Lambda function must attach itself to the VPC to access it.
From the AWS docs:
> Associating a lambda with an ENI has a very high cold start performance penalty, on the order of tens of seconds.
This all combines to pull us kicking and screaming out of the serverless world. To counter this, I propose adding a DynamoDB-backed Feature Store. While certainly not as fast as Redis, it is a serverless service, and has some of the same features as Redis.