james-s-tayler opened 5 years ago
https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
Bonus points for being able to fake AWS.
The following is useful for working around quirks in localstack.
https://ignas.me/tech/localstack-s3-setup-net-core-development/
It turns out that to get around the "No Route To Host" problem, it is necessary to supply AWS_ACCESS_KEY_ID and AWS_SECRET_KEY in the environment of each docker container that interacts with AWS via the SDK.
Localstack doesn't actually care what the values are, just that they are present, so you can put foo for both of them and it will work.
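In docker-compose terms that's just a couple of environment entries per service. Roughly what it looks like as a docker run, as a sketch (the image name is just a placeholder):

```bash
# LocalStack ignores the credential values, but the AWS SDK won't make
# calls without them, so dummy values like "foo" are fine.
docker run -d \
  -e AWS_ACCESS_KEY_ID=foo \
  -e AWS_SECRET_KEY=foo \
  my-dotnet-service:latest
```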
I've currently got this working in a state where localstack is spun up via docker-compose and configured via a bash script that provisions an SNS topic and an SQS queue subscribed to that topic. If you call /api/payment and post a payment to an account not belonging to the bank, it publishes a message to the SNS topic, which then puts a message on the SQS queue for processing.
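For reference, a sketch of that provisioning script (topic/queue names and the edge endpoint are from my setup; adjust to taste):

```bash
#!/usr/bin/env bash
# Provision an SNS topic and an SQS queue, then subscribe the queue
# to the topic. Endpoint assumes the edge port.
ENDPOINT=http://localhost:4566

TOPIC_ARN=$(aws --endpoint-url=$ENDPOINT sns create-topic \
  --name outbound-transactions --output text --query TopicArn)

QUEUE_URL=$(aws --endpoint-url=$ENDPOINT sqs create-queue \
  --queue-name outbound-transactions --output text --query QueueUrl)

QUEUE_ARN=$(aws --endpoint-url=$ENDPOINT sqs get-queue-attributes \
  --queue-url "$QUEUE_URL" --attribute-names QueueArn \
  --output text --query Attributes.QueueArn)

aws --endpoint-url=$ENDPOINT sns subscribe \
  --topic-arn "$TOPIC_ARN" --protocol sqs \
  --notification-endpoint "$QUEUE_ARN"
```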
Processing should run as a scheduled lambda that wakes up, pulls everything off the queue, and batches it up. It should invoke an endpoint on FECS that returns a presigned URL for the outbound transactions bucket, which is where it dumps the payload for the batch of outbound transactions.
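Expressed as CLI calls, the batch step would look something like this sketch (the real thing is a scheduled lambda; the queue name and FECS URL here are hypothetical):

```bash
#!/usr/bin/env bash
ENDPOINT=http://localhost:4566
QUEUE_URL=$(aws --endpoint-url=$ENDPOINT sqs get-queue-url \
  --queue-name outbound-transactions --output text --query QueueUrl)
BATCH_FILE=$(mktemp)

# Drain the queue in batches of 10 (the SQS maximum per receive).
while true; do
  MSGS=$(aws --endpoint-url=$ENDPOINT sqs receive-message \
    --queue-url "$QUEUE_URL" --max-number-of-messages 10 --output json)
  [ -z "$MSGS" ] && break
  echo "$MSGS" | jq -r '.Messages[].Body' >> "$BATCH_FILE"
  echo "$MSGS" | jq -r '.Messages[].ReceiptHandle' | while read -r HANDLE; do
    aws --endpoint-url=$ENDPOINT sqs delete-message \
      --queue-url "$QUEUE_URL" --receipt-handle "$HANDLE"
  done
done

# Ask FECS for a presigned URL for the outbound transactions bucket
# (endpoint path is made up), then dump the batch payload there.
PRESIGNED_URL=$(curl -s http://fecs/api/outbound-transactions/upload-url)
curl -X PUT --data-binary "@$BATCH_FILE" "$PRESIGNED_URL"
```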
While this is somewhat working, there is an unresolved issue with docker networking and localstack. The message published to the SNS topic gets picked up by SQS, but the code waiting for the messageId back from SNS has its connection closed prematurely and throws an exception, even though the message was actually published successfully. I suspect networking issues with Docker and/or localstack.
Parking this for now until the UI is complete, as that is higher priority and this was just a fun side-project.
Really cool article series on how ACH works
https://engineering.gusto.com/how-ach-works-a-developer-perspective-part-1/
Picking this back up to play with some other AWS services. Mostly interested in playing with some CloudWatch/X-Ray and DynamoDB functionality.
Looking at the startup logs I have a sneaking suspicion the networking error I was seeing prior might be related to a .pem file that localstack is getting 'permission denied' on.
Learned something interesting about persisting data between restarts of localstack containers. Persistence is only available for a limited set of services like S3, DynamoDB, Kinesis and ElasticSearch (I think). However, you can map a volume containing bash scripts to /docker-entrypoint-initaws.d and localstack will run all the scripts there in alphabetical order after all its services have been brought up.
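The volume mapping itself is just something like this (host path and ports are from my setup):

```bash
# Mount a directory of bash scripts; LocalStack runs them in
# alphabetical order once all services are up.
docker run -d -p 4566:4566 \
  -v "$(pwd)/localstack-init:/docker-entrypoint-initaws.d" \
  localstack/localstack
```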
To get it to work I had to have the first script detect whether there is a config/credentials file in /root/.aws and, if not, use a heredoc to create a dummy one.
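That first script is roughly this (the file name is made up; the leading 00 just makes it sort first):

```bash
#!/usr/bin/env bash
# 00-dummy-credentials.sh -- create dummy credentials if none exist,
# so later provisioning scripts can call the AWS CLI without failing
# credential resolution.
if [ ! -f /root/.aws/credentials ] && [ ! -f /root/.aws/config ]; then
  mkdir -p /root/.aws
  cat > /root/.aws/credentials <<'EOF'
[default]
aws_access_key_id = foo
aws_secret_access_key = foo
EOF
fi
```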
I also signed up for the free trial of LocalStack Pro to evaluate it. The dashboard is way nicer, though sadly it's missing CloudWatch and CloudWatch Logs, which is what I was hoping it had. Getting the cloud dashboard connected took a bit of work. You have to run the 'edge' service (SERVICES=edge,s3,lambda etc.), pass through your LOCALSTACK_API_KEY, and expose the edge service on a port/URL where the cloud dashboard can see it. By default it wants you to expose port 443 -> 443, but there are configuration options both in localstack itself and in the cloud dashboard that let you set things up in a way that works for your environment. Took a little while to figure out though.
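The relevant bits of my setup look roughly like this sketch (expressed as a docker run; the exact service list and port mapping will depend on your environment):

```bash
# Run LocalStack with the edge service plus whatever else you need,
# passing the Pro API key through from the host environment.
docker run -d \
  -e SERVICES=edge,s3,sns,sqs,lambda \
  -e LOCALSTACK_API_KEY="$LOCALSTACK_API_KEY" \
  -p 443:443 \
  localstack/localstack
```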
One thing I have noticed is that once you have the connectivity established, it sometimes doesn't pick up your resources. In those cases I've found that if you hit https://localhost/health (or wherever LocalStack's health endpoint is exposed) it seems to pick them up after that.
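i.e. something like:

```bash
# Nudge the dashboard into re-discovering resources. -k is because the
# local cert is self-signed in my setup; path/port depend on how the
# edge service is exposed.
curl -k https://localhost/health
```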
I'm also having trouble with the SNS subscriptions. I've got a script that sets up an SNS topic and an SQS queue and subscribes the queue to the topic. It all works: if I query what queues, topics and subscriptions exist via the CLI it all comes out correct, and when I run a code path that puts a message on the topic it gets sent to the queue. But in the cloud dashboard the topic always says it has zero subscribers. The messages do get picked up by the cloud dashboard though, which is quite neat. The UI is a little limited and would be a pain to work with at scale, but assuming you can freely do things like blow away all the messages on the queue, it's easy enough to work with locally.
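For the record, these are the CLI checks that all come back correct, even while the dashboard shows zero subscribers (endpoint assumes the edge port):

```bash
ENDPOINT=http://localhost:4566
aws --endpoint-url=$ENDPOINT sns list-topics
aws --endpoint-url=$ENDPOINT sqs list-queues
aws --endpoint-url=$ENDPOINT sns list-subscriptions
```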
I think I'll merge this to master as it is at the moment and park it, as I'm not actually working on getting outbound exports fully working right now, just wanted to play with some AWS stuff. Will pick up development on the proper ticket.
This is just for fun and can be run totally optionally (off by default). Doing it to get some practice with AWS and because it's badass.
Have a way to send outbound transactions to a fictional reserve bank.
Naive implementation: Dump outbound transactions to the configured S3 bucket as soon as the transaction happens.
Better implementation: Raise a domain event when a transaction happens and put the message in an outbound processing queue. Have a lambda perform batch processing every X ticks (configurable) to simulate the hourly interbank settlement process. The lambda simply prepares a batch file of all the outbound transactions currently in the queue and uploads it to the reserve bank's S3 bucket, at which point the reserve bank takes over to perform settlement.
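The domain-event step of the better implementation boils down to a single SNS publish; a sketch via the CLI (topic name and message shape are made up, and the app does this through the SDK rather than the CLI):

```bash
ENDPOINT=http://localhost:4566
# create-topic is idempotent, so this is a cheap way to get the ARN.
TOPIC_ARN=$(aws --endpoint-url=$ENDPOINT sns create-topic \
  --name outbound-transactions --output text --query TopicArn)

# Publish the domain event; the SQS subscription fans it out to the
# outbound processing queue for the lambda to batch up later.
aws --endpoint-url=$ENDPOINT sns publish \
  --topic-arn "$TOPIC_ARN" \
  --message '{"type":"OutboundTransactionRaised","amount":100.00}'
```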