Closed: andrewkrug closed this issue 7 years ago
I should also note here that if the Lambda functions are writing directly to S3, we'll need a custom Lambda execution role that grants the function s3:PutObject on the bucket "threatresponse.showdown" and nothing more.
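For reference, a minimal sketch of attaching a policy like that with boto3. The role and policy names here are hypothetical, the bucket name is the one above, and the role's trust policy and basic logging permissions are assumed to be handled elsewhere:

```python
import json

import boto3

# Allow s3:PutObject on the results bucket and nothing else.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::threatresponse.showdown/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="lambda-inspector-exec",  # hypothetical execution role name
    PolicyName="put-results-only",     # hypothetical policy name
    PolicyDocument=json.dumps(POLICY),
)
```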
A fifth option: we could also consume these from CloudWatch Events.
S3 seems the simplest?
@jeffbryner that's the direction I'm going. I'll expand the scope of the CI/CD workflow this evening to accommodate it.
You may have noticed I refactored the POST call out into a function where I'm calling open on the request. From there we can catch the exception and fall back to S3 if anything but a 200 status code is returned. https://github.com/ThreatResponse/python-lambda-inspector/blob/master/main.py#L134
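For anyone skimming the thread, the fallback is roughly this shape. The endpoint URL, bucket constant, and function name here are placeholders, not the repo's actual identifiers; see the linked main.py for the real implementation:

```python
import json
import urllib2  # the profiler targets Python 2

import boto3

RESULTS_URL = "https://example.invalid/results"  # placeholder endpoint
RESULTS_BUCKET = "threatresponse.showdown"

def ship_results(payload, key):
    body = json.dumps(payload)
    try:
        req = urllib2.Request(RESULTS_URL, body,
                              {"Content-Type": "application/json"})
        if urllib2.urlopen(req).getcode() == 200:
            return "posted"
    except Exception:
        # No route to the internet, timeouts, and non-2xx responses
        # (raised as HTTPError) all land here.
        pass
    # Fall back to a direct PutObject; requires s3:PutObject on the bucket.
    boto3.client("s3").put_object(Bucket=RESULTS_BUCKET, Key=key, Body=body)
    return "uploaded"
```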
Agreed with S3 - no need to add another component.
We might encounter similar problems with other cloud environments, but we can deal with those when we come to them (or maybe use their equivalents).
Closed in a81acf6412104bb0769891a033a1847d9e3a37d3
Turns out our Lambda functions don't get any access to the internet without a cost-prohibitive NAT gateway. This means that Lambda functions running inside the ThreatResponse AWS account will need to POST their results in a different way than runtimes out in the wild.
Potential options are:
So I'm sure you've gathered that option 1 is preferred. It's just a matter of writing a little logic that only does the S3 upload when running from within a Lambda function (see the sketch below). We'll still need the urllib2.Request method in the Python profiler that @jeffbryner wrote. Oddly, the blocked POST doesn't actually cause the function to fail; simply nothing ever happens...
Option 4 isn't a bad choice either, but it has implications if/when we want to go multi-region, and it puts heavier requirements on the CI/CD pipeline to attach things.
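A minimal sketch of the "are we in Lambda?" check for option 1, relying on the reserved environment variables the Lambda runtime sets:

```python
import os

def running_in_lambda():
    # The Lambda runtime sets AWS_LAMBDA_FUNCTION_NAME (among other
    # reserved variables); outside Lambda it's absent.
    return "AWS_LAMBDA_FUNCTION_NAME" in os.environ
```

Something like this could gate the upload path: Lambdas in our account go straight to S3, while runtimes out in the wild keep the plain urllib2 POST.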