markwest1972 / smart-security-camera

A Pi Zero and Motion based webcamera that forwards images to Amazon Web Services for Image Processing
GNU General Public License v3.0
112 stars 33 forks

Rekognition image assessment failing #2

Closed hylton1995 closed 7 years ago

hylton1995 commented 7 years ago

Hi there

Firstly, thank you for this project! I have been trying to get this working for a few weeks now, and thought maybe you could suggest what I am doing wrong here. Everything works fine up to the end of the first step-engine function (rekognition-image-assessment). According to the step engine the function completed (it is green), and then it directs to the nodemailer-error-handler (which is red). The output of the rekognition-image-assessment function lists: `{"Error":"Lambda.Unknown","Cause":"The cause could not be determined because Lambda did not return an error type."}`. I have added a few console.log statements to the rekognition-image-assessment code and tested: it gets the params successfully, and I can output the request variable after the if/else block. However, the console.log statements inside the if/else block never log, and I am not 100% sure how to read the request variable output. Is this something you came across at all? Apologies for this request, but this is my first attempt with AWS, and my last programming experience was VB.NET many years ago.

Thanks!

dimitrystd commented 7 years ago

Try increasing the Lambda timeout from 3s to 5s or even 10s. I have seen something like this before, and found in the logs that the Lambda was terminated by a timeout. When I increased the timeout, the actual duration turned out to be about 4.2s.

markwest1972 commented 7 years ago

Have you tested the rekognition-image-assessment in isolation, or have you only tested it as part of a step function call?

Try setting the timeout to 10 seconds, as @dimitrystd suggests. You may not be giving Rekognition enough time to finish processing. Let us know if that helps!
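For reference, the same change can be made from the AWS CLI (a configuration sketch, assuming the CLI is installed and configured; the function name below is the one used in this project):

```shell
# Raise the Lambda timeout to 10 seconds
aws lambda update-function-configuration \
  --function-name rekognition-image-assessment \
  --timeout 10
```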

hylton1995 commented 7 years ago

I had only tested it as part of the step function call. I can't believe it was that simple! I increased the timeout to 10 seconds (I had already increased it to 4s before), and the step engine now runs successfully through to the nodemailer-send-notification function. However, this fails with:

```
"errorMessage": "Error in [nodemailer-send-notification].
  Function input [{
    "Alert": "true",
    "Labels": [
      { "Name": "Human", "Confidence": 99.31401824951172 },
      { "Name": "People", "Confidence": 99.31543731689453 },
      { "Name": "Person", "Confidence": 99.31543731689453 },
      { "Name": "Man", "Confidence": 85.11573028564453 },
      { "Name": "Portrait", "Confidence": 63.546775817871094 },
      { "Name": "Selfie", "Confidence": 63.546775817871094 },
      { "Name": "Female", "Confidence": 52.66830825805664 }
    ],
    "bucket": "motion-storage",
    "key": "upload/How to enrage a geek.jpg"
  }].
  Error [Error: Invalid status code 403]."
```

I will re-check the code and re-build the zip file, test and let you know. Thank you!
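(As an aside for anyone reading along: the payload above is the Rekognition label list that the step function passes between states. A minimal sketch of filtering such a payload by confidence; the threshold and helper name are illustrative, not the project's actual code:)

```javascript
// Sketch: keep only high-confidence labels from a Rekognition-style payload.
// The 90% threshold is an arbitrary example value.
const payload = {
  Alert: "true",
  Labels: [
    { Name: "Human", Confidence: 99.31401824951172 },
    { Name: "Portrait", Confidence: 63.546775817871094 }
  ]
};

function highConfidenceLabels(labels, threshold) {
  return labels
    .filter(label => label.Confidence >= threshold)
    .map(label => label.Name);
}

console.log(highConfidenceLabels(payload.Labels, 90)); // [ 'Human' ]
```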

markwest1972 commented 7 years ago

@hylton1995 great news! @dimitrystd thanks for the good suggestion! I'll add a note to the instructions so that the next person doesn't have to struggle with the same issue :)

dimitrystd commented 7 years ago

> Error [Error: Invalid status code 403]."

@markwest1972 I had this error too. You covered this in the description, but may I ask you to make it clearer?

> In the preferences for your s3 Bucket, grant 'List' permission to any authenticated AWS user.

Can you show exactly how this policy should look? With the new AWS console you have to provide a policy. For example, mine looks like this: [screenshot: S3 Management Console bucket policy editor]

But I think it can be more strict (e.g. allow access only for a particular Lambda, or a user/role from my AWS account):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:lambda:eu-west-1:923368673111:function:nodemailer-send-notification",
                    "arn:aws:lambda:eu-west-1:923368673111:*"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::xxxxxxxxxxx",
                "arn:aws:s3:::xxxxxxxxxxx/*"
            ]
        }
    ]
}
```

But the example above doesn't work. Let me know if you find the root cause of why special permissions are required.

P.S. The List permissions were also not enough; I had to add Get.

markwest1972 commented 7 years ago

@dimitrystd I'll definitely have a look at this. I'm flat out with work and family at the moment, so it may take a couple of days.

markwest1972 commented 7 years ago

@dimitrystd in the meantime, here are the permissions as I have defined them. Note that I haven't defined a bucket policy. @hylton1995 this may resolve your issue.

[screenshot, 2017-05-09: S3 permissions as configured in the console]

markwest1972 commented 7 years ago

Have updated the instructions for setting the S3 permissions.

hylton1995 commented 7 years ago

@markwest1972 apologies for the late reply, it's been a hectic week, and I don't seem to be getting notifications of these post updates! I have re-built the node_modules download (did it on the Pi itself), added a few console.log statements to the code to try to get more info, and looked at quite a few online posts. I'm still getting that error (Invalid status code 403), but I can't find any error code 403 that matches that description. Below are the error outputs, as well as the transport setup and mail options variable outputs:

Error:

```
2017-05-13T10:18:10.927Z 75aa9dcd-37c5-11e7-84b7-29c9f9aaa839 Error: Invalid status code 403
    at ClientRequest. (/var/task/node_modules/nodemailer-fetch/lib/fetch.js:177:36)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:473:21)
    at HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)
    at TLSSocket.socketOnData (_http_client.js:362:20)
    at emitOne (events.js:96:13)
    at TLSSocket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at TLSSocket.Readable.push (_stream_readable.js:134:10)
```

Transport setup:

```
Nodemailer {
  domain: null,
  _events: {},
  _eventsCount: 0,
  _maxListeners: undefined,
  _options: {},
  _defaults: {},
  _plugins: { compile: [], stream: [] },
  transporter: SESTransport {
    options: { ses: [Object] },
    ses: Service { config: [Object], isGlobalEndpoint: false, endpoint: [Object], _clientId: 1 },
    rateLimit: false,
    queue: [],
    sending: false,
    startTime: 0,
    count: 0,
    name: 'SES',
    version: '1.5.1'
  },
  logger: { info: [Function: info], debug: [Function: debug], error: [Function: error] }
}
```

Mail Options:

```
{
  from: 'hylton@example.com',
  to: 'hylton@domain.com',
  subject: 'Alarm Event detected!',
  text: '[{"Name":"Human","Confidence":99.31401824951172},{"Name":"People","Confidence":99.31543731689453},{"Name":"Person","Confidence":99.31543731689453},{"Name":"Man","Confidence":85.11576843261719},{"Name":"Portrait","Confidence":63.54676055908203},{"Name":"Selfie","Confidence":63.54676055908203},{"Name":"Female","Confidence":52.668296813964844}]',
  html: '[ { "Name": "Human", "Confidence": 99.31401824951172 }, { "Name": "People", "Confidence": 99.31543731689453 }, { "Name": "Person", "Confidence": 99.31543731689453 }, { "Name": "Man", "Confidence": 85.11576843261719 }, { "Name": "Portrait", "Confidence": 63.54676055908203 }, { "Name": "Selfie", "Confidence": 63.54676055908203 }, { "Name": "Female", "Confidence": 52.668296813964844 } ]',
  attachments: [ {
    filename: 'How to enrage a geek.jpg',
    path: 'https://s3-eu-west-1.amazonaws.com/motion-storage/upload/How to enrage a geek.jpg'
  } ]
}
```
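(One thing worth checking in a payload like the one above: object keys containing spaces need URL-encoding when they are turned into an https attachment path. This was not the cause of the 403 in this thread, which turned out to be bucket permissions, but a minimal sketch, using the bucket and key values quoted above:)

```javascript
// Sketch: build an S3-style URL for an object key containing spaces.
// encodeURI percent-encodes the spaces but leaves '/' and ':' intact.
const bucket = 'motion-storage';
const key = 'upload/How to enrage a geek.jpg';
const url = encodeURI(`https://s3-eu-west-1.amazonaws.com/${bucket}/${key}`);
console.log(url);
// https://s3-eu-west-1.amazonaws.com/motion-storage/upload/How%20to%20enrage%20a%20geek.jpg
```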

I have tried with these permissions as well, with the same result: [screenshot: S3 permissions]

Does either of the sending or receiving mail domains need to be verified in SES? Thank you!

hylton1995 commented 7 years ago

Quick update: as @dimitrystd suggested, I added a policy to the S3 bucket, see below. After upping the timeout on the s3-image-archive function, everything completes successfully and I get the mail.

This policy obviously isn't very secure, so I assume I would need to change the action from "*" to the individual Gets and Lists?

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::Bucket-name",
                "arn:aws:s3:::Bucket-name/*"
            ]
        }
    ]
}
```

Thanks!

markwest1972 commented 7 years ago

@hylton1995 Great to hear that everything is now working as expected!

I'm going to try and find out why you guys need to set a policy when I don't have to. It could be that I've tweaked a setting somewhere else and forgotten to document it.

As for making the access to S3 more secure, I agree that this should be done. I'll try and look at it this week. I'm heading to a couple of conferences so should have some available time in the evening.

markwest1972 commented 7 years ago

Gonna close this issue now as the root cause is fixed.

markwest1972 commented 7 years ago

@hylton1995, @dimitrystd : Regarding the S3 Bucket Policy - this should be set to the ARN for the IAM Role associated with the Lambda Function. I've changed README.MD to reflect this.
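For anyone landing here later, a bucket policy along those lines might look like the sketch below. The account ID, role name, and bucket name are placeholders; the principal is the Lambda function's execution role, not the function ARN (S3 bucket policies accept IAM principals, which is why the earlier function-ARN version failed), and the actions are narrowed to the Get/List operations this project needs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/lambda-execution-role"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-name",
                "arn:aws:s3:::your-bucket-name/*"
            ]
        }
    ]
}
```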