iann0036 / former2

Generate CloudFormation / Terraform / Troposphere templates from your existing AWS resources.
https://former2.com
MIT License

UnhandledPromiseRejection #215

Open orenbenya opened 2 years ago

orenbenya commented 2 years ago

This error occurred on only 2 of our accounts (we have many). The version is updated to 0.2.63. It happens on the EC2 service, but that may just be because EC2 is the first service on our list.

(0/1 services completed)
node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was{
  code: 'ERR_UNHANDLED_REJECTION'
}
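
For context on the crash itself: recent Node.js versions terminate the process when a promise rejects with no handler attached, which is exactly the ERR_UNHANDLED_REJECTION shown above. A minimal sketch of the difference between the crashing path and a handled path, using a hypothetical aws-sdk v2 call (not former2's actual code):

```javascript
// Minimal sketch of the ERR_UNHANDLED_REJECTION failure mode.
// Assumes aws-sdk v2 and a hypothetical region; not former2's actual code.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Crashing path: if this promise rejects (e.g. an API error) and nothing
// awaits or .catch()es it, Node raises an unhandled rejection and exits.
function fireAndForget() {
    ec2.describeVolumes({ VolumeIds: ['vol-123abc'] }).promise();
}

// Handled path: the same call wrapped so a rejection is caught and logged
// instead of killing the whole run.
async function handled() {
    try {
        return await ec2.describeVolumes({ VolumeIds: ['vol-123abc'] }).promise();
    } catch (err) {
        console.warn(`describeVolumes failed: ${err.code || err.message}`);
        return null;
    }
}
```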
iann0036 commented 2 years ago

Hi @orenbenya,

Thanks for raising. Is there any additional information if you run the same with the --debug flag? Does this occur if selecting just the EC2 service with --services or with other services?

orenbenya commented 2 years ago

One thing that might be important: I'm running it inside a Python script using os.system('former2 --options')

- When running with the --debug option (EC2 is first on the list), I got a really long list of:

'Too many requests for EC2.describe, sleeping for SomeTime ms'
'Too many requests for ELBv2.describe, sleeping for SomeTime ms'

node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was{
  code: 'ERR_UNHANDLED_REJECTION'
}

- When running with this list, where EC2 is last: ["Lambda","ECR","ECS","EKS","RDS","DynamoDB","VPC","Route53","EC2"], it didn't happen for any of the other services, only for EC2.

- When running without any service filter:


(The truncated 'exi' in the error below is in the source output, not a pasting issue.)

Error calling EC2.describeVolumes. The volume 'vol-123abc' does not exi.  
Trace: InvalidVolume.NotFound: The volume 'vol-123abc' does not exist.
    at Request.extractError (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/services/ec2.js:50:35)
    at Request.callListeners (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /usr/lib/node_modules/former2/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'InvalidVolume.NotFound',
  time: 2022-02-27T18:08:34.513Z,
  requestId: '567asd-98as-09as-55nn-2309ssaxc',
  statusCode: 400,
  retryable: false,
  retryDelay: 48.39103448063582
}
    at f2trace (/usr/lib/node_modules/former2/main.js:135:42)
    at Response.eval (eval at <anonymous> (/usr/lib/node_modules/former2/main.js:119:1), <anonymous>:266:25)
    at Request.<anonymous> (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:367:18)
    at Request.callListeners (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /usr/lib/node_modules/former2/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/usr/lib/node_modules/former2/node_modules/aws-sdk/lib/request.js:38:9)
node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was{
  code: 'ERR_UNHANDLED_REJECTION'
}
iann0036 commented 2 years ago

Hey @orenbenya,

I've added some extra error handling around EC2.describeVolumes specifically to try to avoid this issue, available in 0.2.64. Give it a try to see if it helps.

Is there anything special about the volume ID in your error log? Perhaps it is shared in via RAM (Resource Access Manager) or similar?
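
As a rough illustration of what "extra error handling around EC2.describeVolumes" could look like: the specific InvalidVolume.NotFound code from the trace above can be caught and skipped while anything else is still surfaced. A sketch under assumptions, not the actual 0.2.64 change:

```javascript
// Sketch only: tolerate a stale/deleted volume ID without crashing,
// but keep rethrowing anything unexpected. Not the actual former2 change.
async function describeVolumeSafe(ec2, volumeId) {
    try {
        return await ec2.describeVolumes({ VolumeIds: [volumeId] }).promise();
    } catch (err) {
        if (err.code === 'InvalidVolume.NotFound') {
            console.warn(`Skipping missing volume ${volumeId}`);
            return null;
        }
        throw err; // anything else is still a real problem
    }
}
```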

orenbenya commented 2 years ago

Hi, I've run the process with the new version and the issue still persists. I'm still getting the error below, although it now doesn't say which volume it fails on. (On the other hand, I downgraded back to 0.2.63 and I also can't see the volume name as before, after running all the same kinds of test options.) There's no issue with the volumes as far as I can see from my end, nothing fancy like you asked about.

node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^
[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was{
  code: 'ERR_UNHANDLED_REJECTION'
}
iann0036 commented 2 years ago

Looks like I might be going down the wrong path.

Could you use the latest version, 0.2.65, and include the --debug option in your call options? This will produce a pretty large output with call arguments dumped to the console (use caution with the logs), but it should tell us what is happening immediately before the crash.
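
The debug output quoted in the next comment follows a "<epoch ms>: Service.method - <params JSON>" shape. A rough sketch of a wrapper that would produce lines like that (hypothetical, not former2's actual implementation):

```javascript
// Hypothetical sketch of per-call debug logging in the
// "<epoch ms>: Service.method - {...params}" shape seen in the output below.
async function loggedCall(serviceName, client, method, params, debug) {
    if (debug) {
        console.log(`${Date.now()}: ${serviceName}.${method} - ${JSON.stringify(params)}`);
    }
    return client[method](params).promise();
}

// Example usage (hypothetical ARN):
// await loggedCall('ELBv2', new AWS.ELBv2(), 'describeRules',
//     { ListenerArn: 'arn:aws:elasticloadbalancing:...' }, true);
```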

orenbenya commented 2 years ago

Updated to 0.2.65. When running only on EC2, it now fails at a different location (it used to be on an EBS volume); now it's ELBv2.describeRules:

1646219383449: ELBv2.describeRules - {"ListenerArn":"arn:aws:elasticloadbalancing:REGION:ACCOUNTID:listener/net/123abc/456def}
node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not h{
  code: 'ERR_UNHANDLED_REJECTION'
}

When running on all services, it fails at ELBv2.describeListeners:


1646219815748: ELBv2.describeListeners - {"LoadBalancerArn":"arn:aws:elasticloadbalancing:REGION:ACCOUNTID:loadbalancer/app/789jhi}
node:internal/process/promises:265
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not h{
  code: 'ERR_UNHANDLED_REJECTION'
}
iann0036 commented 2 years ago

Hey @orenbenya,

Thanks for that. I've added extra protection against all EC2 calls, so we should hopefully have it now. Could you re-test against 0.2.66 and let me know if that helps?
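
One way blanket protection around every call could be structured is a catch that logs the failure and lets the run continue; unlike the per-error-code handling sketched earlier, this swallows any failure for that call. A sketch under assumptions (generic aws-sdk v2 clients), not the actual former2 code:

```javascript
// Sketch of blanket protection: one failed describe call is logged and
// skipped instead of becoming an unhandled rejection that stops the run.
// Not the actual former2 implementation.
async function protectedCall(serviceName, client, method, params) {
    try {
        return await client[method](params).promise();
    } catch (err) {
        console.warn(`Error calling ${serviceName}.${method}: ${err.message}`);
        return null; // callers must tolerate a missing result
    }
}
```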

orenbenya commented 2 years ago

I have some good news and some bad news. When running for all services, it didn't fail, but this is the error I got during the run.

1646338173194: EC2.describeSecurityGroups - {} Trace: TypeError: Cannot read properties of undefined (reading 'S3DestinationDescription')} at eval (eval at (/usr/lib/node_modules/former2/main.js:122:5), :1181:62)
(Not the full error, it was extremely long)

When running per service, it didn't fail and I haven't seen any errors.

Should we call it a win, or are the errors being suppressed?

iann0036 commented 2 years ago

That's great. I'd say the original error we got is now fixed (via suppression of a specific edge case that hopefully doesn't contain useful information).

S3DestinationDescription seems to exist only in relation to the Kinesis.describeDeliveryStream method, so it is unrelated to EC2.describeSecurityGroups and only shows up there because multiple calls happen in the background at once. I did spot a small error in the handling of Kinesis streams with Elasticsearch destinations, so I've pushed a fix for that in 0.2.67.
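
For illustration, the TypeError above is the classic shape of reading a property from a nested object that is undefined. A defensive read over a delivery-stream destination, sketched with an assumed destination shape (not the actual 0.2.67 fix):

```javascript
// Sketch: delivery stream destinations with an Elasticsearch target may not
// carry an S3DestinationDescription directly, so guard the access instead of
// assuming it exists. Assumed object shape; not the actual 0.2.67 change.
function s3BucketFromDestination(destination) {
    const s3 = destination.S3DestinationDescription
        || (destination.ElasticsearchDestinationDescription
            && destination.ElasticsearchDestinationDescription.S3DestinationDescription);
    return s3 ? s3.BucketARN : null;
}
```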