initstring / cloud_enum

Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

Fixed issues with running out of resources and connection timing out #27

Closed · nalauder closed this 4 years ago

nalauder commented 4 years ago

Had issues with the program quitting when it encountered a Timeout exception; it now handles this more gracefully.

It was also running out of resources due to thread pools being created but never closed.
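
As a rough illustration of the two fixes described above (hypothetical helper names, assuming dnspython and a `multiprocessing.dummy` thread pool; not cloud_enum's actual code), the idea is to catch the per-lookup timeout so one slow query doesn't kill the run, and to close/join each pool so repeated batches don't leak worker threads:

```python
# Illustrative sketch only (hypothetical helpers, not cloud_enum's real code),
# assuming dnspython >= 2.0 and a multiprocessing.dummy thread pool.
import dns.resolver
import dns.exception
from multiprocessing.dummy import Pool as ThreadPool

def check_dns(name):
    try:
        dns.resolver.resolve(name, 'A')
        return name
    except dns.exception.Timeout:
        # Handle the timeout gracefully instead of letting it crash the run.
        return None
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None

def enumerate_batch(names, threads=5):
    pool = ThreadPool(threads)
    try:
        results = pool.map(check_dns, names)
    finally:
        # Explicitly release the pool; repeated batches without close()/join()
        # can exhaust resources on constrained hosts.
        pool.close()
        pool.join()
    return [r for r in results if r]
```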

initstring commented 4 years ago

Thank you so much for contributing! This is great, I didn't know about the need to close the thread pools.

I do have a question on this. I wonder if silently passing on a DNS timeout might cause someone to be unaware that all their queries are failing.

See this recent ticket, where it seems the user had DNS resolution issues in general: https://github.com/initstring/cloud_enum/issues/23

Do you think it might be better to print an alert in the UI when a timeout occurs? I don't run into timeouts myself. If you add a print statement to one of your runs, how many timeouts do you generally see?

Could your timeouts be related to running out of resources, like having so many DNS queries pending at once?

This looks like a good addition either way, though I may add an alert to the console on timeouts while allowing the program to continue.

initstring commented 4 years ago

I've merged the changes to a dev branch but tweaked slightly: https://github.com/initstring/cloud_enum/tree/dev

Would you be able to test and let me know your results? Curious how many timeouts you are getting.

Basically, I added an alert to the console and also lowered the timeout threshold to 10 seconds. A DNS query that takes 10 seconds seems like it should reasonably fail, but let me know if you think otherwise.
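
Roughly, that tweak could look like the sketch below (illustrative only, assuming dnspython's `Resolver`; not the exact dev-branch code): a 10-second limit on the resolver plus a console warning instead of a silent pass.

```python
# Illustrative sketch of the tweak (not the exact dev-branch code), assuming dnspython.
import dns.resolver
import dns.exception

resolver = dns.resolver.Resolver()
resolver.timeout = 10    # per-nameserver timeout, in seconds
resolver.lifetime = 10   # total time allowed for the whole query

def check_dns(name):
    try:
        return resolver.resolve(name, 'A')
    except dns.exception.Timeout:
        # Alert the user in the console and keep enumerating, so widespread
        # resolution problems are visible rather than silent.
        print(f"    [!] DNS timeout on {name}. Investigate if this keeps happening.")
        return None
    except dns.resolver.NXDOMAIN:
        return None
```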

Thanks again! Your contribution is greatly appreciated!!!

nalauder commented 4 years ago

Thanks for including it.

I like your idea with the alert to the user better. It makes the failure actually helpful rather than just silent. It would also help in cases where users are getting different results from different hosts, because they can actually see the errors.

I'm not so sure the resource limits and timeouts are related, though. I suspect it has more to do with the Python garbage collector not cleaning up old pools quickly, whereas pool.close() cleans a pool up as soon as it is finished. The timeouts may have kept each pool alive longer, so the garbage collector waited longer to remove it in case it got reused. Either way, I haven't seen the resource errors since the change, and I was running on a host with limited resources and an older Python version.
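
To illustrate the distinction being speculated about here (a toy contrast, not cloud_enum code): without an explicit `close()`, the worker threads hang around until the pool object is eventually reclaimed, whereas `close()`/`join()` releases them deterministically.

```python
# Toy contrast, not cloud_enum code: GC-dependent cleanup vs explicit cleanup.
from multiprocessing.dummy import Pool as ThreadPool

def gc_reliant(names):
    pool = ThreadPool(5)
    return pool.map(len, names)
    # Worker threads stay alive until the pool object is eventually reclaimed.

def explicit_cleanup(names):
    pool = ThreadPool(5)
    try:
        return pool.map(len, names)
    finally:
        pool.close()   # stop accepting new tasks
        pool.join()    # worker threads exit now, not at some later GC pass
```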

In a classic testing move, I am unable to reproduce the issue! So something else may have changed on my end since I actually made the fix (about a week ago).

Big fan of the tool!

initstring commented 4 years ago

Thanks @nalauder !!

I'm keeping the pool.close() in, as it definitely makes sense. I'll merge to master soon.