pry0cc / axiom

The dynamic infrastructure framework for everybody! Distribute the workload of many different scanning tools with ease, including nmap, ffuf, masscan, nuclei, meg and many more!
MIT License
4k stars 622 forks

Linode Rate Limiting #411

Closed: 303sec closed this issue 3 years ago

303sec commented 3 years ago

It looks like Linode applies some kind of rate limiting when spinning up a lot of instances at once. Support has confirmed my account can create 50 Linode boxes, but when I run something like axiom-fleet I get the following on some instances:

instance42 Request failed: 408
┌errors─┬──────────────────┐
│ field │ reason           │
├───────┼──────────────────┤
│       │ Please try again │
└───────┴──────────────────┘

I think this is some kind of Linode rate limiting, but I may be wrong.

303sec commented 3 years ago

I've tested this a few times and have had about 13-30 boxes successfully spin up, but not the full 50.

niemand-sec commented 3 years ago

I'm having the same issue with Linode. I get the following errors when running axiom-fleet:

Initialized instance 'test13' at '69.164.220.62'!
Initialized instance 'test16' at ''!
Initialized instance 'test19' at ''!
Initialized instance 'test11' at ''!
Initialized instance 'test18' at ''!
Initialized instance 'test09' at ''!
Initialized instance 'test05' at ''!
Initialized instance 'test30' at ''!
Initialized instance 'test21' at ''!
Initialized instance 'test14' at '69.164.220.XXX'!
Request failed: 408
┌errors─┬──────────────────┐
│ field │ reason           │
├───────┼──────────────────┤
│       │ Please try again │
└───────┴──────────────────┘

If I try to spin up 30 boxes, only around half of them get deployed, and when I then run a scan with the fleet that did deploy I get these errors:

100%| 18/18 [00:00<00:00, 36.64it/s]
ssh: Could not resolve hostname test12: Temporary failure in name resolution
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]
Generated 18 commands in total
Repeat set to 1
Warning: Permanently added '[69.164.220.XXX]:2266' (ECDSA) to the list of known hosts.
ssh: Could not resolve hostname test10: Temporary failure in name resolution
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]
ssh: Could not resolve hostname test04: Temporary failure in name resolution
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.3]
Warning: Permanently added '[172.104.5.XXX]:2266' (ECDSA) to the list of known hosts.

I have been dealing with this issue for the last few weeks. Any pointers on what could be causing this?

0xtavian commented 3 years ago

@niemand-sec @303sec Linode and DO apply rate limits when spinning up VMs. IIRC you can only spin up in batches of 15. So even once support has raised your maximum allowed Linodes/Droplets, you still have to spin up a larger fleet at most 15 instances at a time.

For example, if you wanted to spin up 30 instances, you'd run axiom-fleet fire -i=15 and then wait for all 15 to provision and then run axiom-fleet fire -i=15 again.
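That batching could be scripted roughly as follows. This is just a sketch based on the workaround above: the batch size of 15 comes from the thread, but the batches helper is a hypothetical function (not part of axiom), the cool-down is an assumption, and the axiom-fleet call is echoed rather than executed so you can review the plan first.

```shell
#!/usr/bin/env bash
# batches TOTAL BATCH — print how many instances to request per call,
# e.g. "batches 50 15" prints 15, 15, 15, 5 (one per line).
batches() {
  local total=$1 batch=$2 n
  while (( total > 0 )); do
    n=$(( total < batch ? total : batch ))
    echo "$n"
    total=$(( total - n ))
  done
}

# Plan a 50-box fleet in rate-limit-sized chunks:
for n in $(batches 50 15); do
  echo "axiom-fleet fire -i=$n"   # drop 'echo' to actually run it
  # sleep 60                      # assumed cool-down between batches; tune as needed
done
```

Waiting for each batch to finish provisioning before starting the next (as 0xtavian describes) is the important part; the pause between calls is only extra margin.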