pry0cc / axiom

The dynamic infrastructure framework for everybody! Distribute the workload of many different scanning tools with ease, including nmap, ffuf, masscan, nuclei, meg and many more!

Axiom Fleet Initialization Failure #494

Closed: xaeroborg closed this issue 2 years ago

xaeroborg commented 2 years ago

I got this error while creating multiple axiom fleets. I updated and built last week but didn't use it right away; when I ran it today I got this error. Out of 3 fleets, only the first one was created.

[screenshot: error output from fleet creation]

0xtavian commented 2 years ago

@xaeroborg thanks. You might need to run axiom-update again; you don't seem to have the latest version. There were also some small syntax changes last week: you can now exclude the equals sign. Run `axiom-fleet -h`, `axiom-init -h`, and `axiom-ssh -h` to see the new help menus and syntax. https://github.com/pry0cc/axiom/issues/491
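(For illustration, a rough sketch of the syntax change; the `-i` instance-count flag and the fleet name are assumptions based on the linked issue, so check the help menus for the authoritative flags:)

```bash
# Old style (pre-update): flag values attached with an equals sign
axiom-fleet rayaan -i=3

# New style: the equals sign is omitted
axiom-fleet rayaan -i 3

# Confirm the current syntax from the new help menus
axiom-fleet -h
axiom-init -h
axiom-ssh -h
```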

xaeroborg commented 2 years ago

@0xtavian The fleet creation worked like a charm. I then ran `axiom-select rayaan*` and all three instances were selected. But when I ran axiom-scan against targets with the nuclei module:

1. On the first run it couldn't SSH into the fleet it had created.
2. Only one instance finished the scan, and it looked like it was stuck.

I hadn't seen this behavior until the recent update. I scanned a list of 23 targets; usually this would complete in the blink of an eye, but this time it took a good 11 minutes.

[screenshot: scan output]
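(For reference, the workflow described here looks roughly like this; the target and output filenames are illustrative, not from the thread:)

```bash
# Select every instance whose name starts with "rayaan"
axiom-select 'rayaan*'

# Distribute the nuclei scan across the selected fleet
# (targets.txt and results.txt are illustrative filenames)
axiom-scan targets.txt -m nuclei -o results.txt
```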

0xtavian commented 2 years ago

@xaeroborg please try to recreate it. It's possible that one instance had an issue during provisioning and is taking longer than expected.

xaeroborg commented 2 years ago

@0xtavian As requested, I recreated the steps all the way from creating the instances to firing them off to run a scan; it took 10 minutes and nothing was scanned.

[screenshot: scan output with no results]

I then ran the same scan again, but this time ran `axiom-select rayaan*` first, and this was the result:

[screenshot: scan output]

What bothers me is that this behavior never happened prior to the update.

0xtavian commented 2 years ago

@xaeroborg this issue is very likely fixed by just waiting a few moments after initialization completes before using the instances. Also make sure you don't have two fleets with the same name. I see the issue: the instance simply hasn't finished provisioning, yet you are trying to use it. That's why it takes so long to get results back: SSH has to time out. The cloud provider could also be temporarily slow, causing a delay in provisioning; it happens from time to time. I'll work on it this weekend.

Before using a fleet, try running `axiom-exec id` to make sure the instances are fully provisioned. In any case, I will make improvements this weekend.
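(A minimal sketch of that pre-flight check, assuming a 3-instance fleet; the wait duration and filenames are illustrative:)

```bash
# Spin up a fleet of 3 instances
axiom-fleet rayaan -i 3

# Give the provider time to finish provisioning (duration is a guess)
sleep 60

# Sanity check: run `id` on every selected instance over SSH.
# If every instance prints a uid line, SSH is ready.
axiom-exec id

# Only then kick off the distributed scan
axiom-scan targets.txt -m nuclei -o results.txt
```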

xaeroborg commented 2 years ago

@0xtavian I completely agree with what you've mentioned above.

1. I have always made sure I don't reuse instance names.
2. I also suspected the VPS service might have been down (even though I thought it unlikely, these things do happen from time to time, if not always).
3. I will definitely run `axiom-exec id` after initializing the fleets and before deploying them to scan.

Thank you for your prompt response. Have a wonderful weekend.

Just an update: I followed the advice above and everything worked like a charm.

[screenshots: successful fleet check and scan results]

Could you add a note to the wiki advising users to run `axiom-exec id` after initializing fleets, just to make sure all the SSH configs have been added to the respective instances, before moving on to scanning? It would help any users who are trying to add fleets.

stefan-stojanovic-s commented 2 years ago

Can confirm this issue still keeps happening. I am using Linode. There is also a problem where instances get created but fail to power on. Again, I am not sure if this is just Linode shenanigans or an axiom problem.

devcoinfet commented 2 years ago

I agree. This just started happening to me today, and the first search result led me here. I'm also having intermittent boot issues with Linode instances. I'm aware you figured out the 5-machine limit with Linode, but my concern is that I'm seeing the same boot issues, as well as strange 408s, on a self-coded SSH/Paramiko setup in Python, so it's good to see I'm not the only one hitting a wall spinning up Linode. For what it's worth, I had fewer than 5 machines provisioned and only 2 spun up at a time.

My thinking is that this may be a race condition against their infrastructure, in a not-so-good way, caused by not throttling fleet creation. I spin mine up via Python subprocess, and I think this first happened when I removed the throttle from creating the fleet, as you guys call it. I'm amazed by this tool; it's way more advanced than what I'm attempting.

My thought is to add about a 5-second delay between the creation of workers; at least, I'm seeing better responses from the linode-cli (via a remote Docker instance) that way.
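(A sketch of that idea, assuming raw linode-cli calls outside of axiom; the flags and values here are illustrative, so check `linode-cli linodes create --help` for your setup:)

```bash
# Throttle worker creation: sleep ~5s between linode-cli calls
# instead of firing them all at once.
for i in 1 2 3; do
  linode-cli linodes create \
    --label "worker-$i" \
    --type g6-nanode-1 \
    --region us-east \
    --image linode/ubuntu22.04 \
    --root_pass "$ROOT_PASS"
  sleep 5   # small delay to avoid racing the provider's API
done
```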

0xtavian commented 2 years ago

@devcoinfet glad we helped track down your issue as well!

> My thought is to add about a 5-second delay between the creation of workers; at least, I'm seeing better responses from the linode-cli (via a remote Docker instance) that way.

I was thinking about this as well. I'll have to play around with it.