Closed: pdelteil closed this issue 2 years ago.
@pdelteil thanks, I think it could be related to https://github.com/pry0cc/axiom/issues/397. In the meantime, can you add -stats to the module and let me know if that fixes it for now?
Hi @0xtavian,
I already used the -stats flag.
I'm sorry, I didn't include the command:
axiom-scan file.txt -m nuclei -stats -si 180 -t /home/op/nuclei-templates/template.yaml
@pdelteil I noticed ServerAliveCountMax was missing from the SSH configs. I went ahead and added it in this branch: https://github.com/pry0cc/axiom/tree/select-fix. Will push to master after testing. With this setting, the session should time out after one hour if no data has been received from either the server or the client.
https://github.com/pry0cc/axiom/blob/master/images/provisioners/default.json#L214-L215 https://github.com/pry0cc/axiom/blob/select-fix/providers/do-functions.sh#L248-L249
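For context, the one-hour figure falls out of multiplying the two keepalive options in the SSH client config. The exact values axiom uses are in the links above; the numbers below are an illustrative example, not axiom's actual settings:

```
# ~/.ssh/config (client side, illustrative values only)
Host *
    ServerAliveInterval 60    # send a keepalive probe every 60 seconds
    ServerAliveCountMax 60    # disconnect after 60 unanswered probes
    # 60s x 60 probes = 3600s, i.e. the session is torn down after
    # roughly one hour with no data from the server
```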
SSH sessions aren't intended to stay alive forever. SSH really isn't designed for executing long-running commands in the foreground, as far as I understand it.
This block of code is responsible for running the commands over SSH, and when that process returns, it runs axiom-scp to download the results.
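At a pseudocode level (the variable names and paths here are made up for illustration, not axiom's actual code), that flow is roughly:

```
# run the module's command on the instance, then pull the results back
ssh -F "$ssh_config" "op@$instance" "$remote_command" \
  && axiom-scp "$instance:/home/op/scan/output.txt" "$local_scan_dir/"
```

If the SSH session dies mid-scan, the command returns early and the download step runs against incomplete (or missing) results, which matches the behavior reported here.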
One option might be to have the server (axiom instances) determine when its scan has finished and create a flag file in the scan's working directory, such as /home/op/scan/$module-$date/scan_finished. Then, periodically SSH into the instances, check for that flag, and if it's there, download the results and merge them.
While periodically checking for the scan_finished flag, we could even download whatever results are available at that point, and on each subsequent pass only pull the new results. Still working out the idea.
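The poll-and-incremental-merge idea could be sketched like this. Everything here is hypothetical: in the real flow, fetching results would be an axiom-scp download and the flag check an ssh test against the instance; plain local file operations stand in for both so the merge logic is visible.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the periodic poll-and-merge loop described above.

SCAN_DIR="/tmp/op-scan-demo"       # stands in for the instance's scan working dir
MERGED="/tmp/merged-results.txt"   # local merged output

# Append only lines we have not seen before (anew-style dedupe),
# so each pass pulls in just the new results.
merge_new_results() {
  local remote_copy="$1"
  local new
  touch "$MERGED"
  # comm needs sorted input; -13 keeps lines unique to the remote copy
  new="$(sort -u "$remote_copy" | comm -13 <(sort -u "$MERGED") -)"
  if [ -n "$new" ]; then
    printf '%s\n' "$new" >> "$MERGED"
  fi
}

# Poll until the scan_finished flag appears, merging partial results each pass.
poll_instance() {
  while true; do
    merge_new_results "$SCAN_DIR/results.txt"
    [ -f "$SCAN_DIR/scan_finished" ] && break
    sleep 30   # polling interval; tune to taste
  done
}
```

In a multi-instance setup, one poll loop per instance (or one loop iterating over all instances) would feed the same merged file, and the final merge would happen once every instance has written its flag.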
Closing as this is a duplicate of https://github.com/pry0cc/axiom/issues/397.
I'm getting this error a lot recently:
Connection to 45.XX.YY.ZZ closed by remote host. All droplets are still running.
I'm running an axiom-scan with nuclei.