Also (maybe too much to ask), but could the output be saved as CSV if I provide a list of URLs in a txt file? The CSV would show headers in the columns and URLs in the rows, indicating for each URL which specific headers are missing.
Hello,
Thanks for your suggestions!
Take a look at this recent commit: https://github.com/rfc-st/humble/commit/f42344c6fdda9e910adb4753cbcd34b46b9b8891; the export file name now includes both the subdomain and the TLD.
Regarding the other points: I may add functionality to humble in the future to scan URLs read from a text file containing several of them. For now, I'm leaving it in the backlog.
In the meantime, an alternative to what you propose (tested in a Linux console) would be something similar to:
datasets=('https://google.com' 'https://tesla.com'); for dataset in "${datasets[@]}"; do python humble.py -u "$dataset" -o pdf; done
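If you would rather keep the URLs in a text file (one per line; "urls.txt" below is just a hypothetical name), a plain read loop over that file should also work. A minimal sketch along the same lines:
while read -r url; do python humble.py -u "$url" -o pdf; done < urls.txt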
Thank you for your quick response. When you mentioned "datasets=('https://google.com' 'https://tesla.com')", I tried it but got an error message: zsh: unknown file attribute: h
I tried these scenarios and none of them worked; I believe I took your instructions verbatim. Please correct me where I made the mistake:
humble + -o txt -r -u datasets=('https://google.com' 'https://yahoo.com')
humble +o txt -u datasets=('https://google.com' 'https://yahoo.com')
Also, when you write "dataset in "${datasets[@]}";", what goes in the dataset field?
Sorry for all these newbie questions; trying to learn as I go.
Hi again!
The example:
datasets=('https://google.com' 'https://tesla.com'); for dataset in "${datasets[@]}"; do python humble.py -u "$dataset" -o pdf; done
must be on a single line in the console, and must be executed inside the directory where "humble.py" resides. This example:
1.- Defines 'datasets' with the list of URLs to be analyzed.
2.- Iterates through that list, executing "humble.py" (with whatever parameters you want) against each of the above URLs.
3.- In this example, '$dataset' is the URL.
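For reference, with that two-URL list the loop simply expands to the following commands, one per URL:
python humble.py -u 'https://google.com' -o pdf
python humble.py -u 'https://tesla.com' -o pdf
As a side note, the "zsh: unknown file attribute: h" error in your attempts most likely comes from passing datasets=(...) as an argument to humble; zsh then parses the trailing parentheses as glob qualifiers. The array assignment has to be its own statement, exactly as in the one-liner above.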
This is the output that I get on Kali Linux (with this same example, changing only "python" to "python3"):
Is your feature request related to a problem? Please describe.
When saving the output to a txt file, it is saved with a generic name. So if I scan two wpengine sites (e.g. sxq.wpengine.com and 123.wpengine.com), the saved file name is the same (wpengine_header_date), and if I run scans on 5 wpengine sites, each scan keeps overwriting the previous file since its name is the same.
Describe the solution you'd like
Be able to provide a txt file that humble can use to scan a list of sites and save the output as a txt/pdf/html file. Also, save the output using the full URL (123.wpengine.com_date) instead of a generic name (e.g. wpengine_header_date).
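Just for illustration (this is not how humble names its files today; the URL and file name here are hypothetical), the kind of per-URL name I have in mind could be built from the URL with standard tools, e.g.:
host=$(python3 -c "from urllib.parse import urlparse; print(urlparse('https://123.wpengine.com').hostname)"); echo "${host}_$(date +%Y%m%d).txt"
which would print 123.wpengine.com_ followed by the current date.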
Describe alternatives you've considered
Manually doing it.