Closed: exploide closed this issue 4 years ago
You bring up an important point. Thanks for reporting it. I will look into it and explore some options. I tested it on some real servers with around 200 GB and it took several minutes, which seemed acceptable. However, I bet it takes a lot longer on bigger servers.
I might look into a way to fine-tune the `find` calls, or even a way to exclude filesystems. I am thinking of cases where a server has a special filesystem for data with terabytes in it.
Let me check some options and I will post back here.
I've been thinking about this, and I believe that adding an option to exclude filesystem paths should help a lot.
I am imagining that the server where you run this might have some mount point with terabytes of data, or maybe a network mount. These would slow down the `find /` calls a lot. So I am thinking that if we add something like `--exclude-path /mnt/nfs_data,/data,/mnt/backups` so that the `find` commands skip those paths, the execution should be very fast.
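To illustrate the idea (this is just a sketch of the mechanism, not lse.sh's actual code), `find` can be told to prune the excluded paths so it never descends into them:

```bash
# Sketch only, not lse.sh's actual code: build -prune arguments so find
# never descends into the excluded mount points.
EXCLUDE="/mnt/nfs_data /data /mnt/backups"

PRUNE_ARGS=()
for p in $EXCLUDE; do
  PRUNE_ARGS+=( -path "$p" -prune -o )
done

# Example check: world-writable directories, skipping the excluded paths.
find / "${PRUNE_ARGS[@]}" -type d -perm -0002 -print 2>/dev/null
```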
I am thinking about this option because of three reasons.
@exploide What do you think about this option? Do you think it would solve your problem?
Sounds reasonable :+1: I would test it the next time I'm in need of this. Don't know when that will happen, though.
I added the `-e` option to exclude paths from the scan in a separate branch. Please feel free to try it: https://raw.githubusercontent.com/diego-treitos/linux-smart-enumeration/fix%2320/lse.sh
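In case it helps, an invocation against that branch could look like the following; the comma-separated value format for `-e` is assumed from the proposal above, so check the script's help output for the exact syntax.

```bash
# Fetch the branch version and run it, excluding the slow mount points.
# The comma-separated format for -e is an assumption based on the proposal.
wget https://raw.githubusercontent.com/diego-treitos/linux-smart-enumeration/fix%2320/lse.sh
chmod +x lse.sh
./lse.sh -e "/mnt/nfs_data,/data,/mnt/backups"
```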
Looks good. Currently I have no system at hand where I would need this option, but as far as I can see, it could have solved the problem I had last time.
Though one needs to figure out manually which paths take ages and need to be excluded. But that should be manageable.
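To find those paths, something like this quick timing loop (my own sketch, not part of lse.sh) could help decide what to exclude:

```bash
# Time a bounded walk of each mounted filesystem to spot the slow ones.
# Mounts that take long (or hit the timeout) are candidates for exclusion.
for m in $(findmnt -rno TARGET); do
  echo "== $m"
  time timeout 60 find "$m" -xdev -type f >/dev/null 2>&1
done
```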
I'm not sure about calling this option `-e`. Maybe one day you'll want to implement an exclude-selection switch. But I see you like these short options ;)
@exploide I have just merged the changes into the main branch. Before that, I made a lot of optimizations to improve performance. I found that when the user has many writable files somewhere, the time some checks take increases dramatically.
However, by tuning the tests I was able to cut the time of some of them from several hours down to just a few seconds in the case where the user can write to thousands of files outside their home (I tested this by adding an external hard drive with thousands of files on it).
In this scenario, after the optimizations, `lse` finished in just 8 minutes, compared to the hours it could take before. After using the `-e` flag to exclude that path, it finished in 2 minutes.
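To give a rough picture of the kind of change that produces hours-to-seconds improvements (this is my own sketch, not the actual patch): run the expensive filesystem traversal once, cache the result, and let later checks filter the cached list instead of re-running `find /` each time.

```bash
# Sketch only: cache one expensive traversal and reuse it across checks.
WRITABLE_CACHE=$(mktemp)
find / -writable -not -path "$HOME/*" 2>/dev/null > "$WRITABLE_CACHE"

# Later checks become cheap text filters over the cached list.
grep -E '^/etc/' "$WRITABLE_CACHE"   # writable files under /etc
grep -E 'cron'   "$WRITABLE_CACHE"   # writable cron-related paths

rm -f "$WRITABLE_CACHE"
```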
Regarding the paths, if you are a pentester you should know what you are doing :), so I am confident that a pentester will be able to find the problematic paths.
I like short options, yes. I'd rather type `-e` than `--exclude-paths`. If in the future I need to add another exclude option, I still have `-E`, `-x` and `-X` :P
I am closing the issue, as it should be solved now.
Thanks a lot for reporting this!!
Awesome :+1: Can't wait to try it next time :wink:
On real (non-CTF) systems, lse.sh can be extremely slow, especially when the host's filesystem contains a huge amount of data.

To skip tasks that seem to hang for hours, one currently needs to specify all the remaining tasks via `-s`. This is a bit cumbersome for such scenarios. I would like to propose an exclude flag that allows defining which tasks to skip. But looking at the list of tasks, this would still require excluding a large number of tasks manually. So alternatively (or better, additionally?), an option like `--skip-long-running-tasks` (or `--fast`) could be useful. Such an option would skip a predefined list of tasks that take too long when the filesystem is large (I guess basically everything that does `find / ...`).

What do you think?
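Until something like that exists, I guess a fast run can only be approximated by hand with `-s`, e.g. with a small wrapper like the sketch below (the task identifiers are placeholders, not actual lse.sh IDs; the real ones are listed in lse.sh):

```bash
#!/bin/bash
# Hypothetical "fast mode" wrapper: run only a hand-picked set of tasks
# via -s and leave out the filesystem-wide find-based ones.
# NOTE: the identifiers below are placeholders, not actual lse.sh task IDs.
FAST_TASKS="sys,usr,net"

./lse.sh -s "$FAST_TASKS"
```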