Closed 0xhunster closed 2 years ago
Hello, @0xhunster, thanks for the report. I have fixed several performance regressions that may be causing the problem you describe. Can you please check https://github.com/Edu4rdSHL/fhc/releases/tag/0.7.1? This should fix all your problems.
Edit: Please use 0.7.1
See here, I ran the same command.
Weird, can you give me the outputs of the following command:
ulimit -a
ulimit -a
unknown option
That is a standalone shell command, not an fhc option. Just run that command in your shell.
$ ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
...
Sorry for the misunderstanding.
Please append the following lines at the end of the /etc/security/limits.conf file:
* soft nofile 102400
* hard nofile 102400
* soft nproc 102400
* hard nproc 102400
Then restart your computer/VM and launch the tool again. The issue is caused by the max open files limit, which is only 1024, a really small number. On Linux, every connection creates a file descriptor, and fhc tries to perform many connections at a time; that is what leads to the problem.
Let me know if that fixes your problem.
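As a quick sanity check before rebooting, the current limits can be inspected and raised for the running shell session only (a sketch; the 102400 value simply mirrors the limits.conf entries above):

```shell
# Show the current soft and hard limits on open file descriptors.
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell session only (cannot exceed the
# hard limit); this does not persist across logins or reboots.
ulimit -n 102400 2>/dev/null || echo "hard limit too low; edit limits.conf first"

# Confirm the soft limit before re-running fhc.
ulimit -Sn
```

A per-session `ulimit -n` is useful for testing whether the descriptor limit is really the cause; the limits.conf edit is what makes the change permanent.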
Could you give me a screenshot of your /etc/security/limits.conf file?
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
# To apply a limit to the root user, <domain> must be
# the literal username root.
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#
#* soft core 0
#root hard core 100000
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
#@student - maxlogins 4
# End of file
* soft nofile 102400
* hard nofile 102400
* soft nproc 102400
* hard nproc 102400
Still the same issue.
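One common reason an edit to /etc/security/limits.conf appears to have no effect: the file is applied by pam_limits at login, so a session started before the change (or one that bypasses PAM) still sees the old limit. A quick verification sketch:

```shell
# Verify the limit actually applied to the current session.
current=$(ulimit -Sn)
echo "current soft nofile: $current"

# If it is still 1024, the new limits.conf values have not been picked
# up by this session; log out and back in (or reboot) and check again.
if [ "$current" = "1024" ]; then
    echo "limits.conf change not active in this session"
fi
```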
What are the machine specs? RAM, processor. Also, how long is the list of hosts that you're trying to resolve?
I am using a DO VPS; here it is. And the list of hosts has 122,845 entries.
It should work without a problem, I'm trying to reproduce the issue but I'm not able to. I would love to discuss it with you on Twitter or Discord. If that is not possible, can you give me the hosts that you're trying to resolve or send the file to this email?
I don't know why, but when I probe a list, it shows this error.