shmilylty / OneForAll

OneForAll is a powerful subdomain collection tool
GNU General Public License v3.0

Project dependencies may have API risk issues #269

Closed: PyDeps closed this issue 1 year ago

PyDeps commented 2 years ago

Hi. In OneForAll, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints the project currently uses:

beautifulsoup4==4.9.3
bs4==0.0.1
certifi==2020.12.5
chardet==4.0.0
colorama==0.4.4
dnspython==2.1.0
exrex==0.10.5
fire==0.4.0
future==0.18.2
idna==2.10
loguru==0.5.3
PySocks==1.7.1
requests==2.25.1
six==1.15.0
soupsieve==2.2.1
SQLAlchemy==1.3.22
tenacity==7.0.0
termcolor==1.1.0
tqdm==4.59.0
treelib==1.6.1
urllib3==1.26.4
win32-setctime==1.0.3

The == constraint introduces a risk of dependency conflicts because it scopes each dependency too strictly. Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs the project calls.
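
For illustration, here is how the same dependency could be constrained in the three ways described above (a sketch; the bounds on the last line are placeholders, not tested values):

SQLAlchemy==1.3.22          # exact pin: reproducible, but most likely to conflict with other packages
SQLAlchemy>=1.3.22          # no upper bound: a future release may remove an API the project calls
SQLAlchemy>=1.3.22,<1.4     # bounded range: allows compatible updates while blocking breaking ones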

After further analysis of this project, the version constraint of the dependency future can be relaxed to >=0.12.0,<=0.18.2, and the version constraint of SQLAlchemy can be relaxed to >=0.9.0,<=1.4.37.

These suggested modifications reduce the chance of dependency conflicts as far as possible while still allowing the newest compatible versions, without introducing API errors in the project.
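
Applied to requirements.txt, the suggested change would look roughly like this (only the two dependencies named above; every other pin stays as it is):

future>=0.12.0,<=0.18.2
SQLAlchemy>=0.9.0,<=1.4.37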

The project's invocations include all of the following methods.

Methods called from future:
parse.urlparse
Methods called from SQLAlchemy:
Query.__init__
All other methods called in the project:
ExtractResult
any_similar_html
utils.set_id_none
utils.check_response
raw_url.netloc.lower
iter
DNSDumpster
ip_reg.query
db.save_db
takeover.run
re.compile
result_dir.joinpath
text
Registry
get_cname
platform.machine.lower
save_subdomains
db.deduplicate_subdomain
threading.Lock
self.match
state.items
match.group
list_dns_resp.json
resp.headers.get
CSP
time.localtime
Query.__init__
self.get_mod
time.strftime
r.as_dict
self.decrease_num
registry.get_format
Enrich
Row
gen_subdomains
self.append
IPv4InfoAPI
gen_req_data
check_cname_keyword
requests.Session
self.get_conn
url_dict.append
parse.urlparse
AXFR
repos_r.json
utils.get_data
Collect
conn.bulk_query
self.names_queue.put
FoFa
self.values
utils.save_to_db
self.list_dns
Crawl.__init__
self.all
isp.append
raw_url.path.lower
any
addrs.append
inspect
TLD_EXTRACTOR
get_session
ipaddress.ip_network
self.match_location
self.queue.qsize
set
Chinaz
data.get
gen_word_subdomains
utils.init_table
io.open
database.close
self.check_param
utils.get_request_count
filter_url
logger.add
cname_str.split
ns_ip_list.append
rsp.json.get.lower
massdns_dir.joinpath
info.copy
QianXun
settings.data_storage_dir.joinpath
cnames.append
utils.load_json
ThreatMiner
ips_check.append
do_export
KeyError
utils.get_timestring
resp_queue.get
ctx.wrap_socket
self.memory_search
exc.ResourceClosedError
info.pop
len
self.subdomains.update
self.new_subdomains.add
os.environ.get
settings.module_dir.joinpath
self._conn.execute
json.load
resolver.query
find_in_resp
self.queue.join
ttls.append
parent_dir.mkdir
module.split
session.post
self.create_feature
self.extract_words
self.conn.bulk_query
path.joinpath
self.fill_queue
round
tqdm
url.SCHEME_RE.sub.partition.partition.partition.split.partition.strip
bs.find_all
resp.data.get
Thread
account_id_resp.json
utils.python_version
all_request_success
utils.get_main_domain
subdomain.count
infos.get
gen_result_infos
query_domain_ns
path.repr.replace
IPAsnInfo
Sogou
parts.copy
Robtex
Takeover
self.check
dns.zone.from_xfr
Gitee
self.login
domains.add
Hunter
_PublicSuffixListTLDExtractor
export_all_subdomains
self.save
domains.append
req_data.append
new_digit.str.zfill
db.drop_table
deal_output
urls.update
get_wildcard_record
stat_times
resolve.save_db
self.main
json.dumps
records.Database
row.values
subdomains.update
self.construct_eigenvector
get_ips
path.exists
Crtsh
gen_infos
label.encode
kwargs.get
self.export_data
wildcard.deal_wildcard
data_storage_dir.joinpath
db.get_data
VirusTotalAPI
PassiveDnsAPI
urls.add
query.run
dict_path.unlink
node.data.attrs.keys
addr.append
exrex.count
resp.text.replace
JSONFormat
self.answers_queue.empty
lock.release
self._get_tld_extractor
filter_name
match.group.strip.strip
self.post
rsp.json.get
Dataset
csv.writer
self.create_zone
check_ip_times
NetCraft
enumerate
save_thread.join
Converter
times.setdefault
delete_file
urls_www.update
property
cdn_cname_keyword.keys
database.export_data
get_timestring
all
get_db_path
tmp_parts.insert
utils.ip_is_public
subprocess.run
rows.as_dict
Anubis
ips_stat.items
thread.is_alive
db.close
utils.check_format
pp.isdigit
path.replace
get_pseudodistance
idna.decode
check_net
setattr
request.bulk_request
self._engine.dispose
get_proxy
asn_info.find
match_main_domain
ext
url.SCHEME_RE.sub.partition.partition.partition
IP138
urls.append
Tree
check_cdn_cidr
x.lower
bar.update
dns.query.xfr
utils.deal_data
enrich.run
utils.get_proxy
cdx_toolkit.CDXFetcher
record.get
utils.get_random_proxy
utils.get_sample_banner
word.startswith
db.get_data_by_fields
self.axfr
filter_subdomain
data_dir.joinpath
int
result.scalar
path.unlink
self._conn.close
get_resp
label.lower
check_cname_times
self.queue.put
check_valid_subdomain
cidr.append
LooseVersion
urls_queue.get
filtered_data.append
self.get_connection
requests.put
super
wildcard.is_valid_subdomain
get_from_targets
check_cdn_asn
js_urls.add
self.dom_tree.depth
zip
self._package
CSVFormat
module_path.rglob
create_engine
GoogleAPI
maybe_ip.isdigit
statements_list.append
HackerTarget
match.group.strip
asn_info.get
self.extract
hash
word.lower
self.conn.query
self.collect_funcs.append
Riddler
Sublist3r
dir
db.get_connection
IP_RE.match
self._validate
urls_queue.task_done
subdomains.add
table_name.replace
fire.Fire
logger.level
subname.replace
self.grab_loop
self.csp_header.get
self.check_brute_params
bytes
datas.extend
utils.check_version
now_data.copy
self.get_header
utils.match_subdomains
BruteSRV
cname_times.append
utils.check_random_subdomain
settings.result_save_dir.joinpath
s.lower
_reduce_datetimes
resp.json
BruteThread
search.run
self.results.append
resp_queue.empty
Record
platform.machine
tuple
get_fingerprint
bool
ip_info.get
subdomain.split
get_port_seq
ip_asn.find
self.names_queue.task_done
QuerySOA
self.compare
QueryNS
StopIteration
cdx.get_size_estimate
slice
run
dict_set.update
self.queue.empty
row.keys
export.export_data
BingAPI
find_subdomains
netloc.endswith
rsp.json
self.exist_table
ttl.append
threads.append
Database
get_progress_bar
self.return_data
self.modules.append
self.crawl
req_urls.add
self.__f.close
requests.get
cname_url.base64.b64encode.decode
resp_data.empty
sock.settimeout
format
Finder
type
utils.save_to_file
self._get_cached_tlds
itertools.chain
utils.get_massdns_path
dict_list.count
header.keys
record.as_dict
cname.lower
self.save_db
self.add_word
dict
pickle.keys
self.import_func
RecordCollection
registry.register_builtins
BinaryEdgeAPI
datetime.now
utils.check_dep
self.get_proxy
all_resolve_success
calc_pseudodistance
QueryTXT
gen_req_url
self.queue.task_done
secrets.token_hex
self.answers_queue.get
sql.self.query.scalar
tqdm.tqdm
self.init_database
self.increase_num
path.is_file
stream.seek
fuzz_string.lower
print
word.lower.split
converter.get_eigenvector
thread.join
self.names_queue.join
AlienVault
IpRegInfo.__init__
abs
exit
utils.remove_invalid_string
line.strip.lower
self.domain.encode
self.keys.index
wraps
os.cpu_count
ip.split
ShodanAPI
self.close
json.dump
random.choice
Yandex
self.calculate_weight
db.get_resp_by_url
item.to_text
result.get
CertSpotter
utils.looks_like_ip
self.domain.count
zf.extract
items.update
headers.get
self._engine.inspect.get_table_names
query_a_record
open
range
tenacity.retry
map
ssl.create_default_context
RiskIQ
to_detect_wildcard
enrich_info
lock.acquire
utils.export_all
frozenset
crawl.run
RapidDNS
self.config_param
relative_directory.joinpath
line.lower
TLDExtract
self.conn.close
hasattr
node_attr_list.append
bulk_request
read_target_file
self.recursive_descendants
Database.__init__
url.SCHEME_RE.sub.partition.partition.partition.split.partition
self.recursive_subdomain
tldextract.TLDExtract
name.zone.to_text
cls.export_stream_set
DOMTree
session.delete
html.base64.b64encode.decode
self.cookie.get
all_subdomains.extend
path.chmod
gen_new_info
CloudFlareAPI
domain.split
self.insert
new_data.append
self.check_loop
port.str.endswith
Settings
CommonCrawl
Sitemap
self.one
hp1.get_dom_structure_tree
cdx.iter
self._engine.connect
self.first
soup.find
self.keys.count
ChinazAPI
file.write
Search.__init__
self.rpush
name.lower
get_random_proxy
AttributeError
self.keys
fromkeys
ips.append
cname.lower.split
pathlib.Path
items.get
bak_table_name.replace
seen.add
IPv4Address
match_subdomains
request.run_request
settings.ports.get
re.sub
self.__f.read
NSEC
item.Domain.match
QuerySPF
resp_queue.task_done
random.randint
DNSdbAPI
self.dom_tree.create_node
line.strip
js_urls.update
db.remove_invalid
platform.system.lower
self._row.insert
So
max
self.replace_word
results.as_dict
data.decode
dataset._package
issubclass
result.get.split
utils.check_dir
isinstance
re.finditer
data.get.get
line.lower.strip
self.feature_hash
tablib.Dataset
brute.run
HTMLParser
resp.text.replace.replace
ip_times.append
settings.third_party_dir.joinpath
ValueError
socket.socket
check_format
self.filter
self.words.add
utils.dns_resolver
SecurityTrailsAPI
obj.isoformat
Domain
self.addr.format
results.all
self.query
resp_data.get
gc.collect
cursor.keys
urls_queue.qsize
asn.append
Lookup.__init__
temp_dir.joinpath
self.init_dict_path
url.SCHEME_RE.sub.partition.partition
find_in_history
fmt.export_set
results.scalar
db.update_data_by_url
self.wipe
self.match_subdomains
check_header_key
utils.get_url_resp
self.walk
Bing
SpyseAPI
Yahoo
create_zone_resp.json
Brute
ips.split
utils.check_path
utils.gen_fake_header
self.answers_queue.put
kwargs.setdefault
convert_to_dict
Path
logger.log
self.get_words
socket.inet_aton
self._all_rows.append
join
GithubAPI
get_random_header
list
self.worker
self.to_check
json.loads
srv.run
dir_path.exists
fuzz_string.isalnum
find_new_urls
export_all_results
ip_str.split
self.search
fingerprint.get
FullHuntAPI
db.insert_table
requests.post
Baidu
threading.Thread.__init__
wrap_sock.connect
output_path.unlink
word.endswith
collections.namedtuple
threading.Thread
get_jump_urls
StringIO
ip_to_int
utils.get_ns_path
data.append
subdomain.intersection
utils.clear_data
org.append
domain.Domain.extract
target.endswith
db.create_table
re.search
self.gen_result
text.get_html_title.strip
item.get
result_save_dir.joinpath
path.is_dir
gen_random_subdomains
OrderedDict
check_thread.start
save_to_file
zone.nodes.keys
split_domain
is_valid_subdomain
zipfile.ZipFile
subdomains.issubset
utils.call_massdns
target_domains.union
CrossDomain
appear_times.get
Altdns
stream.getvalue
Check.__init__
find_js_urls
self.results.export
save_thread.start
domain.Domain.registered
CertInfo
self.collect_subdomains
next
self.dom_tree.siblings
path.mkdir
sys.exit
convert_url
ip_address
url.SCHEME_RE.sub.partition
importlib.import_module
os.path.expanduser
answer.to_text
session.head
rows.export
row.pop
similarity.is_similar
platform.python_implementation
self.export
platform.python_version
urls_queue.put
struct.unpack
utils.dns_query
ipaddress.ip_address
IpRegData
getattr
resp.text.splitlines
self.gen_brute_dict
conn.query
utils.decode_resp_text
SiteDossier
exrex.generate
ext.subdomain.split
self.gen_new_subdomains
self.get_long
json.get
resolve.run_resolve
altdns.run
expression.replace
CeBaidu
subdomains.append
find_res.get
CensysAPI
subname.split
url.SCHEME_RE.sub.partition.partition.partition.split.partition.strip.rstrip
result.append
ZoomEyeAPI
Google
Ask
parent_dir.exists
self.infos.get
update_data
row.get
sorted
new_table_name.replace
subdomains_all.update
session.get
path.endswith
ThreatBookAPI
filter
_decode_punycode
self.finish
self.header.update
zones_resp.json
get_from_target
i.row.isoformat
urls_queue.empty
thread.start
self.save_json
self.queue.get
utils.is_subname
time.time
Exception
cache_file.read
utils.get_subdomains
resp_json.get
sleep
result.group
base64.b64encode
db_path.exists
get_html_title
query_domain_ns_a
self.have_api
url.SCHEME_RE.sub.partition.partition.partition.split
status.add
lowered.startswith
check_dict
self.redirect_match
collect.run
progress_thread.start
result.as_dict
wildcard.collect_wildcard_record
row_list.append
re.findall
Queue
dir_path.mkdir
info.get
callback
Robots
netloc.split
self.deal_answers
urls_queue.join
raw_url.scheme.lower
self.head
self.get_data
iscdn.do_check
_csv.writerow
m.d.int.str.zfill
Resolver
self.dataset.export
dns_resolver
resp.text.self.domain.re.search.group
check_path
save_brute_dict
platform.system
resp_queue.put
self.begin
self.names_queue.get
subnames.copy
gen_fuzz_subdomains
wrap_sock.getpeercert
tenacity.stop_after_attempt
self.dom_tree.get_node
wildcard.detect_wildcard
resp_json.get.get
str
utils.count_alive
self.datas.extend
settings.request_default_headers.copy
is_valid_flags.append
self.place.count
temp_list.append
SCHEME_RE.sub
x.get
Connection
hp2.get_dom_structure_tree
self._get_tld_extractor.suffix_index
r.search
datetime.now.strftime
public.append
json.get.get
db.count_alive
self.ip2long
req_thread_count
self.register
ips_stat.setdefault
utils.get_net_env
utils.get_domains
gen_fake_header
time.sleep
check_by_compare
domain.lower.strip
ips.update
answer.get
request_thread.start
CirclAPI
ttls_check.append
QueryMX
BeautifulSoup
label.encode.idna.decode.lower
self._data.insert
ArchiveCrawl
ip.isdigit
tenacity.wait_fixed
utils.get_timestamp
stat_appear_times
self.insert_word
info.items
domain.replace
Module.__init__
first.keys
queue.Queue
self.__next__
domain.lower
tokens.union
warnings.filterwarnings
isclass
self.dom_tree.size
repr
check.run
self.get
domain.db.get_data.as_dict
dict_pack
logger.remove
self.do_brute
is_exception
self.__f.seek
self.check_subdomains
VirusTotal
finder.run
Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.
JrDw0 commented 1 year ago

Closing this issue because it is not directly a bug.