Closed by LuemmelSec 2 months ago
Hi
Out of curiosity, did you load SH v1 data using BHCE? It looks like you are missing the Domain node, which happens when you load SH1 data with the current BH.
Woah, good question. It is an older database that I used quite a lot. It could absolutely be the case that there is a mixture, and BHCE also used to put data in there with their collectors. However, while fetching the data it also reported an error:
[105/149] [+]Requesting : Compromisable OUs
scope size : 936 | nb chunks : 80
16%|███████████████████████████████████████████████████████████▏ | 13/80 [05:26<28:03, 25.13s/it]
[!]{code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2407, clientId=21] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2405, clientId=9]} on NODE(86950), because holders of that lock are waiting for ForsetiClient[transactionId=2407, clientId=21].
Wait list:ExclusiveLock[
Client[2405] waits for [ForsetiClient[transactionId=2407, clientId=21]]]}
[!]multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 286, in executeParallelRequest
for record in tx.run(q):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/work/result.py", line 251, in __iter__
self._connection.fetch_message()
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_common.py", line 180, in inner
func(*args, **kwargs)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_bolt.py", line 658, in fetch_message
res = self._process_message(tag, fields)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_bolt4.py", line 326, in _process_message
response.on_failure(summary_metadata or {})
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_common.py", line 247, in on_failure
raise Neo4jError.hydrate(**metadata)
neo4j.exceptions.TransientError: {code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2407, clientId=21] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2405, clientId=9]} on NODE(86950), because holders of that lock are waiting for ForsetiClient[transactionId=2407, clientId=21].
Wait list:ExclusiveLock[
Client[2405] waits for [ForsetiClient[transactionId=2407, clientId=21]]]}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/__main__.py", line 77, in populate_data_and_cache
neo4j.process_request(neo4j, request_key)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 379, in process_request
result = self.parallelRequest(self, items)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 637, in parallelRequestLegacy
for _ in tqdm.tqdm(
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/istarmap.py", line 19, in <genexpr>
return (item for chunk in result for item in chunk)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 873, in next
raise value
neo4j.exceptions.TransientError: {code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2407, clientId=21] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2405, clientId=9]} on NODE(86950), because holders of that lock are waiting for ForsetiClient[transactionId=2407, clientId=21].
Wait list:ExclusiveLock[
Client[2405] waits for [ForsetiClient[transactionId=2407, clientId=21]]]}
That seems related. I will try to clear the database and set it up fresh.
The glitch with multiprocessing certainly doesn't help either. Though I am not sure why you are getting a deadlock, because we tried to make sure that the code is bulletproof against this (unless you ran two AD Miner instances concurrently on the same database, but I guess you would not do that, right?).
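For context, errors in the Neo.TransientError class are safe to retry by design: when Neo4j detects a deadlock it aborts one of the two colliding transactions so the other can finish, and the aborted one can simply be re-run. A minimal retry sketch (pure Python; the TransientError class here is a stand-in so the example stays self-contained — the real driver raises neo4j.exceptions.TransientError):

```python
import random
import time

# Stand-in for neo4j.exceptions.TransientError, so this sketch runs
# without the driver installed.
class TransientError(Exception):
    pass

def run_with_retry(work, max_attempts=5, base_delay=0.05):
    """Retry `work` with jittered exponential backoff on TransientError."""
    for attempt in range(1, max_attempts + 1):
        try:
            return work()
        except TransientError:
            if attempt == max_attempts:
                raise
            # Random jitter desynchronizes the retries of colliding workers,
            # making a repeat collision on the same node less likely.
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Demo: a query that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("DeadlockDetected")
    return "ok"

print(run_with_retry(flaky))  # prints: ok
```

This is only an illustration of the general retry pattern, not how AD Miner's `executeParallelRequest` is implemented.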
You are correct. One instance only.
OK, thanks. Then it would be interesting to know whether you can reproduce the bug. There are actually two algorithms that handle the multiprocessing, so as a workaround the second one can be used with the --cluster flag, declaring a cluster of just one database.
Regarding the Domain topic, I suggest running "MATCH (d:Domain) RETURN d.name" and checking whether you are missing Domain nodes (which, again, happens systematically when you load SH1 data with the BHCE ingestor; if so, your only option is to load the data with the legacy GUI).
The query did not return anything. I ended up deleting the whole neo4j installation and all databases. I am working with fresh data now and AD Miner is currently running. Let's see if it goes through smoothly.
Similar errors this time, with everything fresh and legacy data only:
[-]Done in 6.19 s - 774 objects
[105/149] [+]Requesting : Compromisable OUs
scope size : 966 | nb chunks : 80
5%|████████████████▍ | 4/80 [02:50<53:52, 42.53s/it]
[!]{code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2112, clientId=7] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2115, clientId=18]} on NODE(30748), because holders of that lock are waiting for ForsetiClient[transactionId=2112, clientId=7].
Wait list:ExclusiveLock[
Client[2115] waits for [ForsetiClient[transactionId=2112, clientId=7]]]}
[!]multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 286, in executeParallelRequest
for record in tx.run(q):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/work/result.py", line 251, in __iter__
self._connection.fetch_message()
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_common.py", line 180, in inner
func(*args, **kwargs)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_bolt.py", line 658, in fetch_message
res = self._process_message(tag, fields)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_bolt4.py", line 326, in _process_message
response.on_failure(summary_metadata or {})
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/neo4j/_sync/io/_common.py", line 247, in on_failure
raise Neo4jError.hydrate(**metadata)
neo4j.exceptions.TransientError: {code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2112, clientId=7] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2115, clientId=18]} on NODE(30748), because holders of that lock are waiting for ForsetiClient[transactionId=2112, clientId=7].
Wait list:ExclusiveLock[
Client[2115] waits for [ForsetiClient[transactionId=2112, clientId=7]]]}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/__main__.py", line 77, in populate_data_and_cache
neo4j.process_request(neo4j, request_key)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 379, in process_request
result = self.parallelRequest(self, items)
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 637, in parallelRequestLegacy
for _ in tqdm.tqdm(
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/istarmap.py", line 19, in <genexpr>
return (item for chunk in result for item in chunk)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 873, in next
raise value
neo4j.exceptions.TransientError: {code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=2112, clientId=7] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=2115, clientId=18]} on NODE(30748), because holders of that lock are waiting for ForsetiClient[transactionId=2112, clientId=7].
Wait list:ExclusiveLock[
Client[2115] waits for [ForsetiClient[transactionId=2112, clientId=7]]]}
Weird. While this won't fix the bug, can you try with the --cluster option to make use of the other multi-processing algorithm?
I can try.
End result was the same. Okay, so with the --cluster option it errors out much earlier:
[-]Done in 0.05 s - 0 objects
[31/149] [+]Requesting : Set dcsync=TRUE to nodes that can DCSync (GetChanges/GetChangesAll)
scope size : 87601 | nb chunks : 26740
Cluster participation:
: 0%| | 0/26740 [00:00<?, ?it/s][!][Errno 24] Too many open files
[!]Traceback (most recent call last):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/__main__.py", line 77, in populate_data_and_cache
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 377, in process_request
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 542, in parallelRequestCluster
File "/usr/lib/python3.10/multiprocessing/context.py", line 119, in Pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 215, in __init__
File "/usr/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static
File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
File "/usr/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 65, in _launch
OSError: [Errno 24] Too many open files
Cluster participation:
: 0%| | 0/26740 [00:02<?, ?it/s]
[32/149] [+]Requesting : Set dcsync=TRUE to nodes that can DCSync (GenericAll/AllExtendedRights)
scope size : 87601 | nb chunks : 26740
Cluster participation:
: 0%| | 0/26740 [00:00<?, ?it/s][!][Errno 24] Too many open files
[!]Traceback (most recent call last):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/__main__.py", line 77, in populate_data_and_cache
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 377, in process_request
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 542, in parallelRequestCluster
File "/usr/lib/python3.10/multiprocessing/context.py", line 119, in Pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 215, in __init__
File "/usr/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static
File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
File "/usr/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 65, in _launch
OSError: [Errno 24] Too many open files
Cluster participation:
: 0%| | 0/26740 [00:02<?, ?it/s]
[33/149] [+]Requesting : Get list of objects that can DCsync (and should probably not be to)
[-]Done in 0.08 s - 0 objects
[34/149] [+]Requesting : Set path_candidate=TRUE to candidates eligible to shortestPath to DA
[-]Done in 1.27 s - 0 objects
[35/149] [+]Requesting : Set ou_candidate=TRUE to candidates eligible to shortestou to DA
[-]Done in 0.48 s - 0 objects
[36/149] [+]Requesting : Set contains_da_dc=TRUE to all objects that contains a domain administrator
[-]Done in 0.41 s - 95 objects
[37/149] [+]Requesting : Set contains_da_dc=TRUE to all objects that contains a domain controller
[-]Done in 0.31 s - 25 objects
[38/149] [+]Requesting : Set is_da_dc=TRUE to all objects that are domain controller or domain admins
[-]Done in 0.11 s - 0 objects
[39/149] [+]Requesting : Set members_count to groups (recursivity = 5)
scope size : 7798 | nb chunks : 7798
Cluster participation:
: 0%| | 0/7798 [00:00<?, ?it/s][!][Errno 24] Too many open files
[!]Traceback (most recent call last):
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/__main__.py", line 77, in populate_data_and_cache
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 377, in process_request
File "/root/.local/share/virtualenvs/ADMiner-hgMaw5Gu/lib/python3.10/site-packages/ad_miner/sources/modules/neo4j_class.py", line 542, in parallelRequestCluster
File "/usr/lib/python3.10/multiprocessing/context.py", line 119, in Pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 215, in __init__
File "/usr/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool
File "/usr/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static
File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
File "/usr/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 65, in _launch
OSError: [Errno 24] Too many open files
Cluster participation:
: 0%| | 0/7798 [00:02<?, ?it/s]
[40/149] [+]Requesting : Set has_member=True to groups with member, else false
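For reference, OSError [Errno 24] (EMFILE) means the process ran out of file descriptors: each pool worker costs pipes, and each Neo4j session a socket, so tens of thousands of chunks can exhaust the default soft limit (often 1024). A generic sketch for inspecting and raising the per-process limit before spawning workers (Unix-only `resource` module; this is not something AD Miner does itself, just the in-process equivalent of `ulimit -n`):

```python
import resource

# Inspect the current per-process open-file limits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit toward the hard limit (no root needed for that);
# never lower it below its current value.
desired = 4096
if hard != resource.RLIM_INFINITY:
    desired = min(desired, hard)
new_soft = max(soft, desired)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

Alternatively, raising the limit in the launching shell with `ulimit -n` before running AD Miner should have the same effect.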
Can we chat in our Discord server? That will be more efficient.
sure
Sorry, I had the same problem with a particular database and I was able to generate the report this way:
AD-miner -c -cf Report -u neo4j -p Password --cluster 127.0.0.1:7687:2
I don't know why, but if I increase the number of cores I get exactly the same error as you. Best regards.
The collection error is indeed gone if I give it :2 for the cluster. However, the error when computing the domain objects persists.
So, we are still trying to figure out the multi-processing issue that causes the deadlock.
This is pretty hard to investigate, as we have not been able to reproduce the problem despite trying on different sets of data.
As a temporary workaround, we have modified the code so that if the OU cypher fails, AD Miner will continue and eventually write the report. In that case, the control will show as unavailable (grey) in the web interface.
We'll leave this issue open until we figure out the problem and fix it.
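That "continue on failure" behaviour can be pictured roughly like this (a hypothetical sketch, not AD Miner's actual code — `run_requests` and `fake_execute` are made up for illustration): each request is wrapped so a failure marks that control unavailable instead of aborting the whole run:

```python
def run_requests(requests, execute):
    """Run every request; on failure, record the control as unavailable."""
    results = {}
    for name, query in requests.items():
        try:
            results[name] = {"status": "ok", "data": execute(query)}
        except Exception as exc:  # e.g. a deadlock on the OU cypher
            results[name] = {"status": "unavailable", "error": str(exc)}
    return results

# Fake executor that fails only on the OU request.
def fake_execute(query):
    if "OU" in query:
        raise RuntimeError("DeadlockDetected")
    return []

report = run_requests(
    {"domains": "MATCH (d:Domain) RETURN d", "ous": "MATCH (o:OU) RETURN o"},
    fake_execute,
)
print(report["ous"]["status"])  # prints: unavailable
```

The report generator can then render an "unavailable" control in grey while every other control is still computed.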
Closing this, as the issue has been dealt with through Discord.
Describe the bug: After processing the data from the neo4j db, AD-Miner fails with an error when computing domain objects.
This is similar to https://github.com/Mazars-Tech/AD_Miner/issues/88, despite me having neo4j version 4 running.
Additional context: I ran it twice, once collecting the data directly from the database and a second time with the cached data. Both times the same error.