Closed: sharon-chiang closed this issue 3 months ago
I'm not 100% sure what's going on here. `delete_downstream_merge` ...
That suggests that the graph DJ is using doesn't have access to all the same nodes that Spyglass does.
It would help me debug if I knew whether `SpikeSortingOutput` is imported before running. A known issue with `delete_downstream_merge` is that the tables have to be loaded in order to be accessed. When that isn't the case, though, it typically just fails to find them at all, which results in the 'can't delete part' error we had before adding this step.
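The load-order issue can be sketched with a minimal stand-in for the dependency graph (a plain dict here; the real graph is built by DataJoint/NetworkX from whatever tables have been imported — the table names below are illustrative):

```python
# Minimal stand-in for a table-dependency graph: only tables that have
# been imported get registered as nodes.
graph = {
    "`common_session`.`session`": ["`spikesorting_recording`.`sort_group`"],
    "`spikesorting_recording`.`sort_group`": [],
}

def successors(node):
    """Mimic DiGraph.successors: an unknown node raises, as in the errors above."""
    try:
        return graph[node]
    except KeyError as err:
        raise LookupError(f"The node {node} is not in the graph.") from err

# A merge table whose module was never imported is simply absent:
try:
    successors("`spikesorting_merge`.`spike_sorting_output`")
except LookupError as e:
    print(e)
```

Importing the module that defines the table is what adds its node, which is why the import question above matters.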
```python
import spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput
```

Error:

```python
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[7], line 1
----> 1 import spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput

ModuleNotFoundError: No module named 'spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput'; 'spyglass.spikesorting.spikesorting_merge' is not a package
```
Please try

```python
from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
```
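The original traceback is the generic Python behavior for `import a.b.C` when `C` is a class rather than a module; it can be reproduced with the standard library alone:

```python
# `import x.y.z` requires every dotted name to be a module or package.
# os.path.join is a function, so importing it this way fails the same
# way importing a class does:
try:
    import os.path.join
except ModuleNotFoundError as e:
    print(e)  # ...; 'os.path' is not a package

# The `from ... import ...` form binds any attribute, not just modules:
from os.path import join
```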
```python
import spyglass.spikesorting.v0.spikesorting_recording as sgss

(sgss.SortGroup & {
    'nwb_file_name': 'J1620210529_.nwb',
    'sort_group_id': 100
}).cautious_delete()
```
Thanks Chris. That works to delete some, but yields this error:
Ok, I'll look into this. That looks like `cautious_delete` did what it was supposed to, but the table structure results in some issues with DJ's delete process.

EDIT: There's some kind of circularity going on during this delete:
```
[2024-03-25 11:16:08,853][INFO]: Cascading `spikesorting_recording`.`sort_group`, 0/50
[2024-03-25 11:16:08,866][INFO]: Cascading `spikesorting_recording`.`sort_group__sort_group_electrode`, 0/50
[2024-03-25 11:16:08,886][INFO]: Deleting 58 from `spikesorting_recording`.`sort_group__sort_group_electrode`
[2024-03-25 11:16:08,887][INFO]: Cascading `spikesorting_recording`.`sort_group`, 1/50
[2024-03-25 11:16:08,899][INFO]: Cascading `spikesorting_recording`.`spike_sorting_recording_selection`, 0/50
[2024-03-25 11:16:08,912][INFO]: Cascading `spikesorting_recording`.`__spike_sorting_recording`, 0/50
[2024-03-25 11:16:08,925][INFO]: Cascading `spikesorting_artifact`.`artifact_detection_selection`, 0/50
[2024-03-25 11:16:08,938][INFO]: Cascading `spikesorting_artifact`.`__artifact_detection`, 0/50
[2024-03-25 11:16:08,959][INFO]: Deleting 5 from `spikesorting_artifact`.`__artifact_detection`
[2024-03-25 11:16:08,959][INFO]: Cascading `spikesorting_artifact`.`artifact_detection_selection`, 1/50
[2024-03-25 11:16:08,972][INFO]: Cascading `spikesorting_artifact`.`artifact_removed_interval_list`, 0/50
[2024-03-25 11:16:08,985][INFO]: Cascading `spikesorting_sorting`.`spike_sorting_selection`, 0/50
```
> `delete_quick` returns 0, resulting in a loop
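If I understand the cascade logic, the loop can be sketched like this (a hypothetical simplification, not DataJoint's actual code): each pass attempts a quick delete and retries up to a fixed attempt cap, matching the `0/50 ... 1/50` counters in the log above. A delete that always reports 0 rows exhausts the cap.

```python
def cascade(table, delete_quick, max_attempts=50):
    """Retry deleting `table` until rows are actually removed.

    `delete_quick(table)` returns the number of rows deleted. If it
    keeps returning 0 (e.g. due to a circular dependency), the loop
    runs until the attempt cap is hit.
    """
    for attempt in range(max_attempts):
        if delete_quick(table) > 0:
            return attempt + 1  # number of passes needed
    raise RuntimeError(f"max attempts reached cascading {table}")

print(cascade("sort_group", lambda t: 58))  # succeeds on the first pass
```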
Similar issue here when trying to delete one entry from the Nwbfile table so I could re-insert the correct data from the same day:
```
NetworkXError: The node `lfp_merge`.`l_f_p_output` is not in the graph.
```
Hi @xlsun79 - The missing node error can be solved by importing the table and attempting to rerun. If you see a 'max attempt' error even after importing, please post your error stack in the following format:

<details><summary>Error stack</summary>

```python
# Stack here
```

</details>
Thanks @CBroz1 ! I imported all the merge tables and reran, which solved the table-not-in-the-graph error. I didn't run into a max-attempt error, but the following happened. Code:
```python
nwb_file_name = "Lewis20240222_.nwb"
(Nwbfile() & {'nwb_file_name': nwb_file_name}).cautious_delete()
```
I have some updates from trying to debug the last error. I figured the foreign key error may be due to the requirement to delete child tables before deleting the parent table. So I ended up trying to delete sgc.Session(), but then got a different error:
Code:

```python
(sgc.Session() & {'nwb_file_name': nwb_copy_file_name}).cautious_delete()
```
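The child-before-parent ordering described above is just a reverse topological sort of the foreign-key graph. A sketch using only the standard library (the table names and foreign-key layout here are illustrative, not the real Spyglass schema):

```python
from graphlib import TopologicalSorter

# Map each table to the tables it references (its parents):
foreign_keys = {
    "Session": ["Nwbfile"],
    "Recording": ["Session"],
    "SpikeSorting": ["Recording"],
}

# static_order() yields parents before children; deletion must go the
# other way so no row is removed while a child still references it.
insert_order = list(TopologicalSorter(foreign_keys).static_order())
delete_order = list(reversed(insert_order))
print(delete_order)  # children first, Nwbfile last
```

This is the order a cascading delete has to follow, which is why deleting Session before Nwbfile can sidestep a foreign key error.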
Hi @CBroz1 - I was wondering if there's any solution to the error above. I was trying to delete my entry in the sgc.Session() table before deleting it from Nwbfile() so I could reinsert the correct data; otherwise I won't be able to analyze data from that day. Thank you!
In a possibly related case, a user reported that cascade failed on this recording table because it yielded an invalid restriction...

```sql
DELETE FROM `spikesorting_v1_recording`.`__spike_sorting_recording` WHERE ( (`nwb_file_name`="bobrick20231204_.nwb"))
```
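The failure mode — a cascade building a WHERE clause on a column the target table doesn't have — can be reproduced with sqlite3 as a stand-in (the real server is MySQL, which reports it as `Unknown column`; the schema below is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A part table keyed only by an id, with no nwb_file_name column:
con.execute("CREATE TABLE __spike_sorting_recording (recording_id TEXT)")

try:
    # A restriction propagated from an upstream table references a
    # column this table doesn't have:
    con.execute(
        "DELETE FROM __spike_sorting_recording "
        "WHERE nwb_file_name = 'bobrick20231204_.nwb'"
    )
except sqlite3.OperationalError as e:
    print(e)  # no such column: nwb_file_name
```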
I have been able to replicate the `Unknown column` issue and discussed it on the DataJoint Slack.

Submitted as datajoint 1159. Merged into datajoint here: https://github.com/datajoint/datajoint-python/pull/1160

But not released. Should we close @CBroz1 ?
I get the below error when trying to delete from SortGroup. Code:

```python
import spyglass.spikesorting.v0.spikesorting_recording as sgss
(sgss.SortGroup & {'nwb_file_name': 'J1620210529_.nwb',
                   'sort_group_id': 100}).cautious_delete()
```

Error stack:
```python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/classes/digraph.py:899, in DiGraph.successors(self, n)
    898 try:
--> 899     return iter(self._succ[n])
    900 except KeyError as err:

KeyError: '`spikesorting_merge`.`spike_sorting_output`'

The above exception was the direct cause of the following exception:

NetworkXError                             Traceback (most recent call last)
Cell In[8], line 2
      1 import spyglass.spikesorting.v0.spikesorting_recording as sgss
----> 2 (sgss.SortGroup & {'nwb_file_name': 'J1620210529_.nwb',
      3                    'sort_group_id': 100}).cautious_delete()

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:452, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
    449 if not force_permission:
    450     self._check_delete_permission()
--> 452 merge_deletes = self.delete_downstream_merge(
    453     dry_run=True,
    454     disable_warning=True,
    455     return_parts=False,
    456 )
    458 safemode = (
    459     dj.config.get("safemode", True)
    460     if kwargs.get("safemode") is None
    461     else kwargs["safemode"]
    462 )
    464 if merge_deletes:

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:248, in SpyglassMixin.delete_downstream_merge(self, restriction, dry_run, reload_cache, disable_warning, return_parts, **kwargs)
    245 restriction = restriction or self.restriction or True
    247 merge_join_dict = {}
--> 248 for name, chain in self._merge_chains.items():
    249     join = chain.join(restriction)
    250     if join:

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994 try:
    995     cache[self.attrname] = val

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:172, in SpyglassMixin._merge_chains(self)
    161 """Dict of chains to merges downstream of self
    162
    163 Format: {full_table_name: TableChains}.
(...)
    169 delete_downstream_merge call.
    170 """
    171 merge_chains = {}
--> 172 for name, merge_table in self._merge_tables.items():
    173     chains = TableChains(self, merge_table, connection=self.connection)
    174     if len(chains):

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994 try:
    995     cache[self.attrname] = val

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:150, in SpyglassMixin._merge_tables(self)
    147     merge_tables[master_name] = master
    148     search_descendants(master)
--> 150 _ = search_descendants(self)
    152 logger.info(
    153     f"Building merge cache for {self.table_name}.\n\t"
    154     + f"Found {len(merge_tables)} downstream merge tables"
    155 )
    157 return merge_tables

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:148, in SpyglassMixin._merge_tables.@CBroz1
```
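One detail visible in the stack: `_merge_tables` and `_merge_chains` are `functools.cached_property`s, so tables imported after the first access won't appear until the cache entry is cleared (presumably what the `reload_cache` parameter of `delete_downstream_merge` is for — an assumption on my part). A minimal sketch of that caching behavior, with illustrative names:

```python
from functools import cached_property

class Mixin:
    def __init__(self):
        self.known_tables = ["`common_session`.`session`"]

    @cached_property
    def merge_tables(self):
        # Snapshot of whatever was importable at first access.
        return list(self.known_tables)

m = Mixin()
snapshot = m.merge_tables
m.known_tables.append("`spikesorting_merge`.`spike_sorting_output`")
stale = m.merge_tables            # still the old snapshot
del m.__dict__["merge_tables"]    # reset, as a reload_cache flag might do
fresh = m.merge_tables            # now sees the newly imported table
```

This would explain why importing a merge table mid-session isn't always enough on its own.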