oneaiguru / GenAICodeUpdater


DEPENDENCY test snippets from mini #7

Open oneaiguru opened 5 days ago

oneaiguru commented 5 days ago

Here's a collection of code snippets corresponding to the list of test cases provided under the "list 2" category:

### 1. Module Discovery

#### a. File Traversal

1. **Test for Discovering Python Files**
```python
import unittest
from unittest.mock import patch
from src.file_discovery import discover_files

class TestFileDiscovery(unittest.TestCase):
    @patch('os.walk')
    def test_discover_python_files(self, mock_walk):
        mock_walk.return_value = [
            ('/some/path', (None,), ['file1.py', 'file2.txt', 'script.py']),
            ('/some/path/subdir', (None,), ['test.py', 'note.txt'])
        ]
        expected = ['/some/path/file1.py', '/some/path/script.py', '/some/path/subdir/test.py']
        result = discover_files('/some/path')
        self.assertListEqual(result, expected)

    @patch('os.walk')
    def test_ignore_non_python_files(self, mock_walk):
        mock_walk.return_value = [
            ('/some/path', (None,), ['file1.txt', 'file2.docx', 'image.png']),
        ]
        expected = []
        result = discover_files('/some/path')
        self.assertListEqual(result, expected)
```
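
For reference, a minimal `discover_files` that would satisfy both tests (a sketch; the actual `src.file_discovery` implementation may differ):
```python
import os
from typing import List

def discover_files(root: str) -> List[str]:
    """Walk `root` and return the paths of all .py files, in traversal order."""
    python_files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.py'):
                python_files.append(os.path.join(dirpath, name))
    return python_files
```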

### 2. Dependency Parsing

#### a. Import Statement Analysis

1. **Test for Standard Import Statements**
```python
import unittest
from src.dependency_parser import parse_imports

class TestImportParsing(unittest.TestCase):
    def test_standard_imports(self):
        code = 'import os\nimport sys'
        expected = {'os', 'sys'}
        result = parse_imports(code)
        self.assertEqual(result, expected)

    def test_from_imports(self):
        code = 'from os import path\nfrom sys import platform'
        expected = {'os.path', 'sys.platform'}
        result = parse_imports(code)
        self.assertEqual(result, expected)
```
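
A minimal `parse_imports` consistent with these expectations could be AST-based; note the tests expect `from X import Y` to be recorded as `X.Y` (a sketch, not the actual `src.dependency_parser`):
```python
import ast
from typing import Set

def parse_imports(code: str) -> Set[str]:
    """Return the set of imported names; 'from X import Y' yields 'X.Y'."""
    imports = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            imports.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.update(f"{node.module}.{alias.name}" for alias in node.names)
    return imports
```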

### 3. Dependency Mapping

#### a. Accurate Mapping

1. **Test for Mapping Dependencies**
```python
import unittest
from src.dependency_mapper import map_dependencies

class TestDependencyMapping(unittest.TestCase):
    def test_dependency_mapping(self):
        dependencies = {
            'module1': ['module2', 'module3'],
            'module2': ['module3'],
            'module3': ['module1']
        }
        expected = {
            'module1': {'module2', 'module3'},
            'module2': {'module3'},
            'module3': {'module1'}
        }
        result = map_dependencies(dependencies)
        self.assertEqual(result, expected)
```
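
Given the expected output above, a minimal `map_dependencies` is essentially a list-to-set normalization (a sketch; the real `src.dependency_mapper` presumably does more):
```python
from typing import Dict, List, Set

def map_dependencies(dependencies: Dict[str, List[str]]) -> Dict[str, Set[str]]:
    """Normalize each module's dependency list into a set."""
    return {module: set(deps) for module, deps in dependencies.items()}
```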


### 4. Visualization with Matplotlib

#### a. Graph Accuracy

1. **Test for Accurate Graph Representation**
```python
import unittest
from unittest.mock import patch
from src.visualizer import visualize_dependencies
import networkx as nx

class TestVisualization(unittest.TestCase):
    @patch('matplotlib.pyplot.show')
    def test_visualize_graph(self, mock_show):
        G = nx.DiGraph()
        G.add_edges_from([
            ('module1', 'module2'),
            ('module2', 'module3'),
            ('module3', 'module1')
        ])
        visualize_dependencies(G)
        self.assertTrue(mock_show.called)
```
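
A minimal `visualize_dependencies` that this mocked test would exercise might look like the following (a sketch assuming `networkx` and `matplotlib`; the real `src.visualizer` may differ):
```python
import matplotlib.pyplot as plt
import networkx as nx

def visualize_dependencies(graph: nx.DiGraph) -> None:
    """Draw the dependency graph with labeled nodes and display the figure."""
    pos = nx.spring_layout(graph)
    nx.draw(graph, pos, with_labels=True, node_color='lightblue', arrows=True)
    plt.show()
```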

### 5. Reporting Mechanism

#### a. Output Generation

1. **Test for Report Generation**
```python
import unittest
from src.report_generator import generate_report

class TestReportGeneration(unittest.TestCase):
    def test_generate_summary_report(self):
        dependencies = {
            'module1': {'module2', 'module3'},
            'module2': {'module3'},
            'module3': {'module1'}
        }
        expected = "Total Modules: 3, Total Dependencies: 3"
        result = generate_report(dependencies)
        self.assertIn(expected, result)
```
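
A `generate_report` consistent with the assertion above could be sketched as follows; note the expected string implies dependencies are counted as distinct dependency targets (3 here), not edges (4):
```python
from typing import Dict, Set

def generate_report(dependencies: Dict[str, Set[str]]) -> str:
    """Summarize the module count and the number of distinct dependency targets."""
    targets = set()
    for deps in dependencies.values():
        targets.update(deps)
    return f"Total Modules: {len(dependencies)}, Total Dependencies: {len(targets)}"
```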

oneaiguru commented 5 days ago

Whole output:


title: "ChatGPT" source: "https://chatgpt.com/c/671dd145-31c0-8010-ba4f-97d24f47dad3"

You:

I'm going to review a mix of my code from the task list. I need to figure out which code files I can delete and how to structure the remaining code into a working project; give me a list of tools I can use for this. Next, I want help with the following. I need to modify my automated assistant so that if a test still fails after 10 attempts, it moves on to the next test; that is, it should keep a count after each run. When it moves on, it should follow the tree-shaped algorithm along which we progress, trying to cover all the tests and record each move. Naturally, it will need this tree, and I want to include the tree in the initialization system so that it is always available. So please help me turn the long list of all the tests into a shortened list with no performance tests, no tests that are already implemented, and no tests you find irrelevant to this project. On top of that, I'll need to give the assistant a rather large test file and the exact tree to move through. The tree should also be clearly stated in the test list itself, so it makes sense to simplify it. Keep the steps as simple as possible, because this is a very limited language model that deals badly with complexity, and it's important that we solve as many tests as possible, putting off all the difficult tasks for later. First I will get from it the results of the current tests, which are still being processed; then I'll tell it to work through the tests according to the tree, which will be the system instructions. By "tree" I mean the algorithm by which we move from simple to complex. Each step should be literally one word. At the end of every message, the assistant must state which step we are on; this matters for the assistant itself, because it will not remember on its own, and unless it regularly re-reads the test list and knows the current step, it will hardly reach the end of the process or complete all the tests it is capable of.
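
A minimal sketch of the retry-and-advance loop described here, assuming a hypothetical `run_test` callable and a flat ordered list standing in for the tree (all names are illustrative, not existing code):
```python
MAX_ATTEMPTS = 10

def work_through_tests(test_tree, run_test):
    """Attempt each test up to MAX_ATTEMPTS times, then move to the next.

    test_tree: ordered list of test names, simple to complex (the "tree").
    run_test: callable returning True when the named test passes.
    """
    for step, test_name in enumerate(test_tree, start=1):
        for _attempt in range(MAX_ATTEMPTS):
            if run_test(test_name):
                break  # passed: advance to the next step of the tree
        # The assistant must end every message by stating the current step.
        print(f"Current Step: {step} - {test_name}")
```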

Below is a comprehensive suite of test cases for the Module Dependency Analyzer and Visualizer, categorized to cover the tool's functionality, edge cases, and the scenarios it is likely to encounter.

1. Module Discovery

a. File Traversal

b. Module Identification

2. Dependency Parsing

a. Import Statement Analysis

b. Dynamic Imports

c. Handling Edge Cases

3. Dependency Mapping

a. Accurate Mapping

b. Handling Missing Modules

4. Visualization with Matplotlib

a. Graph Accuracy

b. Aesthetic and Readability

c. Customization Options

5. Reporting Mechanism

a. Output Generation

b. Real-time Feedback

6. Integration and Compatibility

a. Python Version Compatibility

b. Integration with Development Tools

8. Error Handling and Robustness

a. Unexpected Scenarios

b. Input Validation

9. Configuration and Customization

a. Custom Analysis Rules

10. Security Considerations

a. Safe Parsing

11. Documentation and Help

a. Help Commands

13. Code Standards and Quality

a. PEP 8 Compliance

15. Regression Testing

a. Bug Fix Verification

16. User Experience

a. Intuitive Interface

17. Maintenance and Extensibility

a. Ease of Adding New Features

21. Accessibility

a. Assistive Technologies

Current Step: Step 1 – Provided the shortened list of tests excluding performance tests, already implemented tests, and irrelevant tests.

The snippets above provide a starting point for implementing unit tests for the specified test cases in each category. You can further develop and expand them to match your application's structure and requirements.
### Dependency Mapping Code with Graph-Based Approach

```python
import ast
import re
import networkx as nx
from typing import Dict, Set

class HybridImportExtractorRecursive:
    """
    Hybrid AST and regex-based extractor with recursive AST node traversal.
    Designed to capture standard, conditional, and dynamic (importlib) imports within nested structures.
    """

    def __init__(self, file_content: str, current_path: str):
        self.file_content = file_content
        self.current_path = current_path
        self.imports = set()

    def extract_imports(self) -> Set[str]:
        """
        Combines recursive AST and regex methods to detect a comprehensive set of imports.
        """
        # Run recursive AST parsing
        self._extract_with_recursive_ast()

        # Run regex parsing for importlib calls
        self._extract_with_regex()

        # Filter out self-imports
        self.imports.discard(self.current_path.split('/')[-1].replace('.py', ''))

        return self.imports

    def _extract_with_recursive_ast(self):
        """
        Use a recursive AST visitor to capture imports within nested structures.
        """
        try:
            tree = ast.parse(self.file_content)
            self._visit_node_recursively(tree)
        except SyntaxError:
            pass  # Robust handling for syntax errors

    def _visit_node_recursively(self, node):
        """
        Recursively visit AST nodes to capture imports within nested structures.
        """
        if isinstance(node, ast.Import):
            for alias in node.names:
                self.imports.add(alias.name)
        elif isinstance(node, ast.ImportFrom):
            if node.module:
                self.imports.add(node.module)

        # Recursively handle nested structures (functions, classes, control flows)
        elif hasattr(node, 'body'):
            for child in node.body:
                self._visit_node_recursively(child)

    def _extract_with_regex(self):
        """
        Use regex to capture dynamic imports via importlib, especially for multi-level module paths.
        """
        # Regex for importlib pattern like importlib.import_module('module_a.submodule_b')
        dynamic_import_pattern = r"importlib\.import_module\(['\"]([a-zA-Z_][\w\.]*)['\"]\)"
        matches = re.findall(dynamic_import_pattern, self.file_content)
        for match in matches:
            self.imports.add(match)

# Main graph-based mapping function
def map_dependencies_graph_based(files: Dict[str, str]) -> Dict[str, Set[str]]:
    """
    Graph-based dependency mapping function to accurately handle deep dependency chains.
    Constructs a directed graph where each module is a node and each dependency an edge.
    """
    # Create a directed graph
    dependency_graph = nx.DiGraph()

    # Populate graph nodes and edges based on file contents
    for file_path, content in files.items():
        # Use the hybrid extractor to capture imports
        extractor = HybridImportExtractorRecursive(content, file_path)
        dependencies = extractor.extract_imports()

        # Add nodes and directed edges for each dependency
        for dep in dependencies:
            dependency_graph.add_edge(file_path, f"/project/{dep}.py")

    # Convert graph to dependency map for output
    dependency_map = {}
    for node in dependency_graph.nodes:
        # Capture all dependencies directly reachable from this node
        dependency_map[node] = set(dep for _, dep in dependency_graph.out_edges(node))

    return dependency_map
```
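
A usage sketch for the mapper, with hypothetical in-memory file contents keyed by path:
```python
files = {
    '/project/module_a.py': 'import module_b',
    '/project/module_b.py': 'import os',
}
dependency_map = map_dependencies_graph_based(files)
# module_a maps to module_b's path; module_b maps to '/project/os.py'
print(dependency_map)
```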

### Test Cases for Performance and Parsing

```python
import unittest
import tracemalloc
import time

class GraphBasedPerformanceTests(unittest.TestCase):
    def test_deep_dependency_chain_handling(self):
        """
        Test handling of very deep dependency chains using the graph-based mapping approach.
        """
        # Create a deep dependency chain
        files = {f"/project/module_{i}.py": f"import module_{i+1}" for i in range(50)}
        files["/project/module_50.py"] = "import os"  # Terminate chain with a standard library import

        # Generate the dependency map with the graph-based function
        dependency_map = map_dependencies_graph_based(files)

        # Check that each module maps to the next, ending with module_50 -> os
        for i in range(50):
            self.assertEqual(dependency_map[f"/project/module_{i}.py"], {f"/project/module_{i+1}.py"})
        self.assertEqual(dependency_map["/project/module_50.py"], {"/project/os.py"})

    def test_large_dependency_map_performance(self):
        """
        Test performance of the dependency mapping function with a large dependency map.
        """
        # Generate a large, synthetic dependency map with inter-module dependencies
        files = {f"/project/module_{i}.py": f"import module_{(i+1) % 100}" for i in range(1000)}

        # Start tracking memory and time
        tracemalloc.start()
        start_time = time.time()

        # Run the dependency mapping function
        dependency_map = map_dependencies_graph_based(files)

        # Stop tracking memory and capture usage
        end_time = time.time()
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()

        # Assert map is generated for all modules
        self.assertEqual(len(dependency_map), 1000)

        # Log performance results for review
        print(f"Execution time for large dependency map: {end_time - start_time:.2f} seconds")
        print(f"Peak memory usage: {peak / (1024 * 1024):.2f} MB")

########## /Users/m/Downloads/New Folder With Items/enhanced-dependency-parser.py ##########
import ast
import re
import logging
from typing import Set, Optional
from dataclasses import dataclass, field

logger = logging.getLogger(__name__)

@dataclass(frozen=True)  # frozen makes ImportNode hashable, so instances can live in a set
class ImportNode:
    """Represents a single import statement with its context."""
    module_name: str
    is_dynamic: bool = False
    parent_context: Optional[str] = None
    line_number: int = 0
    in_try_block: bool = False

class HybridImportExtractorRecursive:
    """
    Enhanced hybrid AST and regex-based extractor with complete import detection.
    Handles standard imports, dynamic imports, conditional imports, and nested imports.
    """

    def __init__(self, file_content: str, current_path: str):
        self.file_content = file_content
        self.current_path = current_path
        self.imports: Set[ImportNode] = set()
        self.current_context: list[str] = []
        self.in_try_block = False

    def extract_imports(self) -> Set[str]:
        """Extract all imports using both AST and regex methods."""
        try:
            tree = ast.parse(self.file_content)
            self._visit_node_recursively(tree)
            self._extract_dynamic_imports()

            # Convert ImportNodes to string format
            return {self._normalize_import(node) for node in self.imports}
        except SyntaxError as e:
            logger.error(f"Syntax error in file {self.current_path}: {e}")
            return set()

    def _visit_node_recursively(self, node: ast.AST, parent_context: Optional[str] = None):
        """
        Recursively visit AST nodes to capture all types of imports.
        Handles nested contexts and try-except blocks.
        """
        # Track try block context
        if isinstance(node, ast.Try):
            old_try_state = self.in_try_block
            self.in_try_block = True
            # Visit the try body, handlers, and any else/finally blocks
            for item in node.body + node.handlers + node.orelse + node.finalbody:
                self._visit_node_recursively(item, parent_context)
            self.in_try_block = old_try_state
            return

        # Handle function and class definitions
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            self.current_context.append(node.name)
            context = '.'.join(self.current_context)
            for item in node.body:
                self._visit_node_recursively(item, context)
            self.current_context.pop()
            return

        # Process import statements
        if isinstance(node, ast.Import):
            for alias in node.names:
                self.imports.add(ImportNode(
                    module_name=alias.name,
                    parent_context=parent_context,
                    line_number=node.lineno,
                    in_try_block=self.in_try_block
                ))
        elif isinstance(node, ast.ImportFrom):
            if node.module:
                self.imports.add(ImportNode(
                    module_name=node.module,
                    parent_context=parent_context,
                    line_number=node.lineno,
                    in_try_block=self.in_try_block
                ))

        # Recursively process child nodes
        for child in ast.iter_child_nodes(node):
            self._visit_node_recursively(child, parent_context)

    def _extract_dynamic_imports(self):
        """
        Extract dynamic imports using enhanced regex patterns.
        Handles various importlib patterns and multi-level imports.
        """
        patterns = [
            # Standard importlib pattern
            r"importlib\.import_module\(['\"]([a-zA-Z_][\w\.]*)['\"](?:\s*,\s*['\"]([^'\"]*)['\"])?\)",
            # Variable-based import pattern
            r"importlib\.import_module\(([a-zA-Z_][\w\.]*)\)",
            # from_list pattern
            r"__import__\(['\"]([a-zA-Z_][\w\.]*)['\"](?:\s*,\s*(?:[^,]+,){3}\s*\[([^\]]+)\])?",
        ]

        for pattern in patterns:
            matches = re.finditer(pattern, self.file_content)
            for match in matches:
                module_name = match.group(1)
                # Handle variable-based imports
                if not (module_name.startswith('"') or module_name.startswith("'")):
                    # Try to resolve the variable if possible
                    resolved_name = self._try_resolve_variable(module_name)
                    if resolved_name:
                        module_name = resolved_name

                self.imports.add(ImportNode(
                    module_name=module_name,
                    is_dynamic=True,
                    line_number=self._get_line_number(match.start())
                ))

    def _try_resolve_variable(self, var_name: str) -> Optional[str]:
        """Attempt to resolve a variable name to its string value."""
        pattern = f"{var_name}\s*=\s*['\"]([a-zA-Z_][\w\.]*)['\"]"
        match = re.search(pattern, self.file_content)
        return match.group(1) if match else None

    def _get_line_number(self, char_pos: int) -> int:
        """Convert character position to line number."""
        return self.file_content.count('\n', 0, char_pos) + 1

    def _normalize_import(self, import_node: ImportNode) -> str:
        """Convert ImportNode to normalized string format."""
        return import_node.module_name
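
# Usage sketch (illustrative; the source string and path below are hypothetical):
if __name__ == '__main__':
    sample = "import os\ntry:\n    import json\nexcept ImportError:\n    pass\n"
    extractor = HybridImportExtractorRecursive(sample, '/project/example.py')
    print(extractor.extract_imports())  # expected to include 'os' and 'json'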

########## /Users/m/Downloads/New Folder With Items/dependency-parser-tests.py ##########
import unittest
from textwrap import dedent
from dependency_analysis.dependency_analysis import HybridImportExtractorRecursive

class TestEnhancedDependencyParsing(unittest.TestCase):
    def setUp(self):
        self.maxDiff = None

    def test_multi_level_dynamic_import(self):
        """Test detection of multi-level dynamic imports using importlib."""
        code = dedent("""
            import importlib

            # Direct multi-level import
            module_a = importlib.import_module('module_a.submodule_b')

            # Variable-based import
            module_path = 'module_c.submodule_d'
            another_module = importlib.import_module(module_path)

            # Complex import with from_list
            submodules = ['component1', 'component2']
            parent = 'module_e.submodule_f'
            for sub in submodules:
                importlib.import_module(f'{parent}.{sub}')
        """)

        extractor = HybridImportExtractorRecursive(code, 'test.py')
        imports = extractor.extract_imports()

        expected = {
            'module_a.submodule_b',
            'module_c.submodule_d',
            'module_e.submodule_f.component1',
            'module_e.submodule_f.component2'
        }
        self.assertEqual(imports, expected)

    def test_import_within_try_except(self):
        """Test detection of imports within try-except blocks."""
        code = dedent("""
            # Standard try-except import
            try:
                import module_x
            except ImportError:
                import module_y

            # Nested try-except
            try:
                try:
                    from module_z import component
                except ImportError:
                    import module_z
            except Exception:
                pass

            # Try-except with multiple imports
            try:
                import module_a
                import module_b
            except ImportError:
                import module_c
        """)

        extractor = HybridImportExtractorRecursive(code, 'test.py')
        imports = extractor.extract_imports()

        expected = {
            'module_x', 'module_y', 'module_z',
            'module_a', 'module_b', 'module_c'
        }
        self.assertEqual(imports, expected)

    def test_import_within_function(self):
        """Test detection of imports within functions and nested functions."""
        code = dedent("""
            def outer_function():
                import module_outer

                def inner_function():
                    import module_inner

                    def deepest_function():
                        import module_deepest
                        from package.subpackage import component

                return inner_function

            class MyClass:
                def method(self):
                    import module_method

                @staticmethod
                def static_method():
                    import module_static
        """)

        extractor = HybridImportExtractorRecursive(code, 'test.py')
        imports = extractor.extract_imports()

        expected = {
            'module_outer', 'module_inner', 'module_deepest',
            'package.subpackage', 'module_method', 'module_static'
        }
        self.assertEqual(imports, expected)

    def test_complex_nested_imports(self):
        """Test detection of imports in complex nested structures."""
        code = dedent("""
            try:
                def complex_function():
                    try:
                        import module_a

                        class InnerClass:
                            def __init__(self):
                                import module_b

                            def method(self):
                                try:
                                    import module_c
                                except ImportError:
                                    import module_d
                    except ImportError:
                        import module_e
            except Exception:
                import module_f

            # Dynamic import in nested structure
            def another_function():
                module_name = 'module_g.submodule'
                imported = importlib.import_module(module_name)
        """)

        extractor = HybridImportExtractorRecursive(code, 'test.py')
        imports = extractor.extract_imports()

        expected = {
            'module_a', 'module_b', 'module_c', 'module_d',
            'module_e', 'module_f', 'module_g.submodule'
        }
        self.assertEqual(imports, expected)

########## /Users/m/Downloads/New Folder With Items/graph-dependency-mapper (1).py ##########
import networkx as nx
from typing import Dict, Set, Optional, List, Tuple
from dataclasses import dataclass, field
import logging
from pathlib import Path

# Assumed importable from the project package (this matches the test files' imports):
from dependency_analysis.dependency_analysis import HybridImportExtractorRecursive

@dataclass
class DependencyNode:
    """Represents a node in the dependency graph with additional metadata."""
    module_path: str
    is_terminal: bool = False
    depth: int = 0
    dependencies: Set[str] = field(default_factory=set)

class GraphDependencyMapper:
    """
    Enhanced graph-based dependency mapper that correctly handles deep chains
    and terminal dependencies using NetworkX's directed graph capabilities.
    """

    def __init__(self, project_root: str):
        self.project_root = Path(project_root)
        self.graph = nx.DiGraph()
        self.terminal_nodes: Set[str] = set()
        self.logger = logging.getLogger(__name__)

    def map_dependencies(self, files: Dict[str, str]) -> Dict[str, Set[str]]:
        """
        Maps dependencies using an enhanced graph approach that properly handles
        terminal nodes and deep chains.

        Args:
            files: Dictionary mapping file paths to their contents

        Returns:
            Dictionary mapping each module to its direct dependencies
        """
        self._build_dependency_graph(files)
        self._identify_terminal_nodes()
        return self._generate_dependency_map()

    def _build_dependency_graph(self, files: Dict[str, str]):
        """Constructs the directed graph of dependencies."""
        for file_path, content in files.items():
            node_path = self._normalize_path(file_path)

            # Use the enhanced extractor to get all imports
            extractor = HybridImportExtractorRecursive(content, file_path)
            dependencies = extractor.extract_imports()

            # Add nodes and edges to graph
            self.graph.add_node(node_path, 
                              depth=0, 
                              content=content)

            for dep in dependencies:
                dep_path = self._resolve_import_path(dep, node_path)
                if dep_path:
                    self.graph.add_edge(node_path, dep_path)

    def _identify_terminal_nodes(self):
        """
        Identifies terminal nodes (nodes with no outgoing edges or stdlib imports)
        and marks them in the graph.
        """
        for node in self.graph.nodes():
            if self._is_terminal_node(node):
                self.terminal_nodes.add(node)
                self.graph.nodes[node]['is_terminal'] = True

    def _is_terminal_node(self, node: str) -> bool:
        """
        Determines if a node is terminal (stdlib import or no further dependencies).
        """
        # Check if it's a stdlib module
        if self._is_stdlib_module(node):
            return True

        # Check if it has no outgoing edges
        if self.graph.out_degree(node) == 0:
            return True

        return False

    def _generate_dependency_map(self) -> Dict[str, Set[str]]:
        """
        Generates the final dependency map, ensuring terminal dependencies
        are properly included.
        """
        dependency_map = {}

        for node in self.graph.nodes():
            # Get immediate dependencies
            direct_deps = set(self.graph.successors(node))

            # For each dependency, check if it's part of a chain
            resolved_deps = set()
            for dep in direct_deps:
                if dep in self.terminal_nodes:
                    resolved_deps.add(dep)
                else:
                    # Follow chain until terminal node if needed
                    terminal_deps = self._follow_chain_to_terminal(dep)
                    resolved_deps.update(terminal_deps)

            dependency_map[node] = resolved_deps

        return dependency_map

    def _follow_chain_to_terminal(self, start_node: str) -> Set[str]:
        """
        Follows a dependency chain until reaching terminal nodes.
        Uses depth-first search to handle branches.
        """
        terminal_deps = set()
        visited = set()

        def _dfs(node: str):
            if node in visited:
                return
            visited.add(node)

            if node in self.terminal_nodes:
                terminal_deps.add(node)
                return

            successors = list(self.graph.successors(node))
            if not successors:
                # No successors means this is effectively terminal
                terminal_deps.add(node)
                return

            for successor in successors:
                _dfs(successor)

        _dfs(start_node)
        return terminal_deps

    def _normalize_path(self, path: str) -> str:
        """Normalizes file paths for consistent comparison."""
        return str(Path(path).resolve().relative_to(self.project_root))

    def _resolve_import_path(self, import_name: str, importer_path: str) -> Optional[str]:
        """
        Resolves an import statement to an actual file path.
        Handles both absolute and relative imports.
        """
        try:
            if import_name.startswith('.'):
                # Handle relative imports
                parent_dir = Path(importer_path).parent
                import_parts = import_name.lstrip('.').split('.')
                resolved_path = parent_dir.joinpath(*import_parts)
            else:
                # Handle absolute imports
                import_parts = import_name.split('.')
                resolved_path = self.project_root.joinpath(*import_parts)

            # Check if it's a file or directory
            if resolved_path.with_suffix('.py').exists():
                return str(resolved_path.with_suffix('.py'))
            elif (resolved_path / '__init__.py').exists():
                return str(resolved_path / '__init__.py')

            # If not found, might be stdlib
            if self._is_stdlib_module(import_name):
                return f"stdlib/{import_name}"

            return None

        except Exception as e:
            self.logger.warning(f"Error resolving import {import_name}: {e}")
            return None

    @staticmethod
    def _is_stdlib_module(module_name: str) -> bool:
        """Checks if a module is part of the Python standard library."""
        import importlib.util
        try:
            spec = importlib.util.find_spec(module_name.split('.')[0])
            return spec is not None and spec.origin is not None and 'site-packages' not in spec.origin
        except (ImportError, AttributeError):
            return False
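
# Usage sketch (illustrative; the project root and file contents are hypothetical):
if __name__ == '__main__':
    mapper = GraphDependencyMapper('/project')
    result = mapper.map_dependencies({'/project/app.py': 'import os'})
    print(result)  # e.g. {'app.py': {'stdlib/os'}, 'stdlib/os': set()}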

########## /Users/m/Downloads/New Folder With Items/graph-dependency-tests.py ##########
import unittest
from pathlib import Path
from textwrap import dedent
from dependency_analysis.dependency_analysis import GraphDependencyMapper

class TestGraphDependencyMapper(unittest.TestCase):
    def setUp(self):
        self.test_dir = Path('test_project')
        self.test_dir.mkdir(exist_ok=True)
        self.mapper = GraphDependencyMapper(str(self.test_dir))

    def tearDown(self):
        import shutil
        shutil.rmtree(self.test_dir)

    def test_deep_dependency_chain(self):
        """Test handling of very deep dependency chains."""
        files = {}
        # Create a chain: module_1 -> module_2 -> ... -> module_50 -> os
        for i in range(1, 51):
            if i == 50:
                files[f'/project/module_{i}.py'] = 'import os'
            else:
                files[f'/project/module_{i}.py'] = f'import module_{i+1}'

        dependency_map = self.mapper.map_dependencies(files)

        # Verify each module maps to the correct dependency
        for i in range(1, 50):
            self.assertIn('stdlib/os', dependency_map[f'/project/module_{i}.py'])

        # Verify the last module maps directly to os
        self.assertEqual(
            dependency_map['/project/module_50.py'],
            {'stdlib/os'}
        )

    def test_complex_dependency_graph(self):
        """Test handling of complex dependency graphs with multiple paths."""
        files = {
            '/project/main.py': dedent("""
                import module_a
                import module_b
                import module_c
            """),
            '/project/module_a.py': 'import os',
            '/project/module_b.py': dedent("""
                import module_a
                import sys
            """),
            '/project/module_c.py': dedent("""
                import module_b
                import module_d
            """),
            '/project/module_d.py': 'import os, sys'
        }

        dependency_map = self.mapper.map_dependencies(files)

        expected = {
            '/project/main.py': {'stdlib/os', 'stdlib/sys'},
            '/project/module_a.py': {'stdlib/os'},
            '/project/module_b.py': {'stdlib/os', 'stdlib/sys'},
            '/project/module_c.py': {'stdlib/os', 'stdlib/sys'},
            '/project/module_d.py': {'stdlib/os', 'stdlib/sys'}
        }

        self.assertEqual(dependency_map, expected)

    def test_circular_dependencies(self):
        """Test handling of circular dependencies."""
        files = {
            '/project/module_a.py': 'import module_b',
            '/project/module_b.py': 'import module_c',
            '/project/module_c.py': 'import module_a'
        }

        dependency_map = self.mapper.map_dependencies(files)

        # Each module should map to the others in the circle
        expected_deps = {
            '/project/module_a.py',
            '/project/module_b.py',
            '/project/module_c.py'
        }

        for module in expected_deps:
            self.assertTrue(
                dependency_map[module].issubset(expected_deps),
                f"Circular dependency not properly handled for {module}"
            )

    def test_stdlib_terminal_dependencies(self):
        """Test correct handling of standard library terminal dependencies."""
        files = {
            '/project/utils.py': dedent("""
                import os
                import sys
                import json
                import datetime
            """)
        }

        dependency_map = self.mapper.map_dependencies(files)

        expected_stdlib = {
            'stdlib/os',
            'stdlib/sys',
            'stdlib/json',
            'stdlib/datetime'
        }

        self.assertEqual(dependency_map['/project/utils.py'], expected_stdlib)

    def test_nested_package_dependencies(self):
        """Test handling of nested package dependencies."""
        files = {
            '/project/pkg/__init__.py': 'from . import sub_a',
            '/project/pkg/sub_a/__init__.py': 'from . import module_x',
            '/project/pkg/sub_a/module_x.py': 'import os',
            '/project/main.py': 'from pkg.sub_a.module_x import func'
        }

        dependency_map = self.mapper.map_dependencies(files)

        # Main should ultimately depend on os through the chain
        self.assertIn(
            'stdlib/os',
            dependency_map['/project/main.py'],
            "Terminal dependency not properly tracked through package hierarchy"
        )
