NASA-AMMOS / slim

Software Lifecycle Improvement & Modernization
https://nasa-ammos.github.io/slim/
Apache License 2.0

Guide on Continuous Testing #144

Closed: yunks128 closed this pull request 4 days ago

yunks128 commented 3 months ago

Purpose

Proposed Changes

Issues

Testing

The changes in this PR have been reviewed and validated through the following steps:

See live preview of changes: https://yunks128.github.io/slim/docs/guides/software-lifecycle/continuous-testing/

riverma commented 3 months ago

Nice progress @yunks128 - some thoughts:

riverma commented 2 months ago

@yunks128 - let's work with @drewm-jpl on the unity-sps repository to test out your latest guide and see where we can add value.

I'm thinking at a minimum, we could work through your guide and propose a PR with:

yunks128 commented 2 months ago

@yunks128 Next steps for unity-sps use case for continuous testing:

  1. Follow the continuous testing best-practice guide to generate a TESTING.md test plan, unit-test and system-test code, and pre-commit hooks. Gather information from the existing test code (https://github.com/yunks128/unity-sps/tree/develop/unity-test).
  2. Gather high-level testing descriptions from @drewm-jpl, generate tests using the continuous testing guide's recommendations (i.e., LLMs; see the sketch after this list), and compare the results against @drewm-jpl's hand-written tests for validation.
  3. Gather the testing automation specifics (i.e., pre-commit hooks, GitHub Actions) that @drewm-jpl has implemented for unity-sps, and feed recommendations back into the continuous testing guide where applicable.

Refer to: unity-sps, unity-sps api test
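
To make step 2 concrete, here is a minimal sketch of how LLM-assisted test drafting could be scripted. It assumes the OpenAI Python client (with `OPENAI_API_KEY` set in the environment) and uses a hypothetical module path and description; the guide's actual model and tooling recommendations may differ.

```python
# Hypothetical sketch: draft unit tests from a high-level description plus
# existing source code, for comparison against hand-written tests (step 2).
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_tests(source_path: str, description: str) -> str:
    """Ask an LLM to draft pytest-style tests with descriptive docstrings."""
    source = Path(source_path).read_text()
    prompt = (
        "Write pytest unit tests for the following module. "
        "Each test must have a docstring stating its purpose, the function "
        "under test, and any related ticket IDs.\n\n"
        f"High-level testing description:\n{description}\n\n"
        f"Module source:\n{source}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example call (hypothetical path and description):
# print(draft_tests("unity_sps/jobs.py", "Jobs must be submitted, monitored, and cancelled."))
```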

sjlewis-jpl commented 1 month ago

I would suggest explicitly mentioning docstrings for test functions, with guidance on what to include. In my experience, these are often neglected in test code, but are no less important. Here are some suggestions for what to include in test function docstrings:

- Describe the purpose of each test (rather than relying on interpreting the test function's name, or reading through the test code itself). For example: """Test for when the direction vector has zero magnitude."""
- Say where to find the function being tested (might be more appropriate for a test module docstring, or class docstring). Not always necessary, but can be useful, especially if there are inherited methods, or if tests for multiple modules are combined into a single test file. For example:

    class ProjectPoint(unittest.TestCase):
        """
        Unit tests for the `projectPoint` function in the `computeTime.py` module.
        """
        def test_projectPoint_2D(self):
            """Test with 2D vectors."""

- When related to a bug fix, call this out (along with a ticket ID, if applicable). See example below.
    - Other relevant IDs can be included, such as for Change Requests, Requirements (if this test is used as a verification), Anomaly Reports, etc.

    """
    Tests to prove a bug fix in the function (ref BUG-REPORT-1234).
    This function should raise an error if the two input vectors are the same.
    Previously, it judged this by summing the elements of the difference vector.
    Now the function computes the magnitude of the difference vector, and
    only raises an error if that magnitude is zero.
    This test calls the function with vectors that used to erroneously return errors,
    and will demonstrate that the errors are no longer raised.
    """
riverma commented 1 month ago

> I would suggest explicitly mentioning docstrings for test functions, with guidance on what to include. [...]
This is a great suggestion @sjlewis-jpl. It's good practice for helping readers understand a test beyond simply having it, and it's critical for assessing whether tests cover the intended scope.

@yunks128 - what do you think? Could we add modifications to prompts or a blurb about inline comments in tests?
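
As one illustration, a prompt modification along these lines could fold @sjlewis-jpl's docstring guidance into the guide's test-generation prompts. The wording below is hypothetical and not yet part of the guide.

```python
# Hypothetical prompt addition reflecting @sjlewis-jpl's docstring guidance;
# the exact wording in the guide's test-generation prompts may differ.
DOCSTRING_GUIDANCE = """
For every test function you generate, include a docstring that:
- states the purpose of the test,
- names the function and module under test,
- calls out related bug fixes, change requests, requirements, or anomaly
  reports with their ticket IDs (e.g., BUG-REPORT-1234), and
- explains, for bug fixes, the previous behavior and the expected new behavior.
Also add short inline comments explaining non-obvious setup and assertions.
"""

def build_test_prompt(module_source: str) -> str:
    """Combine the base test-generation request with the docstring guidance."""
    return (
        "Write unit tests for the module below.\n"
        + DOCSTRING_GUIDANCE
        + "\nModule source:\n"
        + module_source
    )
```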

riverma commented 1 month ago

@yunks128 some comments / questions from the recent talk you gave on this PR:

yunks128 commented 1 month ago

> I would suggest explicitly mentioning docstrings for test functions, with guidance on what to include. [...]
>
> This is a great suggestion @sjlewis-jpl. [...] @yunks128 - what do you think? Could we add modifications to prompts or a blurb about inline comments in tests?

@sjlewis-jpl I appreciate your suggestion! I will do my best to include inline comments in the test code for the continuous testing guide. @riverma

yunks128 commented 1 month ago

> @yunks128 - what do you think? Could we add modifications to prompts or a blurb about inline comments in tests?

@sjlewis-jpl Thanks again! Added the following:

We recommend adding inline comments in your tests to clarify the purpose of each test. These comments should include details on the function being tested, the test type (e.g., bug fix, change request, requirements validation, anomaly reports), and any relevant context. For example:

import unittest
from computeTime import projectPoint  

class TestProjectPoint(unittest.TestCase):
    """
    Unit tests for the `projectPoint` function in the `computeTime.py` module.
    """
    def test_projectPoint_2D(self):
        """
        Purpose: Test the `projectPoint` function with 2D vectors to confirm
        that it raises an error only when the two input vectors are identical.

        Function: `projectPoint(vector1, vector2)`

        Test Type: Bug fix validation (ref BUG-REPORT-1234)

        Description: Previously, the function judged whether two vectors were
        the same by summing the elements of the difference vector, so distinct
        vectors whose difference summed to zero erroneously raised an error.
        The function has been fixed to compute the magnitude of the difference
        vector and raise an error only if that magnitude is zero. This test
        provides input vectors that used to erroneously raise errors, checks
        that the errors are no longer raised, and confirms that identical
        vectors still raise an error.
        """

        # Distinct 2D vectors whose element-wise difference sums to zero;
        # these erroneously raised an error before the fix
        vector1 = [1, 2]
        vector2 = [2, 1]

        # Call the function and assert that it does not raise an error
        try:
            projectPoint(vector1, vector2)
        except ValueError:
            self.fail("projectPoint() raised ValueError unexpectedly with distinct vectors")

        # Identical 2D vectors: the difference vector has zero magnitude,
        # so the function should still raise an error
        vector1 = [1, 2]
        vector2 = [1, 2]

        # Assert that the error is raised for identical vectors
        with self.assertRaises(ValueError):
            projectPoint(vector1, vector2)

if __name__ == '__main__':
    unittest.main()

yunks128 commented 4 days ago

@riverma This is quite strange! The link is not broken! The link works well online as well as locally. I updated the docusaurus.config.js file to set the onBrokenLinks configuration to 'warn' to bypass that error. Would this be ok? https://yunks128.github.io/slim/docs/guides/software-lifecycle/continuous-testing/testing-frameworks/#for-test-code-generation

riverma commented 4 days ago

> @riverma This is quite strange! The link is not broken! [...]

Hey @yunks128 - we did in fact have a broken link. Search for the "Here's our recommended approach to deciding the right model for you to use (see our full list of recommended code generation models):" text within the guide, as shown in the screenshot below. (Screenshot 2024-06-26 at 3:15 PM)

I think we want to keep broken hyperlinks as build failures. Otherwise the SLIM website will be littered with bad links. Could you revert your change to docusaurus.config.js?