vnoelifant / cozmo-companion


Testing dialogue #37

Closed: vnoelifant closed this issue 7 months ago

vnoelifant commented 8 months ago

Learned a lot about mocking! I am going to take tomorrow to try debugging some issues, but please do comment on anything that stands out as needing feedback. Thanks!

vnoelifant commented 8 months ago

@Zaubeerer I also figured out what was going on with the following error we saw last session (pasted below). If my env file sets the variable VOICE=en-US_AllisonV3Voice, marvin gets confused and thinks I am trying to use an OpenAI voice setting that doesn't exist (https://platform.openai.com/docs/guides/text-to-speech). However, I am currently not using marvin's speech functionality (although I plan to).

My current fix is commenting out the VOICE entry in the env file and instead setting the VOICE parameter in the main module, with the Watson Text to Speech voice as the default, like so: VOICE = config("VOICE", default="en-US_AllisonV3Voice"). However, I read that you can also add a fixture that either forces the env variable to a marvin-compatible voice option, or temporarily sets it to the marvin setting and resets it back to the Watson option afterwards. I am not sure which is the best solution for testing. Let me know your thoughts!
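For reference, here is a minimal sketch of that current fix, assuming python-decouple's config is what reads the variable in the main module (the variable name and default are the real ones, the surrounding code is illustrative):

# Sketch of the current fix: the VOICE entry in the .env file is commented out,
# so config() falls back to the Watson Text to Speech default.
from decouple import config

VOICE = config("VOICE", default="en-US_AllisonV3Voice")  # falls back to the Watson voice when VOICE is not set in the environment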

Example Fixtures:

import marvin
import pytest

# Requires the pytest-mock plugin, which provides the `mocker` fixture.
@pytest.fixture(autouse=True)
def mock_speech_settings(mocker):
    # Patch the voice attribute on marvin's already-imported settings object
    # to one of the values its SpeechSettings accepts (e.g. "echo").
    mocker.patch.object(marvin.settings.openai.audio.speech, "voice", "echo")

import pytest

# The built-in `monkeypatch` fixture is function-scoped and cannot be requested
# by a session-scoped fixture, so this creates its own MonkeyPatch instance.
@pytest.fixture(autouse=True, scope="session")
def override_voice_variable():
    mp = pytest.MonkeyPatch()
    mp.setenv("VOICE", "echo")  # Set to a valid value for `marvin`
    yield
    mp.undo()  # Restore the original VOICE (the Watson option) after the test session

=================================== ERRORS ====================================
________________ ERROR collecting tests/test_assistant.py _____________________
venv\Lib\site-packages\_pytest\runner.py:341: in from_call
    result: Optional[TResult] = func()
venv\Lib\site-packages\_pytest\runner.py:372: in <lambda>
    call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
venv\Lib\site-packages\_pytest\python.py:531: in collect
    self._inject_setup_module_fixture()
venv\Lib\site-packages\_pytest\python.py:545: in _inject_setup_module_fixture
    self.obj, ("setUpModule", "setup_module")
venv\Lib\site-packages\_pytest\python.py:310: in obj
    self._obj = obj = self._getobj()
venv\Lib\site-packages\_pytest\python.py:528: in _getobj
    return self._importtestmodule()
venv\Lib\site-packages\_pytest\python.py:617: in _importtestmodule
    mod = import_path(self.path, mode=importmode, root=self.config.rootpath)
venv\Lib\site-packages\_pytest\pathlib.py:567: in import_path
    importlib.import_module(module_name)
..\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py:90: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1387: in _gcd_import
    ???
<frozen importlib._bootstrap>:1360: in _find_and_load
    ???
<frozen importlib._bootstrap>:1331: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:935: in _load_unlocked
    ???
venv\Lib\site-packages\_pytest\assertion\rewrite.py:186: in exec_module
    exec(co, module.__dict__)
tests\test_assistant.py:3: in <module>
    from src.cozmo_companion.assistant import (
src\cozmo_companion\assistant.py:6: in <module>
    import marvin
venv\Lib\site-packages\marvin\__init__.py:1: in <module>
    from .settings import settings
venv\Lib\site-packages\marvin\settings.py:267: in <module>
    settings = Settings()
venv\Lib\site-packages\pydantic_settings\main.py:85: in __init__
    super().__init__(
venv\Lib\site-packages\pydantic_settings\main.py:85: in __init__
    super().__init__(
venv\Lib\site-packages\pydantic_settings\main.py:85: in __init__
    super().__init__(
venv\Lib\site-packages\pydantic_settings\main.py:85: in __init__
    super().__init__(
E   pydantic_core._pydantic_core.ValidationError: 1 validation error for SpeechSettings
E   voice
E     Input should be 'alloy', 'echo', 'fable', 'onyx', 'nova' or 'shimmer' [type=literal_error, input_value='en-US_AllisonV3Voice', input_type=str]
E     For further information visit https://errors.pydantic.dev/2.5/v/literal_error
=========================== short test summary info ===========================
ERROR tests/test_assistant.py - pydantic_core._pydantic_core.ValidationError:...
!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.88s ===============================
Finished running tests!
vnoelifant commented 8 months ago

As discussed in our session, I challenged the overuse of mocking and suggested pytest.monkeypatch and mockito as alternative mocking frameworks.

Make sure to check them out, but always question whether you need to mock at all or whether you could instead recreate the proper objects with fixtures, etc.
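To make that concrete, here is a minimal sketch (hypothetical helper function, not from the repo) of reaching for monkeypatch and real state before a mock object:

# Hypothetical example: prefer real state via monkeypatch over a mock object.
import os
import pytest

def get_voice() -> str:
    # Stand-in for real configuration code that reads the VOICE variable.
    return os.environ.get("VOICE", "en-US_AllisonV3Voice")

def test_get_voice(monkeypatch):
    # No mock needed: monkeypatch sets a real environment variable
    # and automatically restores it after the test.
    monkeypatch.setenv("VOICE", "echo")
    assert get_voice() == "echo"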

Thank you @Zaubeerer, I will take a deeper look and modify as needed. I am used to the flow of starting out with unit tests and then transitioning to integration tests that exercise the real connections. So, if I am sometimes going to mock and sometimes not, is it alright if I have a mixture of unit and integration tests at this point for my app? I am a bit confused about that. If I have both types of tests, should I separate unit tests into one testing module and integration tests into another?

I think I need to take a step back and do things step by step, but here are a couple of additional questions that would help me with next steps:

@Zaubeerer Is it advisable that I go through each piece of functionality and ask whether it should be mocked or not? Or should I just focus on a single test, centered on checking that the AI responds according to expectations?

Zaubeerer commented 8 months ago

Thank you @Zaubeerer, I will take a deeper look and modify as needed. I am used to the flow of starting out with unit tests and then transitioning to integration tests that exercise the real connections.

This sounds good 👍

So, if I am sometimes going to mock and sometimes not, is it alright if I have a mixture of unit and integration tests at this point for my app? I am a bit confused about that.

Like I said, I would avoid mocks as much as possible, but sometimes they are needed. So, having a mix is totally OK, but always question whether the mock is needed.

If I have both types of tests, should I separate unit tests into one testing module and integration tests into another?

I would always mirror the modules. You can have all kinds of tests in one file, in my opinion, as long as the test layout mirrors the package structure. Naturally, you want to separate concerns, so in most cases you will have many unit tests and only some integration tests that test the interfaces.
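As an illustration of that mirroring (the paths match the repo, but the test names and marker are hypothetical), one test module can hold both kinds of tests side by side:

# tests/test_assistant.py -- mirrors src/cozmo_companion/assistant.py
# (hypothetical tests; the "integration" marker would need to be registered in pytest.ini or pyproject.toml)
import pytest

def test_some_pure_helper():
    # Unit test: pure logic, no external services, no mocks needed.
    assert "hello".upper() == "HELLO"

@pytest.mark.integration
def test_real_service_roundtrip():
    # Integration test against the real interface; deselect with `-m "not integration"`.
    pytest.skip("placeholder for a real end-to-end check")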

I think I need to take a step back and do things step by step, but here are a couple of additional questions that would help me with next steps:

@Zaubeerer Is it advisable that I go through each piece of functionality and ask whether it should be mocked or not? Or should I just focus on a single test, centered on checking that the AI responds according to expectations?

Given that we are creating more of a prototype or MVP, I would only write the tests that:

I like to do TDD when I already know well what I want to develop. So, I would often write the test first and the function afterwards. However, I also often start with a Streamlit prototype without any tests at all, or with only some tests for the core functions (e.g. a unit test for a function that calculates energy demand based on input parameters).
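As a sketch of that TDD flow with the energy-demand example (hypothetical code, not from this project): the test is written first, then the simplest implementation that makes it pass.

# Hypothetical TDD example: test first, then the minimal implementation.

def test_calculate_energy_demand():
    # Written first: 10 kW drawn for 2 hours should give 20 kWh.
    assert calculate_energy_demand(power_kw=10, hours=2) == 20

def calculate_energy_demand(power_kw: float, hours: float) -> float:
    # Minimal implementation added afterwards to make the test pass.
    return power_kw * hours  # energy (kWh) = power (kW) * time (h)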

Zaubeerer commented 8 months ago

Hope that helps?

vnoelifant commented 8 months ago

Hope that helps?

@Zaubeerer Yeah, very helpful thank you!!

vnoelifant commented 7 months ago

Minor changes merged. A follow-up branch is planned for the next iteration of testing, focused on testing the fundamental functionality of crucial functions.