Closed: timobrembeck closed this issue 1 year ago
I faced 2 issues:

1. Supported source and target languages for DeepL are not loaded in tests, because this condition is not met:

   ```python
   if "runserver" in sys.argv or "APACHE_PID_FILE" in os.environ:
   ```

2. The CMS loads the supported languages at start-up. This means that the mocked server must be running before it.
What do you think about patching the `ready()` function of the deepl app in tests, similar to e.g. here?
I think we can set the supported target languages to a fixed list in tests - we don't want to interact with the real DeepL API...
> we don't want to interact with the real DeepL API
I assumed we want to do it through the mocked DeepL API. But to implement it that way, the mocked server would have to be started before the CMS.
> What do you think about patching the `ready()` function of the deepl app
In this case we won't cover the loading of the supported languages.
I guess I can just set them directly then:

```python
apps.get_app_config("deepl_api").supported_source_languages = ["de"]
```
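Pinning the languages like this could live in a pytest fixture that runs before each test. A minimal sketch of the idea, using a `SimpleNamespace` stand-in for the Django app registry so it runs without a configured project (the `"deepl_api"` label is from the thread; the language lists and helper names here are assumptions):

```python
# Sketch: pin DeepL's supported languages to a fixed list for tests.
from types import SimpleNamespace

# Stand-in for the real app config that apps.get_app_config("deepl_api")
# would return in the CMS.
deepl_config = SimpleNamespace(
    supported_source_languages=[],
    supported_target_languages=[],
)

def get_app_config(label):
    # Hypothetical helper mirroring django.apps.apps.get_app_config.
    assert label == "deepl_api"
    return deepl_config

def pin_deepl_languages():
    """What a pytest fixture body could do before each test run."""
    config = get_app_config("deepl_api")
    config.supported_source_languages = ["de"]
    config.supported_target_languages = ["en-gb", "en-us"]
    return config
```

In the real test suite, the fixture would call `django.apps.apps.get_app_config("deepl_api")` instead of the stand-in and assign the fixed lists the same way.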
I discovered another strange problem:
If I use `AsyncClient()`, tests are failing on this request:

```python
event_tree = reverse(
    "events",
    kwargs={
        "region_slug": region_slug,
        "language_slug": target_language_slug,
    },
)
response = await client.get(event_tree)
```

The response code is 200, and the response content itself is fine.
Error:

```
/home/tory/Integreat/integreat-cms/.venv/lib/python3.9/site-packages/django/db/models/fields/__init__.py:1358: RuntimeWarning: DateTimeField Event.end received a naive datetime (2023-05-08 00:00:00) while time zone support is active.
```
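For context: Django raises this warning whenever a `DateTimeField` receives a datetime with no `tzinfo` while `USE_TZ = True`. The distinction can be illustrated with the stdlib alone (the timezone chosen here is arbitrary):

```python
# Naive vs. aware datetimes -- the distinction behind Django's warning.
# A naive value like the 2023-05-08 00:00:00 from the traceback has no
# tzinfo attached, which is what triggers the RuntimeWarning.
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

naive = datetime(2023, 5, 8, 0, 0, 0)                     # no tzinfo
aware = naive.replace(tzinfo=ZoneInfo("Europe/Berlin"))   # tzinfo attached

def is_aware(dt: datetime) -> bool:
    """Essentially the check django.utils.timezone.is_aware performs."""
    return dt.utcoffset() is not None
```

So whatever writes `Event.end` during the test must be producing a value like `naive` above rather than `aware`.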
What is particularly odd:
@timoludwig can you see any leads here? 🙈
@seluianova very strange indeed :thinking: Unfortunately, I don't have an idea what could cause this... :unamused:
I also don't know where the date is coming from; in theory the bulk action should only modify the translation objects and not the content objects. I don't see a place where `Event.end` could be reset to a naive local datetime during the DeepL translation...
Maybe, if events are the only problem, we can start with tests for pages and POIs and think about events later?
> I don't see a place where `Event.end` could be reset to a naive local datetime during DeepL translation...
Actually, this error occurs even before the DeepL translation.
> Maybe, if events are the only problem, we can start with tests for pages and POIs and think about events later?
I believe we can :)
So, assuming that the problem is in `AsyncClient()`, I could make these tests synchronous by replacing the async `aiohttp_raw_server` with the sync pytest-httpserver (https://pypi.org/project/pytest-httpserver/#description) for the DeepL mock. In this case all 3 tests (pages, events, POIs) work fine.
@timoludwig do you have any objections if I make the summ_ai tests synchronous in the same way, for consistency?
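For reference, the kind of synchronous mock that pytest-httpserver provides can be sketched with the stdlib alone: a real HTTP server on a local port that the DeepL client can be pointed at. This is a minimal sketch assuming a DeepL-style `/v2/translate` endpoint; the response body shape is illustrative, not the full DeepL schema:

```python
# Minimal synchronous HTTP mock, sketching what pytest-httpserver does.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class MockDeepLHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, then return a canned translation.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        body = json.dumps(
            {"translations": [{"detected_source_language": "DE", "text": "Hello"}]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_mock_server():
    """Start the mock on an OS-assigned free port; return (server, base_url)."""
    server = HTTPServer(("127.0.0.1", 0), MockDeepLHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, f"http://127.0.0.1:{server.server_port}"

def fetch_translation(base_url):
    """Tiny client helper: POST to the mocked endpoint and decode the JSON."""
    with urlopen(base_url + "/v2/translate", data=b"text=Hallo") as response:
        return json.loads(response.read())
```

With pytest-httpserver itself, the equivalent would be roughly `httpserver.expect_request("/v2/translate").respond_with_json(...)` in a fixture, plus pointing the CMS's DeepL base URL at `httpserver.url_for("/")`.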
> So, assuming that the problem is in `AsyncClient()`, I could make these tests synchronous by replacing the async `aiohttp_raw_server` with the sync pytest-httpserver for the DeepL mock. In this case all 3 tests (pages, events, POIs) work fine.
Ok, seems reasonable.
> @timoludwig do you have any objections if I make the summ_ai tests synchronous in the same way, for consistency?
The problem is that the SUMM.AI client only works in an async context, so I'm not sure whether this would work. But maybe I'm missing something... However, since there is a major PR open for SUMM.AI (#1980), I'd suggest waiting until that PR is merged before refactoring the SUMM.AI tests.
It's not ideal to have multiple dependencies for the same task, but I guess it's better than investing hours and hours into debugging this problem :sweat_smile: And it's only a dev dependency...
Motivation
In order to provide a stable machine translation feature, we should add tests for DeepL, similar to what we did for SUMM.AI.
Proposed Solution
Alternatives
Fingers crossed :crossed_fingers: