mt7180 / quaigle

RAG-based LLM application: a project to explore different LLM frameworks such as LlamaIndex, Marvin, and LangChain; other frameworks used: Streamlit, FastAPI

wip: tests on backend #44

Closed mt7180 closed 1 year ago

mt7180 commented 1 year ago

@Zaubeerer, I have a question: I am currently writing the test functions and realizing that some special cases are not covered by the code so far. How should I proceed? Should I implement the fixes in this branch, or let the tests fail and implement the adjustments in separate hotfix branches?

Zaubeerer commented 1 year ago

Depends on how much effort it is to fix :)

Most of the time it takes more time than expected :D

I suggest creating the tests, then commenting out the parametrizations that currently fail and creating follow-up issues for those :)
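
Roughly like this (just a sketch; the import path, endpoint name, and payloads are placeholders for whatever the backend actually exposes):

```python
import pytest
from fastapi.testclient import TestClient

from backend.main import app  # placeholder import path, adjust to the actual module

client = TestClient(app)


@pytest.mark.parametrize(
    "payload, expected_status",
    [
        ({"question": "What is RAG?"}, 200),
        ({"question": ""}, 422),
        # ({"question": None}, 422),  # currently failing -> comment out, open follow-up issue
    ],
)
def test_question_endpoint(payload, expected_status):
    # only the still-passing parametrizations run; failing ones stay documented above
    response = client.post("/qa", json=payload)
    assert response.status_code == expected_status
```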

mt7180 commented 1 year ago

@Zaubeerer: Another question: I would like to use custom markers with pytest to mark and deselect test functions that make AI calls. I was thinking about defining the markers in a pytest.ini file and, in VS Code, adding a pytest configuration in launch.json with the args "-m" and "not ai_call". Do you think it is possible to use this configuration in VS Code testing, or should I start by just invoking each test separately? Do I have to use settings.json to make VS Code pick up the pytest configuration?
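
For illustration, what I have in mind looks roughly like this (just a sketch; the marker name `ai_call` and the test functions are placeholders):

```python
# pytest.ini would declare the marker so pytest does not warn about it:
#   [pytest]
#   markers =
#       ai_call: marks tests that make real AI/LLM API calls
#
# Running only the cheap tests would then be: pytest -m "not ai_call"
import pytest


@pytest.mark.ai_call
def test_summarize_document_with_llm():
    # placeholder: would call the real LLM API, deselected by -m "not ai_call"
    ...


def test_extract_text_from_upload():
    # placeholder: pure local logic, always runs
    ...
```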

Zaubeerer commented 1 year ago

That is a nice idea; however, I suggest creating a follow-up issue for this, so we can focus on the next app.

mt7180 commented 1 year ago

@Zaubeerer Just for information: the current tests are all passing now :tada: