DHARPA-Project / kiara-website

How to create tests for module development purposes in plugin templates? #19

Open MariellaCC opened 11 months ago

MariellaCC commented 11 months ago

Are there best practices to keep in mind or advice on tools to use to create tests while developing modules in plugin templates?

makkus commented 11 months ago

Good question. This is a first draft, and not finalized yet, so take the following with that in mind. Also, happy for any input on how to improve this, or other suggestions related to tests.

Basically, we have two types of tests. "Normal" unit tests go under the 'tests' folder in a project; just create those as you would create normal pytest tests.
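
For illustration, a minimal sketch of such a plain pytest test (the helper function here is made up; in a real plugin you would import it from your plugin package instead of defining it inline):

# tests/test_utils.py -- illustrative only
def normalize_column_name(name: str) -> str:
    # hypothetical helper, stands in for a real utility function from your plugin
    return name.strip().lower().replace(" ", "_")


def test_normalize_column_name():
    assert normalize_column_name("Journal Edges 1902") == "journal_edges_1902"
    # edge case: leading/trailing whitespace
    assert normalize_column_name("  Source ") == "source"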

Then there are the sort-of end-to-end module tests, which are supposed to make it easy to specify an operation and inputs, then test that the results are as expected. Technically, they are also run as pytest tests, and they are kicked off in the test_job_descs.py file under tests, so don't change that unless you know what you are doing.

It works like this:

  1. write a job description under examples/jobs

Those are yaml (or json) files that look like:

operation: logic.and
inputs:
  a: true
  b: false

There are two mandatory keys: operation and inputs.

At the moment, only scalar input types are supported, so in most cases you want to provide a pipeline that contains the operation you want to test, as well as other operations that create the inputs for that target operation from scalars. For example, this pipeline imports csv files from a directory and converts them into a tables value:

pipeline_name: import.tables.from.csv_files
doc: |
  Create a tables value from a folder of csv files.

  Each file will represent a table in the database.

steps:
  - module_type: import.local.file_bundle
    module_config:
      include_file_types:
        - ".csv"
    step_id: import_csv_files
  - module_type: create.tables.from.file_bundle
    step_id: create_tables
    input_links:
      file_bundle: import_csv_files.file_bundle

input_aliases:
  import_csv_files.path: path

output_aliases:
  create_tables.tables: tables

This could be saved under pipelines/tables_from_csv_files.yaml, and then referenced in the job description like:

operation: "${this_dir}/../pipelines/tables_from_csv_files.yaml"
inputs:
  path: "${this_dir}/../data/journals/"

In this example I'm saving the job description as import_journal_tables.yaml. The ${this_dir} variable is the only one supported at the moment, and it gets resolved to the directory the job description file is in (the examples/jobs directory in this case).

This is already a basic form of testing: kiara will run all of the jobs in that folder when you push to Github via Github Actions, and if one of the jobs fails, the GH action will complain. So things like invalid/outdated input fields or processing errors inside the module will be caught.

The next step is to test the results against expected outputs. I will write about that later in another comment.

makkus commented 11 months ago

2. write expressions to test the results

You can test your jobs manually with the commandline (or Python, but that's a bit more tedious in my opinion):

kiara run examples/jobs/import_journal_tables.yaml

This will tell you whether your job will run successfully in principle, and mimic what will happen in the Github Action when pushing.

In most cases we will also want to test that the result we are getting is actually correct, not just available. This is done by adding a folder that is named exactly like the job description file (without extension) to tests/job_tests, so import_journal_tables in our case. Then, we create a file outputs.yaml in that folder, and add property keys and expected values to it, as a dictionary. In our case, that file looks like:

tables::properties::metadata.tables::tables::JournalEdges1902::rows: 321
tables::properties::metadata.tables::tables::JournalNodes1902::rows: 276

The test runner will test the result value against the expected value in this dictionary. The dictionary keys are assembled like:

<OUTPUT_FIELD_NAME>::properties::<PROPERTY_NAME>::<REST_OF_PATH_TO_VALUE>

You can check result properties, available keys, etc. in the cli:

kiara run --print-properties examples/jobs/import_journal_tables.yaml

(this will only work from kiara>=0.5.6 onwards)

This method of testing outputs does not support any non-scalar outputs, so in most cases testing properties is the only possible thing to do.

If you have a scalar result, you can test against it using a dictionary key like:

y::data: False

( <FIELD_NAME>::data: <PYTHON_REPRESENTATION_OF_VALUE>)

For more in-depth tests, you can do those in Python directly. For that, instead of (or in addition to) outputs.yaml, add a file called outputs.py in the folder named after the job to test. Then add one or several functions (name those whatever you like) that can have one or several arguments, named after the result fields you want to test. So, for example, if the result field to test is named tables, you'd do:

from kiara.models.values.value import Value
from kiara_plugin.tabular.models.tables import KiaraTables

def check_tables_result(tables: Value):

    # we can check properties here like we did in the outputs.yaml file
    # for that you need to look up the metadata Python classes, which is something that
    # is not documented yet, not sure how to best do that
    assert tables.get_property_data("metadata.tables").tables["JournalEdges1902"].rows == 321

    # more interestingly, we can test the data itself
    tables_data: KiaraTables = tables.data

    assert "JournalEdges1902" in tables_data.table_names
    assert "JournalNodes1902" in tables_data.table_names

    edges_table = tables_data.get_table("JournalEdges1902")
    assert "Source" in edges_table.column_names
    assert "Target" in edges_table.column_names

As I said, the testing (esp. the checking of results) is a work in progress, but it works reasonably well so far. I'd still like to get some input or more validation that the solution I have is reasonable before locking it in.

makkus commented 11 months ago

One thing I forgot to mention: you can run the tests manually by doing either:

pytest tests

or

make test

makkus commented 11 months ago

Ah, and another thing that is relevant in the context of generating documentation. I tried to design the testing around example job descriptions, because I think we should also render the examples in the plugin documentation itself. In what form exactly (yaml files, python API examples, ...) is up to us to decide, but they contain all the information (esp. if we add well-written 'doc' fields to them) to create a useful examples section for each plugin (or in other parts of the documentation).

makkus commented 10 months ago

Some updates: to make writing those tests a bit more efficient, I've added support for a special 'init' example job description. This is a job description as described above, but it is run before any other job description invocation. This lets you prepare the kiara context in which the test runs with some example data (using the save key), which in turn can then be used in your specific test job.
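
A minimal sketch of what such an init job could look like (the operation, input and alias names are placeholders; see the tabular plugin referenced below for a real one):

# examples/jobs/init.yaml (illustrative names)
operation: some.onboarding.operation
inputs:
  some_input: "${this_dir}/../data/example_data"
save:
  # store this output under an alias, so later test jobs can use it
  some_output: "example_alias"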

For reference, have a look at the kiara_plugin.tabular plugin, for example this test:

Any questions, just ask.

MariellaCC commented 10 months ago

Thanks a lot for the info, I am now trying to experiment, and as a first step, I am recapping via a to-do. Concerning the 1st type of tests, @makkus, you wrote "just create those as you would create normal pytest tests", but could you please elaborate: does this apply in a module development scenario, and if it does, could you point to an example that would be relevant for this kind of scenario?

If I understand right, the other insights in the current discussion item relate to the 2nd type of tests, which are the end-to-end module tests? (I will experiment on those first and it's possible that I will have additional questions once I am there)

In the current question, I am trying to make sure that I understand the added value of creating unit tests (pytest / 1st type of tests you mentioned) in addition to the end-to-end module tests (2nd type of tests you mentioned).

makkus commented 10 months ago

It's really not much different from when you write a normal Python application. You'd use pytest mainly to test utility functions, with different inputs & esp. edge cases. Since modules typically don't have a lot of code, you might not have any of those. End-to-end/integration tests are not that different really, and they are also run via pytest in our case; there is just this little framework around them that I described above to make it a bit easier. It's still important to test with (meaningfully) different inputs & esp. edge cases.
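
To illustrate the edge-case point, a sketch using pytest's parametrize (the tokenize helper is hypothetical, standing in for whatever utility function your module uses):

import pytest


def tokenize(text):
    # stand-in for a real helper from your plugin
    return text.split()


@pytest.mark.parametrize(
    "text, expected",
    [
        ("a b", ["a", "b"]),
        ("", []),  # edge case: empty string
        ("  a  ", ["a"]),  # edge case: surrounding whitespace
    ],
)
def test_tokenize(text, expected):
    assert tokenize(text) == expected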

MariellaCC commented 10 months ago

Concerning the support added for the special 'init' example job description, could you please specify from which version of Kiara it is available? Thanks a lot.

makkus commented 10 months ago

Should be available with the latest, current version (0.5.9).

MariellaCC commented 10 months ago

In the case of the current workflow I am working on, data are onboarded from external sources: https://github.com/DHARPA-Project/kiara_plugin.topic_modelling/blob/develop/src/kiara_plugin/topic_modelling/modules/onboarding.py. Does that change anything in the testing approach you described above?

makkus commented 10 months ago

What was your previous testing approach? Basically, it just makes it easier to get data into the testing context, other than that nothing changes.

MariellaCC commented 10 months ago

Well, so far we didn't have a testing procedure in place - this is precisely the scope of this thread.

makkus commented 10 months ago

Ah well, then nothing changes I guess.

You can still do your testing the same way as before, but if you need pre-loaded data to test your module, this makes it easier: instead of writing a pipeline, referring to it in your job description, and using a local file path or url or whatever as the job description input, you can just use the alias(es) of your 'init'-ed values in your job description. For onboarding modules nothing would change, since those would not need such a pipeline and would have local paths/urls as job description inputs anyway.
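
For illustration, such a job description could then look roughly like the sketch below; the operation and field names are made up, and the exact way a stored alias is referenced is an assumption here, so check the tabular plugin's example jobs (and kiara operation explain) for the real form:

# hypothetical job that reuses a value saved by init.yaml
operation: tokenize.corpus
inputs:
  # 'corpus_table' was stored by the init job's 'save' section;
  # the 'alias:' prefix is an assumption, verify against the tabular plugin examples
  corpus_table: "alias:corpus_table"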

As I said, the tabular plugin can serve as an example of how such tests can be specified.

MariellaCC commented 10 months ago

Ah well, then nothing changes, I guess.

My previous testing approach was to use and attach a Jupyter Notebook, as I mentioned already in other threads, since Jupyter usage was, until now, my area of focus for Kiara usage as well as prototyping. This is a bit different now, as these modules are being prepared to be used in a functional front-end app.

From what I understand in general, for our users who are module creators, we should prompt them to do the tests as you described: 1) unit tests, if necessary, according to use cases, and 2) init job descriptions.

As our users are not necessarily software engineers, we will need to document these tests in a user-friendly way, assuming no previous knowledge of testing processes.

makkus commented 10 months ago

Yes, I agree. Not sure who's going to do that, but ideally someone who is on the same level as our supposed target audience here, so they are aware of what information needs to be provided. I guess we can't really document everything concerning testing, it's a non-trivial area of work, so for the more fundamental stuff we have to find good tutorials/docs on the internet and link to them.

Anyone who's writing tests: make notes of the things that weren't clear to you or where you had difficulties when you wrote your first tests, so we can consider that for the content of this part of the documentation.

MariellaCC commented 10 months ago

Here's the result of my first experiment for this procedure (for an init job description test)

I tried adding a test for one single module by creating an init.yaml file in the examples/jobs directory of the plugin I am working on. Here's how my init.yaml file looks (I replaced module/input names with generic ones here):

operation: "operation_name"
inputs:
    operation_name__input_name_1: "input_value"
    operation_name__input_name_2: "input_value"
save:
    # operation_name__output_name: "saved_output_name" (this didn't work for me when prefixing with op name)
    output_name: "saved_output_name"  # this worked for me

for running the operation:

this worked: kiara run init.yaml
this didn't work: make test

I had the following error when trying it while in the jobs directory: "make: *** No rule to make target `test'. Stop."

And from root directory of plugin, here's the error: "py.test make: py.test: No such file or directory make: *** [test] Error 1"

Is there anything I should do before trying a make test or is this only for unit tests maybe?

makkus commented 10 months ago

Right, it seems you don't have pytest installed.

You can do that with pip, like:

pip install -e '.[dev_utils]'

In the project root. And yes, make commands always need to be run in the project root.

MariellaCC commented 10 months ago

Oh, ok, I assumed it was installed by default, sorry. So, from what I understand, the command you wrote would be a first step for users who are using the plugin template and want to create tests:

pip install -e '.[dev_utils]'

(I was following this procedure for modules development https://dharpa.org/kiara.documentation/latest/extending_kiara/creating_modules/the_basics/#pre-loading-a-table-dataset )

makkus commented 10 months ago

Yes, correct. Those dependencies are not included in a plugin's default dependencies, because if they were, they would also be installed whenever an end-user installs the plugin, which is something we don't want.

I guess this is one of the things that applies to any Python project, not just kiara plugins. So maybe we can find some tutorial or similar we can link to. Or we write our own recommended 'create a dev environment' doc section if we decide to have one.

MariellaCC commented 10 months ago

Well, the tutorial I pointed to above is meant for users extending Kiara (so module developers), so we just need to keep in mind to add this instruction when there is an updated version, that's all.

makkus commented 10 months ago

Right, yeah.

makkus commented 10 months ago

Concerning:

save:
    # operation_name__output_name: "saved_output_name" (this didn't work for me when prefixing with op name)
    output_name: "saved_output_name"  # this worked for me

The only valid names here are the output fields of the operation that the job uses (in the 'operation' field). You can get the available ones via kiara operation explain <your_operation_name_or_path_to_pipeline>.

operation_name__output_name might coincidentally be an actual valid name, but in most cases this is not a thing and would never work.

MariellaCC commented 10 months ago

Thanks! So, just recapping: when I tried, as documented in the experiment feedback above, I also did the same for the inputs (operation_name__input_name), so I would like to modify the generic example for single operation testing below.

correct version is:

operation: "operation_name"
inputs:
    input_name_1: "input_value"
    input_name_2: "input_value"
save:
    output_name: "saved_output_name"

Please correct if there's a mistake.

If others try it, I think it is worth noting that the input_name and output_name need to be exactly the same as kiara operation explain indicates, and that some examples may coincidentally look like the syntax is operation_name__output_name, like the examples shared as a reference in this thread (https://github.com/DHARPA-Project/kiara_plugin.tabular/blob/develop/examples/jobs/init.yaml), but that this is a coincidence, both for inputs and outputs.

makkus commented 10 months ago

Yes, correct. One thing to note is that the 'operation' can also be a path to a pipeline file, in which case kiara will run the pipeline. Pipelines are just 'special' operations, and they also have a set of input and output fields.

Again, you can just use kiara operation explain <path_to_pipeline_file> to figure out what the output field names are. In case whoever wrote the pipeline did not adjust the pipeline's input/output field names, the field names might look a bit like the long ones from above.

MariellaCC commented 10 months ago

Yes, absolutely, and worth noting indeed. At the moment, this first experiment was specifically targeted at a single-operation testing scenario; I will recap the same way at a later stage for the pipeline scenario.

MariellaCC commented 9 months ago

Is there a way to explore (from the CLI) data items created/saved via the save key of an examples/jobs/init.yaml file?

makkus commented 9 months ago

Not sure what you mean, you'd do kiara data list & kiara data explain <the_alias> after you ran the job?

MariellaCC commented 9 months ago

Yes, this is what I did, but I couldn't find the data item :'( It might mean that something is wrong with my pipeline; I will investigate and let you know if I can't solve it.

makkus commented 9 months ago

Ok, it's basically the same as what the --save cli argument would do. It's a bit different for tests, since those use a different context that might get deleted after the test run, but as long as you run it in the default context (i.e. without -c <context_name>), the values should be saved in that default context. You should have also seen some output after the run command, telling you which values were saved, and under what aliases.

MariellaCC commented 9 months ago

Ok yes, of course, the context got deleted because I ran it via make test. Thank you, it is working now.

So to recap about creating a pipeline test this time, versus a single operation recapped previously: (please correct if anything is wrong)

1) create a pipeline in the examples/pipelines folder (e.g. named "init.yaml"); the pipeline's syntax follows the normal instructions of the former tutorial: https://dharpa.org/kiara.documentation/latest/extending_kiara/pipelines/assemble_pipelines/

2) do a kiara operation explain your_test_pipeline_name to ensure correct pipeline input names

3) create an init.yaml file in the examples/jobs folder like the example below:

operation: "${this_dir}/../pipelines/init.yaml"
inputs:
  pipeline_input_name_1: "${this_dir}/../data/example_data"
  pipeline_input_name_2: "example_string_input"
  pipeline_input_name_3: ['example','list','input'] 
save:
  pipeline_output_name_1: "example_output_table_1"
  pipeline_output_name_2: "example_output_table_2"

When doing make test, the context is deleted after the test has run. To have the data kept in store, the test needs to be run via: kiara run examples/jobs/init.yaml

MariellaCC commented 9 months ago

Should I get into the results testing now or should I wait for that? (I remember you said it is currently in development if I am not mistaken) PS: can't the results testing act as unit tests in a way?

makkus commented 9 months ago

To have the data kept in store, the test needs to be run via: kiara run examples/jobs/init.yaml

Yes. Basically, that's just running a job in your normal context, like you would run any other 'real' research job. If you wanted (for testing) to separate it from your default context, you could do something like:

kiara --context just_testing_manually run examples/jobs/init.yaml
# and then
kiara --context just_testing_manually data list
# or
kiara --context just_testing_manually run <some_other_job_that_needs_alias_from_init>

But I usually don't bother, and just delete the context whenever necessary.

2 other minor things I sometimes do when writing tests:

# instead of 'make test' you can run pytest manually:
pytest
# and then use all the options that pytest has, for example selecting a single test:
pytest -k create_tables_from_file_bundle
# or use '-s' so the print statements in your test cases aren't swallowed (that will become more important when you are testing the actual results using Python code)

Should I get into the results testing now or should I wait for that? (I remember you said it is currently in development if I am not mistaken)

Up to you. I've already started using it for the tabular plugin, like:

I can't guarantee that it will not change over time, but I'd guess that risk is fairly low atm since I have other stuff to do, and even if it does change, the adjustments to make broken tests work again shouldn't be too bad. Also, it would be good to have someone else verify it's working, and maybe you have some suggestions about any of that, so in that sense, it wouldn't hurt if you added a few result tests.

That being said, just having your modules run with example data (just testing that it runs without error, not verifying results) is already better than most of what we did before, and should give us at least some level of assurance.

PS: can't the results testing act as unit tests in a way?

That's how it's implemented, basically. The kiara testing framework automatically runs the jobs it finds within the context of a unit test. If it also finds an 'outputs.yaml' or 'outputs.py' file in the right place, it will in addition test the results against those. So, in most cases I'd say if you test a few edge case datasets against your modules, that should be good enough in terms of test coverage. If something breaks in production, we'd fix and add a test for that specific case then.

If you have a more complicated plugin with helper methods/classes, it might be a good idea to test those separately, but I expect in most cases that won't be necessary. Again, just having those basic tests is already more than exists for a lot of research code out there.

MariellaCC commented 8 months ago

Where can I consult how the pipeline input and output names are formatted/named by kiara?

makkus commented 8 months ago

Sorry, I don't understand the question, can you give an example?

MariellaCC commented 8 months ago

This is something else, but concerning the Kiara unit tests: I experimented with them on my use case and it worked well. However, I didn't completely figure out how to get the right chaining for that part: tables::properties::metadata.tables::tables::JournalEdges1902::rows. In my case it is, for example: dist_table::properties::metadata.table::table::rows: 3, but I didn't find it very straightforward to work out the chaining. Do you have any insights that may help with retrieving the logic quite easily?

MariellaCC commented 8 months ago

Sorry, I don't understand the question, can you give an example?

Sure. To find the correct pipeline input and output names (to create a test .yml file), I do a kiara operation explain on the pipeline name. I would like to be able to achieve that without having to use the CLI. Is it always pipeline_name__input_name, for example? Where could I check this info (the naming convention for pipeline inputs generated automatically by Kiara)?

I am adding an example below to illustrate input and output names:

operation: "${this_dir}/../pipelines/init.yaml"
inputs:
  load_text_files__path: "${this_dir}/../data/text_corpus/data"
  get_lccn_metadata__file_name_col: "file_name"
  get_lccn_metadata__map: [["2012271201","sn85054967","sn93053873","sn85066408","sn85055164","sn84037024","sn84037025","sn84020351","sn86092310","sn92051386"],["Cronaca_Sovversiva","Il_Patriota","L'Indipendente","L'Italia","La_Libera_Parola","La_Ragione","La_Rassegna","La_Sentinella","La_Sentinella_del_West","La_Tribuna_del_Connecticut"]]
  corpus_distribution__periodicity: "month"
  corpus_distribution__date_col_name: "date"
  corpus_distribution__title_col_name: "publication_ref"
save:
  create_table__table: "corpus_table"
  get_lccn_metadata__corpus_table: "augmented_table"
  corpus_distribution__dist_table: "dist_table"

makkus commented 8 months ago

Do you have any insights that may help retrieving the logic quite easily?

Yeah, this is where a mini framework like this pushes against what's feasible. What I do is use

kiara data explain <the_value> --properties

and then manually chain the keys with :: until I'm at a value I'm interested in. But I'm happy for anybody to suggest a better way to specify this; I just couldn't think of something more intuitive that is still kinda useful in terms of testing. You could just ignore the declarative tests completely, and do all your tests in straight Python? It'd be much more powerful, and you can use all the debug tools you use in normal dev work to investigate values, classes, and their properties. The declarative tests are in no way mandatory, just a quick way to test simple values, but as I said, there's no reason not to just write Python code instead...
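
To illustrate the trade-off, here is the declarative check from earlier next to a rough Python equivalent (assuming the property model mirrors the :: path, as in the outputs.py example further up):

# outputs.yaml
dist_table::properties::metadata.table::table::rows: 3

# outputs.py, roughly equivalent check (attribute chain is an assumption)
def check_dist_table(dist_table):
    assert dist_table.get_property_data("metadata.table").table.rows == 3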

MariellaCC commented 8 months ago

I wanted to have the test suggested by a GPT and try to partially automate it (not sure it's possible, but I wanted to try), but for that, using the CLI won't be possible. Alright, I see; not sure this is a good use case to try to partially automate this part.

makkus commented 8 months ago

Sure, to find about the correct pipeline inputs and outputs names (to create a test .yml file) I do a kiara operation explain on the pipeline name. I would like to be able to achieve that without having to use the CLI. Is it always pipeline_name__input_name for example? where could I check this info (naming convention for pipelines inputs done automatically by Kiara)?

That is only for when you don't specify input_aliases and/or output_aliases.
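
For example, with the aliases from the pipeline earlier in this thread, the pipeline exposes short field names instead of the auto-generated <step_id>__<field_name> ones:

# without these aliases, the pipeline fields would be auto-named along the lines of
# import_csv_files__path and create_tables__tables
input_aliases:
  import_csv_files.path: path

output_aliases:
  create_tables.tables: tables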

Example code for doing it in Python instead of cli would be:

# assuming the `kiara` object below is the KiaraAPI instance
from kiara.api import KiaraAPI
kiara = KiaraAPI.instance()

# this is equivalent to `kiara operation explain`
info = kiara.retrieve_operation_info('/home/markus/projects/kiara/kiara_plugin.tabular/examples/pipelines/init.yaml', allow_external=True)
# if you are only interested in the field names:
print(info.input_fields.keys())

# or get more pipeline-specific info
info = kiara.retrieve_pipeline_info('/home/markus/projects/kiara/kiara_plugin.tabular/examples/pipelines/init.yaml', allow_external=True)
# basically that gives you all the data that you could get via `kiara pipeline explain`, pipeline inputs would be available like:
print(info.pipeline_structure.pipeline_inputs_schema.keys())

MariellaCC commented 8 months ago

Great, thanks a lot for the info!

makkus commented 8 months ago

I wanted to have the test suggested by a GPT and try to partially automate it (not sure it's possible, but I wanted to try), but for that, using the CLI won't be possible. Alright, I see; not sure this is a good use case to try to partially automate this part.

All the info the cli has is available via the API (since the cli uses the API itself), so whatever you need, you can get. Maybe read through all the endpoints in it (they are all documented, but if the docs are not enough or unintuitive don't hesitate to open a ticket). This file is 'user-facing', so I need help to get it right.

makkus commented 8 months ago

Another good way to discover the type of info you can get is by using the debugger in your IDE (VS Code?), and setting a breakpoint before a print or whatever. Then dig through the runtime info; in my experience that's a good way to get familiar with Python classes, their attributes, etc.

MariellaCC commented 8 months ago

since the cli uses the API itself

Ok, I hadn't realized that. Then I definitely need to look closer into the API, thank you.

MariellaCC commented 7 months ago

Hi @makkus, I have a new follow-up question regarding tests: is it possible to have more than one job that gets tested automatically when doing make test? At the moment I create an init.yaml in the examples/jobs folder, with the 3 sections inside it, namely operation, inputs and save. Is there a way to have more than one operation, each with their own inputs, tested when doing make test?

makkus commented 7 months ago

I'm not 100% sure I understand the question, but technically the 'init.yaml' is not really a test itself (although tests will fail if it doesn't work), but only there to set up values inside the test kiara context, which can then in turn be used in other tests.

So, for example, if you look in the tabular plugin:

Worth mentioning that the init.yaml is always run before each job/test (for every other job in jobs), in an empty, temporarily created kiara context. This is a common pattern in testing, where you set up a predictable environment in which you can run your actual tests, and I tried to replicate that pattern for the specific situation where we want to test kiara modules. Often those need some actual values already onboarded and can't operate directly on 'file' values, so this way we don't have to write a whole pipeline for every one of those specific tests that, for example, expects a table value, if that makes sense?

To get an idea how this works, you could 'simulate' the testing that happens in the tabular plugin. You would check out that repo (and make sure you have a virtualenv that has the tabular plugin installed), then you would do:
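
(A rough sketch of that simulation; the second job file name is an assumption based on the folder name mentioned below:)

# from the root of the checked-out kiara_plugin.tabular repo, in a virtualenv with the plugin installed
kiara run examples/jobs/init.yaml
# check which values were saved, and under what aliases
kiara data list
# then run a job that uses one of those aliases
kiara run examples/jobs/pick_column_from_tables.yaml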

The result of that last one would be the single column that was picked using the 'JournalNodes1902' table name (string), and the 'City' column (string), from the value that was stored under the alias 'journals_tables'. In my tests for this plugin, that result is then checked using this outputs.yaml file in the tests/job_tests/pick_column_from_tables folder.

The test framework that I wrote goes through all of the jobs in examples/jobs and runs all of them. It doesn't use init.yaml as an actual test, but it runs init.yaml before every job in examples/jobs, as I mentioned above, to prepare the context for your actual test. If you want to -- in addition to just running the job and making sure it doesn't fail -- also test/validate the job output, you have to create a folder with the same name as the job (in our example, it would be 'pick_column_from_tables') under tests/job_tests, and add an outputs.yaml or outputs.py (or both). If the test framework finds one of those, it will run the checks after it has executed the job.
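
To summarize the layout the test framework expects (all file names except init.yaml and test_job_descs.py are illustrative):

examples/
  data/                            # example input data
  jobs/
    init.yaml                      # optional, run before every other job
    pick_column_from_tables.yaml   # a job = one end-to-end test
  pipelines/                       # pipelines referenced by job descriptions
tests/
  test_job_descs.py                # pytest entry point, don't change
  job_tests/
    pick_column_from_tables/       # same name as the job file (without extension)
      outputs.yaml                 # declarative result checks (optional)
      outputs.py                   # Python result checks (optional)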

I appreciate that all this is a bit involved, but I think it's probably still easier than setting up your testing using 'raw' pytest. But you should feel free to do that if you are more comfortable with that. As long as the code gets tested, it doesn't really matter much how it's done. This is just a practice I think makes sense, and can save some time, but it's not the only way to do things.

MariellaCC commented 7 months ago

I see, yes it does make sense (and this testing framework is really helpful), thanks for re-clarifying the process, it helps.

I found it much easier like that than with raw pytest. I briefly tried to write a raw pytest test for a single module, but I was not sure exactly how to get test data if I wanted to test a single module that uses the output of a previous module, so using the Kiara test framework was much easier. Although probably, I could manage like you said, by using the outputs saved via the init.yaml.

In the meantime, I have an example where I would need (if possible, but if not possible, no problem, it's not a crucial test, and I can skip it) to test that an error is thrown for a job in case there is an input error. Specifically, in this use case, users need to choose between two optional inputs (of distinct types; they can either use the module with an array as an input, or with a table). So I thought that it may be a good idea to check that the job fails and the right exception message is triggered if users use both optional inputs simultaneously (which they shouldn't do), but I'm not sure that it would be needed.

To further illustrate the use case: the example is a tokenization module that accepts either a table or an array to perform the tokenization. So the two inputs are of different types and are both optional.

MariellaCC commented 7 months ago

(But then, in the use case mentioned above, maybe a best practice - though this is not test-related anymore - would be to have two distinct modules, tokenize.corpus_table and tokenize.corpus_array, even if a large part of the code would be similar.)

makkus commented 7 months ago

In the meantime, I have an example where I would need (if possible, but if not possible, no problem, it's not a crucial test, and I can skip it) to test that an error is thrown for a job in case there is an input error.

That's a very good point, and something I was thinking about recently too. Test frameworks typically have a way of testing for exceptions, but for the moment that is unfortunately not supported in ours. The one thing that I'm not sure about atm is how to 'mark' the expected failures: I don't want to put jobs in the examples folder that are supposed to fail, because those also serve as, well, ...examples... for users, and having some that fail is probably confusing.

I guess the best way to do this is adding a second possible location for those jobs, maybe under tests/resources/jobs (or something), and maybe say: if the job name contains 'fail', then only mark the unit test as passed if an exception is thrown? And maybe have an error.yaml and/or error.py, similar to outputs.yaml, that lets you also optionally check the error message? What do you think? Do you have any other ideas? More than happy to hear any kind of suggestions...
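
Until something like that exists, a plain pytest test could already cover the failure case; a rough sketch (the module name, input names, alias references and the run_job call are all assumptions, so check the API docs for the exact signatures):

import pytest

from kiara.api import KiaraAPI


def test_tokenize_fails_with_both_inputs():
    kiara = KiaraAPI.instance()

    # providing both optional inputs at once should make the job fail;
    # the input values are placeholders for previously onboarded aliases
    with pytest.raises(Exception):
        kiara.run_job(
            "tokenize.corpus",
            inputs={
                "corpus_table": "alias:corpus_table",
                "corpus_array": "alias:corpus_array",
            },
        )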

makkus commented 7 months ago

(But then in the use case mentioned above, maybe a best practice -but this is not test-related anymore- would be to have two distinct modules tokenize.corpus_table and tokenize.corpus_array even if a large part of the code would be similar.)

That depends. Would you only ever tokenize a single column for the table? In that case personally I'd probably only have a tokenize_corpus_array operation, and if users want to use it on a table, they'd need to use the table.pick.column operation first to select the column. That would be in line with the modular nature of kiara, anyway. But I don't really know all the context of your situation, so it might not be the best way to do it either, not sure.

MariellaCC commented 7 months ago

if the jobname contains 'fail' then only mark the unit test as passed if an exception is thrown?

That would already be great, even if there is no possibility of checking the message, as the most important thing is that the job fails in such a case. And if the message can be checked, even better, but already having the possibility to test that things fail would be great.