tkeskita / BVtkNodes

Create and execute VTK pipelines in Blender Node Editor
GNU General Public License v3.0

Testing framework #57

Closed thomgrand closed 3 years ago

thomgrand commented 3 years ago

Since BVTK already sports a lot of features and will receive re-iterations of its core (#46), I wanted to start the discussion of how to best test the addon. Here is the first iteration of my suggestion:

I already provided a standard test case, which calls the new Update All node to update all converter/writer nodes and finally checks whether the meshes and files were generated. Some of the test cases are taken from the examples folder in the repo. Custom scripts can be written for extended checking of the scene. In the global time keeper test, the generated cone is checked for validity in each frame. Output meshes can also be written and compared to a reference mesh (test_glyphs_and_writers).

There are obvious benefits to having a testing framework, but here are some pros and cons of this implementation:

Pros

Cons

tkeskita commented 3 years ago

Hi Thomas,

it would be great to have a testing framework. I think @NickGeneva has also suggested exploring this, but I was unsure whether Blender could be bent to this purpose, and what end result to compare against in a test. Here are some first comments (I haven't checked your code yet):

So, the idea is to compare stdout & stderr output to an expected correct reference, and maybe also small mesh objects (or maybe export the mesh to some text format and compare that)? Well, output comparison would be a good start, at least.

It would be better if we didn't need to clutter the repo with binary files. Could it instead work so that you populate the node tree from the JSON export directly?

One thing to consider is maintenance of the tests. Can it be made so that only minor changes are necessary when the Blender version changes and the end result changes? Maintenance is a burden, nobody really likes to do it, so I would like to minimize it.

An aside concerning maintenance: I dread the day when the latest Blender release will require something different or give different results than the latest LTS Blender. I'd like to primarily support the latest LTS because of the maintenance burden, but of course it would be nice to support the latest Blender as well. Comments on this?

I hope I can eliminate need for Update All node in #46.

BR, Tuomo

thomgrand commented 3 years ago

Some more details on the exact procedure: except for two tests, each test really only calls update/write on all nodes (through Update All) and then checks bpy.data.objects or the file system to see whether any output was written. If not, the error_count is increased in the Update All node, which is later queried by the test script (default: test_blender_script.py). If error_count > 0, the test script will exit Blender with a returncode != 0. I thought of these as baseline tests, which are very easy to write and don't need any comparisons, but already reveal crashes in the execution of a node tree. Extended test functionality can also be provided by changing the test script (explained later in the text).
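The exit-code convention described above can be sketched roughly as follows. This is a simplified illustration: returncode_from_errors is a hypothetical helper, and the real logic lives in test_blender_script.py.

```python
import sys

def returncode_from_errors(error_count):
    """Map the error_count accumulated by the Update All node to a
    process return code: 0 means every node updated cleanly."""
    return 0 if error_count == 0 else 1

# At the end of the in-Blender test script one would call something like:
#   sys.exit(returncode_from_errors(update_all_node.error_count))
# so the outer test runner only has to inspect Blender's return code.
```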

So, the idea is to compare stdout & stderr output to an expected correct reference, and maybe also small mesh objects (or maybe export the mesh to some text format and compare that)? Well, output comparison would be a good start, at least.

Currently, stdout & stderr are only shown for the developer in case the test fails. I think comparing stdout & stderr to a reference is potentially prone to error as things like additional addons or different versions of blender give different outputs. It would also impose a strict requirement on the order in which nodes are executed, and would cause problems with debug output during testing. test_global_time_keeper & test_glyphs_and_writers showcase how to compare the meshes: either directly in the provided script, or in the script calling Blender, by using a writer node in the tree which is also called by Update All.

It would be better if we didn't need to clutter the repo with binary files. Could it instead work so that you populate the node tree from the JSON export directly?

I'm also not a huge fan, and I thought about that too. Do you have experience with the stability of the JSON exporter/importer? Locally, for work, I mostly save everything in the .blend file. Some features like keyframes are stored in the .blend file, but very likely not in the .json. I would also need to look into how the import can be scripted.

One thing to consider is maintenance of the tests. Can it be made so that only minor changes are necessary when the Blender version changes and the end result changes? Maintenance is a burden, nobody really likes to do it, so I would like to minimize it.

An aside concerning maintenance: I dread the day when the latest Blender release will require something different or give different results than the latest LTS Blender. I'd like to primarily support the latest LTS because of the maintenance burden, but of course it would be nice to support the latest Blender as well. Comments on this?

I agree that this is a huge issue since testing will always be closely linked to the Blender & VTK versions. The testing framework (BVTKMainExamples etc.) should only require you to change your system environment variable BLENDER_PATH to the newest version. The command line options --background and --python were already available in the oldest available API doc (https://docs.blender.org/manual/en/2.79/advanced/command_line/arguments.html) and will hopefully remain available in future releases.
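As a rough sketch of that invocation (the helper name is illustrative; only BLENDER_PATH, --background and --python come from the discussion above):

```python
import os

def blender_test_command(blend_file, test_script):
    """Build the headless Blender invocation used by the test runner.
    BLENDER_PATH points at the Blender executable to test against."""
    blender = os.environ.get("BLENDER_PATH", "blender")
    return [blender, "--background", blend_file, "--python", test_script]

# A runner would then execute it with subprocess and check the return code:
#   result = subprocess.run(blender_test_command("template.blend",
#                                                "test_blender_script.py"))
#   assert result.returncode == 0
```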

Changes in Blender's scripting environment would be maintainable in the provided .py scripts, which are executed inside Blender. If multiple versions with different scripting APIs are to be supported, one could query the version inside the script using bpy.app.version.
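For example, a version gate could look like this (the cutoff is purely illustrative; bpy.app.version is a (major, minor, patch) tuple):

```python
def needs_legacy_api(version):
    """Return True when the script should fall back to a pre-2.80
    Blender Python API (version is bpy.app.version, a 3-tuple)."""
    return tuple(version) < (2, 80, 0)

# Inside Blender one would call, e.g.: needs_legacy_api(bpy.app.version)
```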

Should results change, it depends on the exact test case: the writer test case writes to the test/output/ directory (cleaned before and after each run) and this output is then processed directly in the unittest. If the VTK output changed between versions, the reference file in test/test_data would need to be updated. Easier to maintain, but much more work-intensive to write: test_global_time_keeper.py tests the correctness of the mesh directly inside the Blender script, checking whether the cone's radius (Blender mesh) is equal to the expected radius (node property). Like the standard test case, the script will exit Blender with a returncode != 0 on failure.
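The direct mesh check can be sketched like this. It is a simplified stand-in: the real test reads the cone object from bpy.data.objects and the expected radius from the node property, and the helper name is hypothetical.

```python
import math

def cone_radius_matches(base_vertices, expected_radius, tol=1e-5):
    """Measure the base-ring radius of a cone mesh (maximal distance
    of the base vertices from the z axis) and compare it to the
    radius set on the node."""
    measured = max(math.hypot(x, y) for x, y, _z in base_vertices)
    return abs(measured - expected_radius) < tol
```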

NickGeneva commented 3 years ago

Hey Thomas,

I love the idea of building some unit testing. I think the best course of action is to try to develop these tests with the end goal being continuous integration (CI). Although running things locally can work initially, I think the proper approach is to set up a clean environment on some platform (e.g. CircleCI or TravisCI) and run the tests there. But don't bother with CI right now until things are running properly.

I agree with Tuomo that ideally the test script would execute the following workflow: Load Blender -> Load BVTK JSON -> Execute update -> Export results -> Compare. So in your test folder you would have a main script that calls individual test scripts, each handling this workflow for a specific problem. (Although I too save everything to .blend files, lol.)

The JSONs would allow us to cleanly set up a bunch of tests (not sure about the data) that maybe reflect the standard VTK examples found here: https://kitware.github.io/vtk-examples/site/Python/

I think the stdout and stderr idea is pretty neat, although I suppose the ideal would be to compare things like vertex locations, texture information, etc. But I have no idea how to do that right now. You could also export VTK objects and compare those with the true ones (this would ignore the VTK-to-Blender converters), not sure.

The tricky part with this library is that it's designed to process data, and you want to run tests with that data, but you don't want that data in your repo.

Regarding @tkeskita's comment, after some investigation it seems to be possible to set up a CI environment with the Blender executable. This would allow us to specify the exact Blender versions we want to run tests on. Examples I have found:

This would literally download a fresh Blender executable of a specific version and then run the tests on this most recent copy, removing any concerns regarding local dev environment discrepancies.

I have some limited experience setting up unit testing on circle CI. The workflow itself is quite nice but setting it up was not the easiest thing in the world.

tkeskita commented 3 years ago

Thomas,

thanks for the clarifications. OK, so not primarily text comparisons, but testing for the existence of the result object or data. I agree, for crash testing this works.

Currently, stdout & stderr are only shown for the developer in case the test fails. I think comparing stdout & stderr to a reference is potentially prone to error as things like additional addons or different versions of blender give different outputs.

Yes, I think there would need to be a test like an re match for only a certain success info line etc. in the output, not a full line-to-line comparison. It may work for some test cases; I understand it depends on what you want to test for.
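A minimal sketch of such a check (the success-line pattern is illustrative, not something the framework prints today):

```python
import re

# Match a single success marker line anywhere in the captured output
# instead of diffing the whole stdout line by line, so extra add-on
# chatter or debug prints don't break the test.
SUCCESS_RE = re.compile(r"^BVTK tests passed$", re.MULTILINE)

def output_indicates_success(captured_stdout):
    """Return True if the captured stdout contains the success line."""
    return SUCCESS_RE.search(captured_stdout) is not None
```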

I'm also not a huge fan, and I thought about that too. Do you have experience with the stability of the JSON exporter/importer?

Sorry, not really, but it's worked pretty nicely for the examples. tree.py uses dictionaries a lot for data conversion; it would benefit from some review and polishing. There's also a WIP, unused function node_tree_from_py which may be interesting to check. An alternative to JSON for building test node trees is to prepare a py file which adds the nodes, see #2. But if the idea is to build lots of node trees, then I think JSON is nice. It's pure text, so you can edit the file directly if you need to change something small. Maybe a feature could be added for some additional custom test py code in the JSON which is run after importing the nodes, to insert keyframes etc.? I wonder if Blender's context requirements prevent that…

Changes in the scripting environment of blender would be maintainable in the provided .py scripts, executed inside blender. If multiple versions with different scripting API are to be supported, one could query the version inside the script using bpy.app.version.

OK, but let's hope we don't need to go there. For the VTK version, I think it's OK to support only a certain (preferably the latest) VTK version available via pip install.

tkeskita commented 3 years ago

@NickGeneva thanks for looking those up! I've got no experience with CI. Yes, let's get back to thinking about CI later on, once we have some tests set up.

thomgrand commented 3 years ago

Thanks for the input. I uploaded a new version that loads the node tree from json using the functions from BVTK. I changed some of the json import/export code to

All tests (except global_time_keeper because of the keyframes) are now initiated through .json using the same template blend file. The template file is just an empty scene with an empty generated node tree, so every user starts from the same state.

If CI could be set up for this purpose, it would be really great in my opinion! My experience with setting it up (especially with so many dependencies) is limited though.

tkeskita commented 3 years ago

Nice!

Would it be possible to start Blender via --factory-startup and create the node tree by commands? That way there would be no need for any template files, if it works.

Would it be easy to write the vtp file in full ascii format, just to make data readable?

thomgrand commented 3 years ago

Would it be possible to start Blender via --factory-startup and create the node tree by commands? That way there would be no need for any template files, if it works.

I tried this using operators to set up the node tree from scratch, but I couldn't get a correct context for them to work. It seems that when running Blender with a Python script, context capabilities are reduced. Here's a code snippet that works inside Blender, but not through the script (bpy.context.area is None).

bpy.context.area.type = "NODE_EDITOR"  # Change the UI to the node editor type
bpy.context.area.ui_type = 'BVTK_NodeTreeType'  # And switch to the BVTK node tree view
bpy.ops.node.new_node_tree()  # Create a new node tree (assumes none is present in the .blend file)
bpy.ops.node.bvtk_node_tree_import(filepath=json_fname, confirm=False)

I tried some workarounds that people posted on stackexchange, but nothing worked for me. I think one blend file for the bulk of tests is manageable.

Would it be easy to write the vtp file in full ascii format, just to make data readable?

Yes, there's a property on the XMLPolyDataWriter that handles the encoding (DataMode), which can use full ASCII mode. Alternatively, one could switch writers (XMLPolyDataWriter -> PolyDataWriter) in the JSON test setup and use the .vtk extension. This, however, increases the potential file size. The only benefit I can see is if you plan to compare it to a target mesh manually, which would require parsing the file. Higher-level libraries like pyvista can read both formats. If you plan on comparing very small data (< 30 points/cells), I think the simplest way would be to compare the mesh directly in the Blender Python script, or to write the points/cells into the unittest and compare them to the loaded data.
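The "compare very small data directly" variant could look roughly like this (an illustrative helper; the expected points would be hard-coded in the unittest):

```python
def points_match(actual, expected, tol=1e-6):
    """Compare two small point lists coordinate by coordinate with a
    tolerance, as suggested for tiny (< 30 points/cells) reference
    data. Points are (x, y, z) tuples."""
    if len(actual) != len(expected):
        return False
    return all(
        abs(a - e) < tol
        for pa, pe in zip(actual, expected)
        for a, e in zip(pa, pe)
    )
```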

On a side note: I stopped using the .vtk formats where possible, since I noticed VTK marked them as legacy a long time ago.

tkeskita commented 3 years ago

OK, thanks for the info! One .blend is OK. For ASCII vtp I was thinking that if you need to place such files into the repo, it would be nice to be able to read the file contents in a text editor, but it's not a big issue.

thomgrand commented 3 years ago

Ok, I can reduce the number of points and write it in ASCII format if that's easier to maintain in the repo.

How would you like to proceed? If you want, I can start with the cleanup (squashing a few commits to get rid of the blend files) and write a small section in the documentation about how tests can be written and executed.

I also marked a change that is not directly connected to the testing framework, but was necessary to update nodes on frame changes inside the .py script.

tkeskita commented 3 years ago

Hi, sounds like a plan! You can either add a section to existing docs or write a separate .md or .rst.

thomgrand commented 3 years ago

I now squashed the commits and added a development section in the documentation that outlines how to run and write tests.

tkeskita commented 3 years ago

Hi Thomas,

nice work! I tried to run python test/test_main.py on Ubuntu 20.04 and Blender 2.83.12. It seems to open Blender but stops at the splash screen. When I close the splash and quit Blender, it proceeds to the next test, opens Blender and again stops at the splash screen. I guess this is not correct behavior? Although when all tests are completed, it says

.....s..
----------------------------------------------------------------------
Ran 8 tests in 27.285s

OK (skipped=1)

Would it be OK to save the blend files with v2.83? They give a version warning when I try to open them.

I wonder about adding the --factory-startup --addons BVtkNodes options. Theoretically there could be another add-on installed which might conflict with BVTKNodes and the tests… but maybe this is unlikely and not needed?

tkeskita commented 3 years ago

I changed some of the json import/export code to

* Make the json export more compact upon request

Was the reason for this only to get rid of whitespace? I like it when small(ish) files are human-readable in text editors; indentation would help there…?

thomgrand commented 3 years ago

nice work! I tried to run python test/test_main.py on Ubuntu 20.04 and Blender 2.83.12. It seems to open Blender but stops at the splash screen. When I close the splash and quit Blender, it proceeds to the next test, opens Blender and again stops at the splash screen. I guess this is not correct behavior? Although when all tests are completed, it says

I just reproduced this on Ubuntu 18.04. It was an issue with the subprocess starting a new shell and should now be fixed.

Would it be OK to save the blend files with v2.83? They give a version warning when I try to open them.

Sure, but since I don't have it installed, I would ask you to resave and upload the file.

I wonder about adding the --factory-startup --addons BVtkNodes options. Theoretically there could be another add-on installed which might conflict with BVTKNodes and the tests… but maybe this is unlikely and not needed?

That could be a possible issue, but addressing it would probably require somewhat more sophisticated scripting to import the BVTK node tree using JSON (see the previous discussion). I think it's currently reasonable to require conflicting add-ons to be disabled while testing.

I changed some of the json import/export code to

* Make the json export more compact upon request

Was the reason for this only to get rid of whitespace? I like it when small(ish) files are human-readable in text editors; indentation would help there…?

It was mostly to save some space on the repo. I re-uploaded the json files with indentations enabled again.

tkeskita commented 3 years ago

Thanks! I did some editorial work on the docs. It seems that the tests run fine in 2.83 even if the blend files are saved in 2.90, so I didn't upgrade those.