ggrosshenny closed this pull request 6 years ago
Thank you for your pull request! Someone will soon check it and start the builds.
Could you let us know about the json dependency and its related license?
@ggrosshenny Thank you for your PR. Being able to save timings in JSON is a very nice idea, and as more and more SOFA components are using JSON, it makes sense to have that.
Now, some feedback about the implementation:
There are a lot of new functions, but there are no tests for any of them.
The way you handle errors in the binding code is deprecated, as it does not follow the Python standard. The proper way is that a failure in binding code must return NULL (and not Py_RETURN_NONE). You can add an exception and a custom message with PyErr_SetString(...), but you don't need to do that when the failure comes from PyArg_ParseTuple, as it sets the message itself so that it is standardized.
So, it should be like that:
if(!PyArg_ParseTuple(args, "sO", &id, &tempNode))
    return NULL;
Instead of:
if(!PyArg_ParseTuple(args, "sO", &id, &tempNode))
{
    PyErr_BadArgument();
    Py_RETURN_NONE;
}
You are commenting the Python binding code using Doxygen, thank you very much for that! As these are Python functions, it would be great if you could also provide Python docstrings (as this is the Python standard for documenting code). You are using the SP_MODULE_METHOD macro, which has an empty docstring by default... SOFA is missing a macro to specify the docstring (it exists for SP_CLASS_METHOD_DOC). So to document your functions you first need to add a new macro close to SP_MODULE_METHOD with:
#define SP_MODULE_METHOD_DOC(MODULENAME, M, D) {#M, MODULENAME##_##M, METH_VARARGS, D},
Then you can document the Python functions in a way similar to when using "SP_CLASS_METHOD_DOC", for example:
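For instance, in the module's method table (a sketch only: the docstring text here is made up, and setOutputType is one of the functions added in this PR):

// Instead of SP_MODULE_METHOD(Sofa, setOutputType):
SP_MODULE_METHOD_DOC(Sofa, setOutputType,
    "setOutputType(id, type)\n"
    "Set the output type of the timer 'id' to 'type' (stdout, JSON or LJSON).")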
The external libraries should live in the repository's extlibs/* directory, so the json.hpp you added should be moved there.
I have no opinion on whether the JSON output should be activated only when json.hpp is found or whether we should systematically ship this lib with SOFA. @hugtalbot probably has a better opinion on these aspects, as this is connected to the licensing aspects.
For code coherency, I think we should avoid using third-party namespaces directly in the SOFA code base (e.g. 'nlohmann::json'), so I suggest you add a sofa/helper/json.hpp file with more or less the following:
#include "extlibs/json/json.hpp"
namespace sofa{
namespace helper{
using nlohmann::json ;
}
}
That would be nice, as code that wants to use it will then have to do it this way:
#include "sofa/helper/json.hpp"
using sofa::helper::json
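To illustrate what client code gains from the alias, here is a quick sketch using the standard nlohmann::json API through it (the field names and values are just examples):

// Continuing from the include and using-declaration above:
std::string writeTimingRecord()
{
    json record;                 // sofa::helper::json, i.e. nlohmann::json underneath
    record["timer"] = "Animate";
    record["mean_ms"] = 1.42;
    return record.dump(4);       // serialize with 4-space indentation
}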
Thank you for your feedback @damienmarchal !
I've implemented most of your suggested enhancements, but I can't figure out what tests we could create for this PR. Do you have any suggestions?
Hi @ggrosshenny
For the tests, I think the ideal target is to get good code coverage (https://en.wikipedia.org/wiki/Code_coverage). A very strong level of testing may require "path coverage", meaning that every execution path is executed at least once. This is very hard to achieve, so a more manageable target is to get close to "function coverage" + "branch coverage", in which each line of a function is executed at least once. I found that in SOFA a successful strategy is to write tests with "parameter value coverage", in which, for a method taking parameters, all the common values of each parameter are considered. This gives good results and is really easy to do.
More concretely, in your PR you could:
test each Python function with different parameters: at least one test with a valid value, one with out-of-bound values, one with extremal values (min & max), and if there are multiple parameters you either have to do a Cartesian product, use random values, or use fixed lists of parameter sets.
E.g. for Sofa_setOutputType, the following tests would be nice:
Sofa_setOutputType("validID", "JSON")
Sofa_setOutputType("", "JSON")
Sofa_setOutputType("invalid", "JSON")
Sofa_setOutputType("validID", "LJSON")
Sofa_setOutputType("validID", "STDOUT")
Sofa_setOutputType("validID", "")
Sofa_setOutputType("validID", "invalidType")
And in Sofa_end:
Sofa_end("validId", aNode)
Sofa_end("", aNode)
Sofa_end("validId", 1.0) # This shouldn't crash but report a python exception
Sofa_end("validId", None) # This shouldn't crash but report a python exception
A weaker approach to testing (if you really lack time) is to test only the "good" cases and not the invalid values. To me this is only acceptable if the function is not accessible to users and never processes user-provided data. A sketch of a parameter-coverage test is given right below.
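For illustration, such a test on the C++ side could look like this (GTest style; the header path, the IdTimer construction, and the fallback behaviour are assumptions to adapt to the actual API, and the sketch only checks that nothing crashes):

#include <gtest/gtest.h>
#include <sofa/helper/AdvancedTimer.h>

using sofa::helper::AdvancedTimer;

TEST(AdvancedTimerTest, setOutputType_parameterValueCoverage)
{
    AdvancedTimer::IdTimer validId("validID");
    AdvancedTimer::IdTimer emptyId("");

    // Valid type values must be accepted.
    AdvancedTimer::setOutputType(validId, "JSON");
    AdvancedTimer::setOutputType(validId, "LJSON");
    AdvancedTimer::setOutputType(validId, "stdout");

    // Invalid or empty type values must not crash; they should fall back
    // to the 'stdout' default and emit a warning.
    AdvancedTimer::setOutputType(validId, "");
    AdvancedTimer::setOutputType(validId, "invalidType");

    // An empty timer id must not crash either.
    AdvancedTimer::setOutputType(emptyId, "JSON");
}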
More feedback:
PyObject* o = nullptr;
if(!PyArg_ParseTuple(args, "sO", &id, &o))
    return NULL;
advancedTimer::setEnabled(id, PyObject_IsTrue(o)); // PyObject_IsTrue converts the Python object to a boolean
/// In void AdvancedTimer::setOutputType(IdTimer id, std::string type)
else {
    msg_warning("AdvancedTimer") << "Unable to set the type to '" << type
                                 << "'. Switching to the default 'stdout' output. "
                                    "Other valid types are [stdout, JSON, LJSON].";
    data.timerOutputType = STDOUT;
}
Hope this will help,
Hello all,
I've made some modifications to this PR.
First, I've created a new Python script in SofaPython to make the AdvancedTimer easier to use. Now, if you want to use it in a Python scene, the easiest way is to use the measureAnimationTime method from this script. You will have to add this line:
from SofaPython import PythonAdvancedTimer
at the beginning of your script. Then you have to define a bwdInitGraph(self, node) method, at minimum like the following:
def bwdInitGraph(self, node):
    # It will call the simulationStep method too
    PythonAdvancedTimer.measureAnimationTime(node, "timerPoutre",
                                             2, "ljson", "poutre_grid_sofa_timerLog", 0.1, 1000)
    return 0
If you have already defined a bwdInitGraph, you just have to add the measureAnimationTime() call to get your scene analyzed.
To use it in your own way, you can take a look at the PythonAdvancedTimer.py script. You'll find it in the SOFA project at SofaPython/python/SofaPython/PythonAdvancedTimer.py.
Second, I've added two scripts to exploit the light JSON files returned by the AdvancedTimer: TimerLJSONPlot.py and timerLjsonManyFilesPlot.py. I've also added documentation to inform future users how to use the AdvancedTimer and the plotting scripts.
The tests will come soon. Thank you for your feedback, @damienmarchal.
I tested your work yesterday evening and I really appreciated how easy it is to use! I would like to add some remarks that might be interesting for future improvements:
More ideas might follow. Thank you for investing your time in this nice feature, which I will certainly use!
Hi @damienmarchal
I need a review of the first version of the unit tests.
[ci-build]
What do I (we) do now? Should I add other tests? @damienmarchal @guparan
@untereiner sorry, I didn't see that you expected feedback on the tests. I will do that next Monday.
Hi @damienmarchal! Do you have any comments on the unit tests?
Thanks.
To me it is enough ;) So unless someone complains... and the CI is OK, let's go for the ready tag. @hugtalbot, @bcarrez
[ci-build]
I changed the unit tests a little bit. I will add a last commit in a few minutes.
Let's [ci-build] again
last [ci-build] 🙏
[ci-build]
It does not compile anymore.
But why? 😢 [ci-build]
Build is OK now but there are still a bunch of failing tests. [ci-build]
[ci-build]
This PR is an enhancement of the AdvancedTimer. We have added the possibility to get two different JSON outputs. The first one represents all the information without depth limitation; the other is easier to use but does not represent the whole component tree of the simulation.
You can now use the timer in C++ and in Python, dynamically change the output type of each timer separately, and use the output of the AdvancedTimer to create graphs and/or compare data from two different scene timers.
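For example, configuring a timer from C++ can look like this (a quick sketch based on the functions touched by this PR; see the documentation for the exact calls):

#include <sofa/helper/AdvancedTimer.h>

using sofa::helper::AdvancedTimer;

void configureTimer()
{
    // Enable the 'Animate' timer and switch its output to light JSON:
    AdvancedTimer::IdTimer timer("Animate");
    AdvancedTimer::setEnabled(timer, true);
    AdvancedTimer::setOutputType(timer, "LJSON");
}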
You can find an example of JSON output here: JSON_example.txt, and an example of light JSON output here: LJSON_example.txt.
This work was done with the help of @untereiner and @chpaulus. Documentation will come at the end of my internship.
Suggested tag: enhancement