uweschaefer opened this issue 5 years ago
Just to be sure, Uwe, you are talking about the code generated by the redg-extractor, right?
If you are, well... I believe the main problem is not the generated code itself, but rather that the extractor was never used as planned, due to budget constraints. The idea was (and is) to convert the previous, existing data (in our case the old Excel sheets) to RedG code and then minify and optimize that code until it matches RedG's vision of minimal, self-contained test data, ideally separated by test case or test group. This is also mentioned in the documentation.
As you can see, this was never really done, so we end up with these far too large classes containing far too many (and too detailed) entities that in many cases make no use of some of RedG's most basic features, such as default values and dummy generation.
So while I think that RedG is not to blame here and the tool was sadly "misused" (honestly, in a way I should have predicted) simply to get rid of the Excel sheets, I understand that this is not a satisfactory answer and that you cannot simply invest the dozens of person-days probably needed to refactor this properly. I am therefore willing to implement a "fix" for this, even if it encourages the aforementioned "misuse" of the RedG extractor.
My first approach would simply be to split the generated code into a "method tree" with a configurable maximum method length. Not pretty, but it seems like a good and easy compromise to me. What do you think?
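For illustration, here is a minimal sketch of the shape such a split could take. All names and the chunking scheme are hypothetical, not actual redg-extractor output; the point is only that a tiny root method delegates to small leaf methods, so no single method grows past a configured statement limit (or the JVM's hard 64 KB bytecode limit per method):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shape of extractor output split into a "method tree".
// Instead of one huge createAll() containing thousands of statements,
// the statements are chunked into small leaf methods, each kept below
// a configurable maximum length (here: two statements per method).
public class GeneratedTestData {

    private final List<String> statements = new ArrayList<>();

    // Root of the method tree: only delegates, so it stays tiny
    // regardless of how many entities were extracted.
    public List<String> createAll() {
        createChunk0();
        createChunk1();
        return statements;
    }

    // Leaf methods hold at most N generated statements each.
    private void createChunk0() {
        statements.add("entity-1"); // stand-ins for generated entity code
        statements.add("entity-2");
    }

    private void createChunk1() {
        statements.add("entity-3");
    }
}
```

For very large extracts, the root could delegate to intermediate methods that in turn call the leaves, which is what makes it a tree rather than a flat list of chunks.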
Sounds about right to me. Not pressing, not blocking anything, and the importance is arguable; I just thought I'd mention it here, as it might be a purely technical improvement that helps with bytecode-manipulating tools and the like.
I'm not sure anyone would profit from making the (obviously arbitrary) maximum method size configurable, though.
Many tools (for instance, JProfiler) have problems with this kind of generated code.