GitMensch opened 2 years ago
Yes, this surely needs to be optimized. Even so, the slowdown, measured in real-world applications, is not as big as it might appear.
Totally - the DB "behind" it is slower. With that optimization the code will get smaller and clearer, with fewer opportunities to go wrong - those are the more relevant parts.
Also: they are already removed from the "set DB to COBOL" code (most reasonably done by operating directly on the program's buffer), so only the "unpacking" code would benefit from this change.
Seen when I needed to fix an upstream bug which is solved here already: currently `real_data`, the final data, and other "data" buffers are allocated and freed on each access. This just does not make any sense.

Please adjust this to use either a single static area for the numeric "real data" or a local buffer in each place it is used. For anything that isn't float, `COB_MAX_DIGITS` seems enough [for supporting bigger values you can also use a higher value than GC has; in any case it is reasonable to check the digits in ocesql as well]: 38 -> `numeric_real_data[38]`; similar for packed data (define `(COB_MAX_DIGITS + 1) / 2`). Float data would be larger.