Closed PowerUser1234 closed 2 years ago
I have updated what I originally had to say. Does Semeru have anything active to say about the possibility of implementation, or can someone point me in a more helpful direction, based on what I have submitted here alone?
The FYI applies to separate vendor versions of the OpenJDK. Is there someone other than pshipton, involved with Semeru, who can read and consider the spirit of this matter?
There is also https://github.com/eclipse-openj9/openj9/issues/15123 which was answered by Dan.
-I had understood that Eclipse OpenJ9 and Semeru were separate efforts, with separate people involved. How do they relate to one another, if they do at all?
-I have edited the contents of my initial request which sits at the top of this thread. It may warrant some re-reading.
-I had hoped not to have to submit a prototype for this issue, since I don't have the lower-level language and binding knowledge involved. I have WordPress, HTML, CSS, and database skills, and unfortunately don't really have room for anything else. We are at our limits on this issue, and believe we have to tap into the community.
-I had hoped that the explanations of flawed and bugged software, and of slow and inefficient workarounds, would be sufficient, particularly since vendors are otherwise willing to respond to known bugs in their language software on their own. Floating point change is not a personal preference; it is a matter of logic and rationality, as contiguous-range arithmetic accuracy implies on its own in bases 2, 10, and 16. Consistency is the key.
-One reason people can't easily be gathered around floating point repair is the nature of the shut doors on this issue at the Java Community Process et al. Many people are unfortunately resigned to the fact that there is no way to gain traction on issues like this, and so popular interest has plummeted. Oracle has indicated that they themselves may never remove the floating point errors, but those errors should be removed, despite disinterest or even the attitudes other languages take to floating point. I was hoping that Semeru, under IBM, would be more interested in a unique topic like this, even if OpenJ9 is not. I had understood that Semeru was an independent project. How may Semeru people respond to me?
Semeru is the IBM builds of the Eclipse OpenJ9 project. Semeru uses modified Adoptium pipelines to do the builds, but doesn't otherwise change anything.
-Didn't know that. I thought that all these were separate Java projects.
-Is there anyone involved who has been able to read and consider my posts, here and in a few other places, and who is prepared to consider, or even reconsider, vendors shipping Java with floating point errors? Despite everything involved, even the attitudes other languages take to it, it is still a weakness which should be repaired. The difficulty is compounded by the Java Community Process's attitude to this, and its state of total shutdown. Are the other vendors prepared to do something in this area to correct and uphold accuracy and contiguity properties in the two Java floating point types?
Hi @PowerUser1234. All of the vendor JDKs depend on (and work at) the OpenJDK project to decide how the Java language should evolve. The proper place to discuss it is at OpenJDK, as I understand you've already done. If the discussion at OpenJDK does not lead to these kinds of changes to the floating point behaviour in Java that you're looking for, then I don't think any of the downstream projects will take it upon themselves to change things because compatibility with the language standards (and the ability to pass certification testing) is critically important for all these different JDKs to be trusted in the ecosystem.
Your best bet to inspire change in Java is to inspire Oracle, even if you have not had success there yet. They are the stewards of the Java language and hold all the keys required to make such changes. Semeru won't change this behaviour without more buy-in from the Java community, and that will happen at OpenJDK not Semeru.
Dear Java Vendors,
Java floating point errors occur because of a representation issue: the IEEE 754 binary floating point scheme is used for storage, arithmetic, and key class methods of the dot-separator ("decimal point") types in Java, namely float and double, together with incorrect implementation logic inside the java.lang.StrictMath class.
Floating point errors in Java occur in the decimal and hexadecimal "units in the last place" and further, beyond the end of digit accuracy to the right of the dot separator, as is visible from the System.out.println() method. When they occur, there is either a binary overflow or underflow. Those results, when converted back to base 10 or 16, break the contiguous-range property of a ranged base 10 or base 16 number line, at an unknown magnitude higher or lower than the digit-range-accurate value. At such an error instance, the wrong mapping is produced, as a duplicate of another mapping, and the correct mapping is skipped.
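As a minimal illustration of the kind of last-place error described above (my own sketch, not part of the original submission):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation, so the binary sum
        // lands one unit in the last place above the expected decimal 0.3.
        System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // prints false
        // The size of one unit in the last place near 0.3:
        System.out.println(Math.ulp(0.3));
    }
}
```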
The following is a version of the IEEE 754 floating point binary number formula used in Java for float and double:
It is defined by the sign of the number (positive or negative), the mantissa (data digits) of the register in binary (m2), and the exponent of the register in binary (e2), which gives the position of the dot separator relative to its presumed starting point; this all applies to 32-bit or 64-bit numbers in Java (float or double), which are the values for L, 32 or 64.
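The 32-bit layout just described (1 sign bit, 8 exponent bits, 23 mantissa bits for float, with L = 32) can be inspected directly from Java; a sketch of my own, not from the original submission:

```java
public class BitLayout {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.1f);   // raw IEEE 754 bit pattern
        int sign     = (bits >>> 31) & 0x1;      // 1 sign bit
        int exponent = (bits >>> 23) & 0xFF;     // 8 exponent bits, biased by 127 (e2)
        int mantissa =  bits         & 0x7FFFFF; // 23 mantissa data bits (m2)
        System.out.printf("sign=%d exponent=%d mantissa=0x%06X%n",
                          sign, exponent - 127, mantissa);
        // Rebuild the value: (-1)^sign * (1 + m2/2^23) * 2^(e2 - 127).
        double rebuilt = Math.pow(-1, sign)
                       * (1.0 + mantissa / (double) (1 << 23))
                       * Math.pow(2, exponent - 127);
        System.out.println(rebuilt); // close to 0.1, but not exactly 0.1 in decimal
    }
}
```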
It is impossible to understand who could meaningfully need or rely on an inconsistent and chaotic number value system, particularly in base 10. It is an error and a bug situation that should be corrected by a Java language vendor, and even by the IEEE in the document that seems to be the origin of this.
Some people might claim that they want value approximations, noting the time reduction involved. But more important is the need for accuracy, justified by full and concrete thinking with no fuzziness involved.
Floating Point article published by Princeton University: (see the section on Real-world numerical catastrophes.) https://introcs.cs.princeton.edu/java/91float/
Java JDK/JRE runtimes generate floating point arithmetic or java.lang.StrictMath overflow and underflow errors within values as a result of Java source code like the following, in the default base 10 mode but also when hexadecimal notation for base 16 numbers is used (denoted with 0x) inside float and double, the two Java homes of floating point. These are all error phenomena which can and ought to be fixed. The classic example follows:
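A minimal program consistent with the output shown below (my reconstruction; the original source listing was not reproduced in this thread) might look like:

```java
import java.math.BigDecimal;

public class ClassicExample {
    public static void main(String[] args) {
        System.out.println("Program has started...");
        System.out.println(0.1f * 0.1f);   // float:  overshoots the decimal 0.01
        System.out.println(0.1d * 0.1d);   // double: overshoots the decimal 0.01
        // The BigDecimal workaround produces the exact decimal result.
        BigDecimal b = new BigDecimal("0.1");
        System.out.println("[" + b.multiply(b) + "]");
        System.out.println("Program has Finished.");
    }
}
```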
Program has started...
0.010000001
0.010000000000000002
[0.01]
Program has Finished.
The standard that Java turns to for floating point, non-integral arithmetic is IEEE standard 754. While the IEEE is the home of the MAC address worldwide, its standard document 754 doesn't specifically say anything about the base 10 or base 16 digit degradation, via overflow or underflow, that happens on the right-hand side of floating point decimal and hexadecimal values, past the dot separator and past the end of accuracy. The older view that floating point can be an approximation trade-off for range accuracy, a trade-off not contained in its name, which only implies mobile positioning of the dot separator, has become more than problematic. This view now compromises further access to the alternative to approximation, which must carry more weight.
One of the primary views around numbers, arithmetic and functions in modern times is that binary is for computers and denary is for human beings. An approach which mixes these two up at the same time, without maintaining separation between these concerns, only leads to logic confusion and errors. In the OpenJDK, float and double are the main offenders here, though a similar problem occurs with java.lang.StrictMath method calls. Most importantly, this happens in relation to base 10, but base 16 has the same problem within the two Java floating point types. All such examples are logic errors that need to be repaired, in either a default or switchable way, simply because denary accuracy is required out of a compiled and running Java program as efficiently as possible. The computer hardware the writer of this document has in mind is the desktop PC, running any ubiquitous operating system, or configured as a database or internet server, as of 2022: hardware and OS platforms that Java continues to install on by default.
Error workarounds are used to try to cover floating point errors: BigInteger, BigDecimal, the big-math function library, and similar, in Java. They introduce an entire tier of Java software which isn't needed. BigInteger and BigDecimal are slow, are larger in RAM and waste memory, and don't allow the use of arithmetic operator syntax in source code, to say nothing of the absence of an included accurate-calculator class. Moreover, BigInteger and BigDecimal only help outside existing classes and interfaces: if the internals of library classes or interfaces are written naively, or in any floating-point-error-vulnerable fashion, and cannot be decompiled, are source-code inaccessible, or are bound to other (unknown) computer languages in a state of affairs which won't be changing, you are stuck with value errors able to occur and corrupt the software. These are things that developers and their programs always need good and better solutions for. The IEEE should include or state something new in its standard 754 to encourage software language vendors to implement floating point arithmetic more completely for overflow and underflow concerns; but if it doesn't, while the difficulty of change grows, programmers and vendors are left to act on their own somehow. Oracle and the Java Community Process have not taken up repeated bug requests on their bug system, and have chosen not to act further despite multiple discussion attempts, on their relevant public email lists, about the reasons and needs involved. While it would be most appropriate for the most upstream vendor to implement these corrections, in the face of a total refusal, the best remaining option for corrections that should, and must, happen is to inquire of other vendors, which is the purpose of this document.
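To illustrate the verbosity cost of the workaround tier described above (my own sketch; the class and values are illustrative only):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class Workaround {
    public static void main(String[] args) {
        // Primitive doubles: terse operator syntax, but inexact results.
        double d = (0.1 + 0.7) / 0.2;
        System.out.println(d);

        // BigDecimal: exact decimal arithmetic, but every operator becomes
        // a method call, and division needs an explicit MathContext.
        BigDecimal exact = new BigDecimal("0.1")
                .add(new BigDecimal("0.7"))
                .divide(new BigDecimal("0.2"), MathContext.DECIMAL128);
        System.out.println(exact); // prints 4
    }
}
```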
Floating point correction, in its most commensurate manner relative to where the OpenJDK is right now (2022), can't be done with total compatibility, but it can be done with majority compatibility and without the present errors. The entire change set for one version of Java could be contained in a separate, optional installation patch, and there could be a floating point mode on/off switch for the runtime, classes, interfaces, fields, methods and operators with data. The only way to make floating point types and values range-accurate without changing or removing the IEEE 754 binary mapping scheme is to augment or lengthen the 32-bit or 64-bit array by some unknown and varying amount. The impacts of that on the Java language, on hardware, and particularly on the associated default Java libraries are simply too huge to justify, along with departing from 32 and 64 bits in strict terms. The only other alternative is to adjust the curve equation and the binary-to-decimal and binary-to-hexadecimal mappings Java uses for floating point storage and arithmetic, despite what adherence to the standard does (and in the end doesn't) mean:
At the moment, the equation treats whole numbers and fractional values in decimal or hexadecimal differently. If the curve treated fractional and whole values exactly the same, symmetrical in digits around 1 and 0, then the present representation problems would go away. There are penalties involved with that suggested approach, but they are better than escalating calculations raising further difficulties.
A) The first penalty is that the fractional value ranges will end up having to parallel the whole value ranges, which will mean a range reduction from the first state of affairs, this:
to this:
With no way to count the number of floating point error base 10 results, and no unit conversion for any reciprocal, asymptotic fractions, but by percentage calculation from the absolute value of the exponent of 10, I calculate that approximately the following range losses will be involved:
The fractional values will have to make do with a shorter range, but in fact an accurate range without multiple holes and false duplicates in it, instead of a flawed range that lacks internal number mappings outright and responds by issuing incorrect, repeated number maps. It can be argued that since it is not contiguous and contains floating point errors, then in mathematical terms there is no continuous or contiguous range present in float and double at all. It will also be the case that the vast majority of the time, in practice, obtaining float and double values through System.out.println(), these differences will almost never be visible or relevant anyway.
B) Floating point arithmetic binary value calculation will have to change. The bit manipulation and production will be different, in a new context of different ranges. There will be no final decimal or hexadecimal digit rounding, just sheer truncation, and performing the inverse of the previous arithmetic operation accurately should give back the exact same original result, a property which is almost always there now but may need a little improvement. There is also integer-style division on decimal values, comparisons with positive and negative whole and decimal values, and even positive and negative decimal remainders from the relevant operator. The Java floating point arithmetic operators are +, -, *, /, %, ++ and --.
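Two well-known cases where an operation and its expected decimal inverse disagree, relevant to the round-trip property mentioned above (my own sketch):

```java
public class InverseDemo {
    public static void main(String[] args) {
        // Subtraction: 0.3 - 0.1 lands one last-place unit below the decimal 0.2.
        System.out.println(0.3 - 0.1);         // prints 0.19999999999999998
        // Division: 0.3 / 0.1 lands just below the decimal 3.0.
        System.out.println(0.3 / 0.1);         // prints 2.9999999999999996
        // Truncating that quotient to int therefore yields 2, not 3.
        System.out.println((int) (0.3 / 0.1)); // prints 2
    }
}
```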
I have included the increment and decrement operators. The belief is that systematic change at the key use points called upon by systematic code will solve many problems, and will correct other code by pulling systematic approaches along with it; but the fact will be that, due to fine-grained internal access to and from the bits of data throughout this matter, some changes will not simply wash over, nor be fully compatible from the old way to the new.
C) java.lang.StrictMath has to be repaired. The decimal values that it displays via System.out.println() all seem to be accurate, although I have not checked more deeply, meaning that farther, "deep" range values might also have to be fixed. But the methods create floating point errors too, somehow, and will have to be re-implemented, possibly by an appropriate C library, leveraged more carefully than at present, with multi-platform use still in mind.
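One reproducible illustration of the kind of library-level error being referred to (my own sketch; note StrictMath.sqrt itself is correctly rounded, and the error appears when its result is fed through a further rounded operation):

```java
public class StrictMathDemo {
    public static void main(String[] args) {
        double r = StrictMath.sqrt(2.0);  // correctly rounded square root
        // Squaring the rounded root does not round-trip back to 2.0:
        System.out.println(r * r);        // prints 2.0000000000000004
        // cos(pi/2) is a tiny nonzero value, not 0.0, because pi/2 itself
        // cannot be represented exactly as a double.
        System.out.println(StrictMath.cos(StrictMath.PI / 2));
    }
}
```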
D) What does this mean for compatibility and the Java class libraries? It can mean that bit manipulation and shift operators used on values type-converted from and to floating point types won't be the same for decimal and hexadecimal data; that form of programming, as its Java implementation stands now, will be separate from newer schemes, carrying previous, now "false facts" about range values and about how floating binary relates back to other numeric thresholds. The bit manipulation and shift operators themselves are fine; it's just the way they can apply under the old way of doing floating point that will create distinct problems. These will have to be at least checked, and possibly updated a bit for the new ranges, across the entire standard library set of modules, classes and interfaces. Any 3rd party library classes or interfaces that use bit manipulation and shift operators to affect floating point data in an IEEE-754-presumptive manner, outside Java's own default libraries, will have to be avoided or dealt with in a broader manner, a subject I am researching at the present time. Most of the time, 3rd parties don't do this, or the issue can be sidestepped anyhow.
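An example of the kind of IEEE-754-presumptive bit manipulation described above, which any change of representation would break (my own sketch):

```java
public class BitTricks {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        // The raw pattern only has this value because of the IEEE 754 layout.
        System.out.println(Long.toHexString(bits)); // prints 3fb999999999999a
        // Adding 1 to the raw bits yields the next representable double,
        // a trick that relies on IEEE 754 ordering of positive doubles.
        double next = Double.longBitsToDouble(bits + 1);
        System.out.println(next == Math.nextUp(0.1)); // prints true
    }
}
```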
It means that anything around float and double that is written to file, read from file, communicated or finely processed may have to change, particularly use that presumes the IEEE arrangement very specifically. Functions named for IEEE could be ignored, since type conversions and other use functions will do the updated, requested work; they could be replaced internally, or given companion methods that perform the same general task with consideration for the new state of things. Generic programming will not require change, a phenomenon this Java correction enterprise will rely on, since flexibility is a key advantage in reducing the workload.
E) Things like Serialisation and Remote Method Invocation (RMI) may or may not have to change, depending on how information gets associated and then apprehended for transmission; it depends on how they finally act, and on what they do or don't presume as they read, write or process. In Java, the ranges for float and double are asymmetrical around one, with more provision for digit fractions. By halving the equation, you gain the perfect mapping accuracy of the positive part of the curve in the smaller-values part as well, dispensing with the complexity, gaps and false, spurious mappings that floating point is presently heir to from flawed logic. This removes errors that now arrive with no warning and in no coherent magnitude or direction, within equivalent syntax and with no range-internal replacements being necessary at all.
Is your Java team able to update its JDK and JRE offerings for all platforms, to repair these floating point logic errors, either by default or behind a switch?
I would be thrilled to hear about a positive response!