Closed poorna2152 closed 2 years ago
JavaScript doesn't support 128-bit decimals by default. There is a proposal for this, but it has not been implemented. Possible implementation strategies I can think of:

1. Using the decimal.js JavaScript library. (However, I have yet to confirm that this library follows the IEEE 754 128-bit decimal representation.) The implementation is similar to our current string implementation: a `$bal$decimal` global and a `decimal_construct` function defined in JavaScript, which takes an offset and a length into the memory where the string decimal value is stored and returns a Decimal JavaScript object. This Decimal object is then stored in the global variable.
2. Using the `v128` type in wasm. (The `v128.const` instruction needs its const value to be either 2*i64, 4*i32, 8*i16, ...)
3. Compiling the decNumber C library to wasm. I managed to convert decNumber to wasm, then converted that wasm file to wat. However, the functions in the library seem to deal with memory pointers (offsets into the linear memory).
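For the decimal.js approach, the `decimal_construct` import mentioned above could look roughly like the sketch below. This is a hypothetical illustration: the names, the memory layout, and storing the decoded string instead of a real `Decimal` object are all assumptions; with decimal.js the decoded text would be passed to `new Decimal(text)`.

```javascript
// Hypothetical sketch of decimal_construct: read the decimal literal's
// string form out of linear memory, construct a value from it, and store
// it in a global table, returning the index to the wasm caller.
const memory = new WebAssembly.Memory({ initial: 1 });
const decimalGlobals = []; // stand-in for the $bal$decimal global

function decimal_construct(offset, length) {
  const bytes = new Uint8Array(memory.buffer, offset, length);
  const text = new TextDecoder("utf-8").decode(bytes);
  // A real runtime would do: decimalGlobals.push(new Decimal(text));
  decimalGlobals.push(text);
  return decimalGlobals.length - 1;
}

// Simulate the compiled module having written "3.14" at offset 0.
new Uint8Array(memory.buffer).set(new TextEncoder().encode("3.14"), 0);
const idx = decimal_construct(0, 4);
```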
For example, this is how the `decQuadFromString` function in the C library maps to wat:
extern decQuad * decQuadFromString(decQuad *, const char *, decContext *);
(func $decQuadFromString (export "decQuadFromString") (type $t3) (param $p0 i32) (param $p1 i32) (param $p2 i32) (result i32)
Attaching the smallest function I could find below:
(func $decQuadDivide (export "decQuadDivide") (type $t6) (param $p0 i32) (param $p1 i32) (param $p2 i32) (param $p3 i32) (result i32)
(local $l4 i32) (local $l5 i32) (local $l6 i32) (local $l7 i32) (local $l8 i32) (local $l9 i32) (local $l10 i32) (local $l11 i32) (local $l12 i32) (local $l13 i32) (local $l14 i32)
(local.set $l4
(global.get $g0))
(local.set $l5
(i32.const 16))
(local.set $l6
(i32.sub
(local.get $l4)
(local.get $l5)))
(global.set $g0
(local.get $l6))
(i32.store offset=12
(local.get $l6)
(local.get $p0))
(i32.store offset=8
(local.get $l6)
(local.get $p1))
(i32.store offset=4
(local.get $l6)
(local.get $p2))
(i32.store
(local.get $l6)
(local.get $p3))
(local.set $l7
(i32.load offset=12
(local.get $l6)))
(local.set $l8
(i32.load offset=8
(local.get $l6)))
(local.set $l9
(i32.load offset=4
(local.get $l6)))
(local.set $l10
(i32.load
(local.get $l6)))
(local.set $l11
(i32.const -2147483648))
(local.set $l12
(call $f627
(local.get $l7)
(local.get $l8)
(local.get $l9)
(local.get $l10)
(local.get $l11)))
(local.set $l13
(i32.const 16))
(local.set $l14
(i32.add
(local.get $l6)
(local.get $l13)))
(global.set $g0
(local.get $l14))
(return
(local.get $l12)))
Does pointing to the stack work? i.e.,
#include <stdint.h>
__uint128_t decQuadDivideWrapper(__uint128_t a, __uint128_t b) {
    return *(__uint128_t *)decQuadDivide(&a, &b);
}
Asking this because I'm not very experienced with C.
So `decQuad` is defined like this:
typedef union {
uint8_t bytes[DECQUAD_Bytes]; /* fields: 1, 5, 12, 110 bits */
uint16_t shorts[DECQUAD_Bytes/2];
uint32_t words[DECQUAD_Bytes/4];
#if DECUSE64
uint64_t longs[DECQUAD_Bytes/8];
#endif
} decQuad;
And the signature of decQuadDivide is like this:
extern decQuad* decQuadDivide(decQuad *, const decQuad *, const decQuad *, decContext *)
Would the above work for these definitions?
Yes. Since the above struct is 128 bits long, decQuadDivide will see no difference.
But there is a different issue: when I compile the above wrapper to wasm, it doesn't get compiled to `v128`; maybe they only use `v128` for SIMD. If that is the case we'll have to use two `i64`s. Can you see how to use `v128` in plain wasm, and whether it's slower than two `i64`s?
Emscripten has a `<wasm_simd128.h>` header. This introduces a type `v128_t` and functions to deal with that type. Using that, I wrote the following code. There is an example in their repo.
#include <wasm_simd128.h>
#include <stdint.h>

typedef union decQuadUnion decQuad; /* opaque here; defined in decNumber */
extern decQuad *decQuadDivide(const decQuad *, const decQuad *);

v128_t decQuadDivideWrapper(v128_t a, v128_t b) {
    __uint128_t a128;
    __uint128_t b128;
    wasm_v128_store(&a128, a);
    wasm_v128_store(&b128, b);
    return wasm_v128_load(decQuadDivide((const decQuad *)&a128,
                                        (const decQuad *)&b128));
}
This resulted in the following function signature:
(func $decQuadDivideWrapper (export "decQuadDivideWrapper") (type $t110) (param $p0 v128) (param $p1 v128) (result v128)
So we can use this, right? I'm not sure whether I wrote the code correctly.
Yeah, maybe it's even possible to get rid of `wasm_v128_store`:
v128_t decQuadDivideWrapper(v128_t a, v128_t b) {
    decQuad *x = decQuadDivide((const decQuad *)&a, (const decQuad *)&b);
    return *((v128_t *)x);
}
Again, what I am not sure of is whether using v128 will slow things down or restrict us due to SIMD (at runtime). Shall we check that first? If that is the case, we'll have to use two i64s (maybe as i64[2]).
I am not entirely sure how I should test it or what I should look for. However, I compared a loop which initializes a v128 const 1,000,000 times and retrieves the values in the v128 as two i64s against a loop which initializes two i64s and retrieves them. The performance seems to be almost the same.
That sounds good. I think it's safe to proceed with v128 in that case.
I tried to get the `decQuadToString` function working, which prints out the matching string for a given `decQuad`.
The signature of the wrapper function for this is,
void _bal_print_decimal(v128_t a)
What I noticed when doing that was,
JavaScript cannot handle v128 values in wasm; it gives the error TypeError: type incompatibility when transforming from/to JS. Since we are calling the decNumber wrapper functions through JavaScript, we cannot use v128s as parameters; we have to pass two i64s instead.
So now I am storing decimals as v128s, and when I want to call a function in decNumber I convert the decimal into two i64s.
Example wat file
(module
(import "decimal" "print" (func $_bal_decimal_print (param i64) (param i64)))
(export "main" (func $main))
(func $main
(local $0 v128)
(local.set $0
(v128.const i64x2 1362236 2451858153382346752))
(call $_bal_decimal_print
(i64x2.extract_lane 0
(local.get $0))
(i64x2.extract_lane 1
(local.get $0)))))
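The two-i64 lowering at the JS boundary can be sketched in plain JavaScript: wasm i64 parameters surface in JS as BigInt, so a 128-bit decimal becomes a pair of BigInts. The function names below are illustrative, not the actual runtime code; lane 0 is treated as the low half, matching the `i64x2` lane order in the wat above.

```javascript
// Split a 128-bit value (held as a BigInt) into the two i64 lanes that
// cross the JS boundary, and join them back into one 128-bit value.
function splitI64x2(value128) {
  const mask = (1n << 64n) - 1n;
  const lo = BigInt.asIntN(64, value128 & mask); // lane 0
  const hi = BigInt.asIntN(64, value128 >> 64n); // lane 1
  return [lo, hi];
}

function joinI64x2(lo, hi) {
  return (BigInt.asUintN(64, hi) << 64n) | BigInt.asUintN(64, lo);
}
```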
Is this because the `print` function is written in JS? Didn't we talk about converting it to wat?
No, I have to first instantiate the decNumber wasm module and use its exports as imports to my main wasm module. For the exports of one module to be provided as imports to another, JavaScript acts as the glue code.
Javascript code
// import object for the main module
const decimalImport = {
  print: (arg1, arg2) => {
    // call the print_decimal wrapper function in decNumber
    decNumber.print_decimal(arg1, arg2);
  }
};
let importObject = {
  decimal: decimalImport
};
WebAssembly.instantiate(mainModule, importObject); // provide the import object when instantiating
Here, if we provide a v128 to the print function in the decimalImport object, that is going to cause the TypeError: type incompatibility when transforming from/to JS. This also doesn't allow a v128 to be returned from a function. So, for example, decimal_add cannot return a v128; we have to access the memory of the decNumber wasm module and retrieve the result when a decimal_add function is called. I'm not sure how to implement this.
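The memory-based workaround could be sketched like this. It is a hypothetical illustration: the convention that the decNumber module leaves its 16-byte result at a known pointer, and the names used here, are assumptions rather than the actual implementation.

```javascript
// Since a v128 result cannot cross into JS, the decNumber module would
// leave its 16-byte result in its own linear memory; the JS glue then
// reads it back as two i64 halves to pass on to the main module.
function readDecimalResult(memory, resultPtr) {
  const view = new DataView(memory.buffer, resultPtr, 16);
  const lo = view.getBigUint64(0, true); // bytes 0-7, little-endian
  const hi = view.getBigUint64(8, true); // bytes 8-15, little-endian
  return [lo, hi];
}
```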
I thought we'd be able to pass WebAssembly.Module.exports to the next module. Looks like we can't do that without using WebAssembly interface-types, and even then I don't see a 128-bit type in the interface-types spec.

opt1) Combine decNumber.wat and the output wat textually (when the program uses decimals).
opt2) Always lower decimal to two i64s.
opt3) Go via two i64s when crossing JS (what you are proposing above).

opt1 is ugly, but it seems like the most reasonable thing for now. What do you think?
Binaryen cannot compile a wat file output by emscripten (it gave an error about exporting some functions), so I don't think we can use option one. Also, the decNumber library wat file has 547327 lines, which is a whole lot compared to our usual wat output of at most 2000 lines.

I am tempted to go with option 3. (Updated the comment.)
Problems with decimal.js:

Comparing the `s`, `d` and `e` fields of two numbers (e.g. to check exact equality) doesn't seem to work. eg:

let x = new Decimal("0.00000100") // d = 10, e = -6, s = 1
let y = new Decimal("0.000001") // d = 10, e = -6, s = 1

Even though the above two numbers are different precision-wise, they both give the same `s`, `d`, `e`.
It is not possible to recover the representation that the Decimal was created with. For example:

let y = new Decimal("11e11")
y.toString() // 1100000000000

There is a global parameter which can be set, called toExpPos, which switches to exponential notation if the number of digits is more than the set value. But I don't think this is what we want.
For the problem of not being able to recover the representation that the Decimal was created with:

let y = new Decimal("11e11")
y.toString() // 1100000000000

I thought of handling this by modifying the decimal.js library: create a variable inside decimal.js which holds the string value the Decimal was initialized with, then have toString() use that stored initial value. This could be used when doing exact equality on decimal consts, but not on decimals produced as the result of decimal operations.
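The proposed modification can be sketched as a wrapper instead of a patch to decimal.js itself. This is a stand-in sketch: `DecimalWithSource` is a hypothetical name, and `Number` substitutes for a real `Decimal` so the example is self-contained.

```javascript
// Wrap construction so each value remembers the exact source string it
// was created from; toString() then returns that original spelling.
class DecimalWithSource {
  constructor(s) {
    this.sourceString = s;  // e.g. "11e11" or "0.00000100", kept verbatim
    this.value = Number(s); // stand-in for: new Decimal(s)
  }
  toString() {
    return this.sourceString;
  }
}
```

As noted above, this only helps for decimal constants; a value produced by an arithmetic operation has no original source string to preserve.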
1) This is a big issue. That means decimal.js doesn't care about precision! Can that be?
2) I don't think the goal is to preserve the original format exactly; I think the only goal is to preserve the original precision. So if I use a negative exponent with `e`, the resulting toString should either have the correct number of zeros (including trailing zeros) OR use the original `e` exponent. Both are OK. What is not OK is to reduce the number of zeros or give a different negative `e` exponent. Please verify my claims against the spec.
I have set the precision to 34. It seems to be ignoring trailing zeros. My plan is to add a property to the decimal.js Decimal to store information about the precision of a number (the number of digits after the decimal point).
That is a significant undertaking, given that you would have to update it with each operation. I am not sure decimal.js is worth it if we have to do that ourselves.
A couple of questions on using decimal.js. (It requires an npm install when setting up.)

https://github.com/poorna2152/nballerina/blob/wback_subset12/wrun/decimal.js