Patashu opened this issue 5 years ago
That's nice, and it would probably work similarly for larger libraries I'll make. However, I feel like the gap between (10^)^9e15 10 and 10^^ee10 is too large. Yes, the numbers are huge, and incremental games will speed up, but I doubt a game jumps from 9e15 to ee10 instantly, or even from 9e15 to 10^^10.
However, at around 10{1000}x there would probably be only a few numbers, since nobody cares; at around 10{100000}x, I think it is safe to assume that nobody cares about x other than the repetition of {100000}; and around 10{9e15}x it is just impossible.
After some thinking about the problem, I see a more specific reason why we can't just dump all the information beyond the biggest two array values.
Let's see what happens in OmegaNum as we approach the third array value:
a = OmegaNum.tetrate(10, 1e15)
OmegaNum {constructor: ƒ, array: Array(2), sign: 1}
a.array
(2) [10000000000, 999999999999998]
OmegaNum.tetrate(10, 1e15).toString()
"(10^)^999999999999998 10000000000"
becomes
a = OmegaNum.tetrate(10, 1e16)
OmegaNum {constructor: ƒ, array: Array(3), sign: 1}
a.array
(3) [16, 1, 1]
OmegaNum.tetrate(10, 1e16).toString()
"10^^1e16"
In break_eternity.js, the same values are
Decimal.tetrate(10, 9e15)
Decimal {sign: 1, layer: 8999999999999998, mag: 10000000000}
Decimal.tetrate(10, 1e16)
Decimal {sign: 1, layer: 10000000000000000, mag: 10000000000}
But let's say that we wanted to express them in terms of just the biggest two numbers in the array - that is, instead of writing 10^^10^16 as (1, 1, 16), we'd write it as (1, 10^16). But we want the restriction that every number stays inside SAFE_INTEGER (if we just raise the limit, to say 1e308, we only push the problem back a small amount). However, obviously we can only increase the 1 to 2 - if we could move it to, say, 1.1, then we'd be required to calculate continuous-height pentation, hexation and so on for every hyperoperator, which is obviously absurd. We can figure out by trial and error in break_eternity.js that we need the values (2, 2.0806697636736377):
Decimal.slog(1e16)
Decimal {sign: 1, layer: 0, mag: 2.0806697636736375}
Decimal.tetrate(10, Decimal.slog(1e16))
Decimal {sign: 1, layer: 9999999999999836, mag: 10000000000}
Decimal.tetrate(10, Decimal.pow(10, 16))
Decimal {sign: 1, layer: 10000000000000000, mag: 10000000000}
But wait - we've only pushed the problem back one hyperlayer. This approximation only works because break_eternity.js defines tetrate over the reals using the linear approximation. There is no equivalent approximation for the hyperoperators beyond this.
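To make the linear approximation concrete, here's a minimal, self-contained sketch of base-10 slog and real-height tetration using that same piecewise-linear definition. This is illustrative code, not break_eternity.js's actual implementation:

```javascript
// Base-10 slog and tetration via the linear approximation:
// tet(h) = h + 1 for h in [-1, 0], and tet(h) = 10^tet(h - 1) above that.
// Illustrative sketch only - not break_eternity.js's internal code.
function slog10(x) {
  let height = 0;
  while (x > 1) {          // peel off one log10 per layer
    x = Math.log10(x);
    height += 1;
  }
  return height + (x - 1); // linear piece on [0, 1]: slog(x) = x - 1
}

function tetrate10(h) {
  const n = Math.floor(h);
  let x = h - n;                            // x = tet(h - n - 1), on the linear piece
  for (let i = 0; i <= n; i++) x = Math.pow(10, x); // apply 10^ exactly n + 1 times
  return x;
}

console.log(slog10(1e16));            // ~2.0806697636736377, matching Decimal.slog(1e16)
console.log(tetrate10(slog10(1e16))); // ~1e16, up to floating-point error
```

This reproduces the (2, 2.0806697...) pair found by trial and error above: slog10(1e16) lands just past 2, and tetrating back to that height recovers 1e16.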
Conclusion: We've proven that we can't just drop smaller array numbers without the ability to construct and approximate arbitrary hyperoperators over the reals, or without losing a huge amount of precision. At least not down to as few as two array numbers - maybe it becomes OK later? I don't know, I lack the intuition to easily say.
(Incidentally, this is the first time I've used slog in a serious calculation. Cool!)
We can only drop numbers without losing precision once we can't even hold the number of arrows without losing precision - which happens around 9e15. At that point, since 10{MSI}10 and 10{MSI+1}10 look equal to the program, it also can't hold a number like 10{MSI}10{MSI}10 correctly, since the entire range between the first two numbers is seen as equal.
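For reference, the exact collision point in an IEEE double (MSI here means Number.MAX_SAFE_INTEGER, about 9.007e15, slightly above the round 9e15 figure used in this thread):

```javascript
// Past Number.MAX_SAFE_INTEGER (~9e15), consecutive integers stop being
// distinguishable in a double - so arrow counts like MSI and MSI+1 collide.
const MSI = Number.MAX_SAFE_INTEGER; // 9007199254740991
console.log(MSI + 1 === MSI + 2);    // true: 10{MSI+1}10 is indistinguishable from 10{MSI+2}10
console.log(9e15 + 1 === 9e15);      // false: 9e15 itself is still below the cutoff
```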
@Patashu I'm just going to make my own library based on this. I would fork this library and do it, but working with others' code is horrible, so it will probably take less time this way ;) I'll be using a lot of ideas from this and break_eternity to speed things up, 'cause I don't want to figure out how to do each function. I'm only doing this now because my idle game might catch up to this library (I've been working on it for almost a month now and it's gone past incremental unlimited) and I've got crazy plans for the future ;)
@Reinhardt-C Good luck! Definitely interested in seeing more numerical libraries around.
@Patashu Making the object from strings is annoying if you want to make it flexible... ugh.
@Patashu Forgot to mention something: in order to (partially) avoid the lack of precision, I'm not only storing two variables (higher and lower) but four, in an array, as well as the increment. I'm sure that's enough precision for any incremental ;p
Once you're around 'layer 9e15' (in break_eternity.js terms: 10^ applied 9e15 times, then mag), the only meaningful operators (because they increase/decrease layer by more than 1 at a time) are hyper-4 (tetrate, slog, sroot, iterated_log) and stronger operators (pentate, hexate, hyper_n, etc). Notably, tetrating a number this large basically just increases layer by whatever height you're iterating to - so at this point the actual value of mag is meaningless (it doesn't carry enough information to determine where between the layers you are).
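As a toy model of that claim (a hypothetical sketch in break_eternity.js's (layer, mag) terms, not the library's code): stacking more exponentials on an already-enormous number just adds to layer, leaving mag untouched.

```javascript
// Toy model: a number at layer ~9e15, meaning (10^)^layer mag.
// Iterating 10^x `height` more times on top just adds `height` layers;
// mag carries no usable information at this scale. Hypothetical sketch.
function iteratedExpToy(num, height) {
  return { layer: num.layer + height, mag: num.mag };
}

const a = { layer: 9e15, mag: 1e10 };
console.log(iteratedExpToy(a, 3)); // { layer: 9000000000000003, mag: 10000000000 }
```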
At that point, it would make sense to ignore mag and split layer into layer_2 and layer_1, and compute all numbers as 10^^10^^10^^ ... (layer_2 times) 10^10^10^10^ ... (layer_1 times) 1 (or 10).
Then when you hit 9e15 in layer_2, by a similar rationale, you could ignore layer_1 and split layer_2 into layer_3 and layer_2.
Or in other words, store the following numbers:
sign, layer_increment, higher_layer, lower_layer
When layer_increment is 0, it reduces to the break_eternity.js case: sign * 10^10^10^ (higher_layer times) lower_layer.
When layer_increment is 1, it's 10^^10^^10^^ (higher_layer times) 10^10^10^10^ (lower_layer times) 1 (or 10).
When layer_increment is 2, it's the same but with an extra ^ on both sides... And so on up until layer_increment 9e15. This would get around your 'I'm restricted by how big of an array I'm willing to lug about' technical constraint and seems like a logical step to take.
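A hypothetical sketch of what that four-number representation could look like (the class name and the toString convention are my own inventions, not an existing library's API):

```javascript
// Hypothetical sketch of the sign / layer_increment / higher_layer /
// lower_layer representation described above. Not an existing library.
class LayeredNum {
  constructor(sign, layerIncrement, higherLayer, lowerLayer) {
    this.sign = sign;                     // 1 or -1
    this.layerIncrement = layerIncrement; // 0 = plain break_eternity.js case
    this.higherLayer = higherLayer;       // repetitions of the stronger operator
    this.lowerLayer = lowerLayer;         // repetitions of the weaker operator
  }
  toString() {
    const sgn = this.sign < 0 ? '-' : '';
    if (this.layerIncrement === 0) {
      // break_eternity.js case: sign * (10^)^higher_layer lower_layer
      return `${sgn}(10^)^${this.higherLayer} ${this.lowerLayer}`;
    }
    const hi = '^'.repeat(this.layerIncrement + 1); // '^^' at increment 1
    const lo = '^'.repeat(this.layerIncrement);     // '^'  at increment 1
    return `${sgn}(10${hi})^${this.higherLayer} (10${lo})^${this.lowerLayer} 1`;
  }
}

console.log(new LayeredNum(1, 0, 3, 5).toString()); // (10^)^3 5
console.log(new LayeredNum(1, 1, 2, 4).toString()); // (10^^)^2 (10^)^4 1
```

Incrementing layer_increment shifts both operator stacks up one hyperlayer, which is exactly the "same but with an extra ^ on both sides" step described above.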
(TODO: And then you split layer_increment into layer_layer_increment, higher_layer_increment, lower_layer_increment...? When does the madness end? When you beg for it to stop!)
(Also, as a side note - the reason why I deleted break_break_infinity.js is both to reduce code bloat and because it's redundant. break_infinity.js handles < 1e9e15 cases much better, and break_eternity.js handles everything higher. I've come to realize that at the boundaries of a numerical library, either you don't care about the precision loss (and let's face it, you don't, it's an incremental game and you'll be skipping up exponents then layers then layers layers in the thousands at a time by this point) or you need a stronger numerical library, not a hack job that only gets you slightly further. There's no break_break_eternity.js because if you're at 9e15 layers, you're going up multiple layers at a time to get there and don't care about the precision loss, OR you don't get there at all and want the efficiency more than the precision.)