Closed: determin1st closed this issue 6 years ago
no, there could be side effects.
yes, there will be no side effects.
look, it's a little bit faster: https://jsperf.com/livescript-array-star-access
> `.array` may be a getter on `object`. Your version would trigger these side effects twice.
If somebody made an `array` getter on that object and then uses it with `[*]`, he should suffer and stop doing that as fast as possible, do you agree? What's the point of exposing (or using) such a badly designed API?
Let the object be a universal accessor, a proxy object for example. So, if you read the API reference and you know that it returns an array, then you will create the reference yourself, right? Like `array = object[arrayKey]`.
There is no way to know what kind of object it is inside the LiveScript parser, so why imagine "side effects"?
Data structures are "lower"/simpler/more natural than objects with an API.
also, this:

```livescript
(array = object[arrayKey])[*] = 1
```

compiles to:

```js
(ref$ = array = object[arrayKey])[ref$.length] = 1;
```

and could be:

```js
(array = object[arrayKey])[array.length] = 1;
```

see, reeeeal optimization.
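For a plain array value, the two compiled forms are observably equivalent; here is a minimal JavaScript sketch of both (the names `object`, `arrayKey`, and `items` are hypothetical placeholders):

```javascript
// Both compiled forms append 1 to the array; they differ only in
// whether a temporary ref$ variable is introduced.
const object = { items: [10, 20, 30] };
const arrayKey = "items";

// Form 1: with the ref$ temporary (what the compiler emits).
let ref$, array1;
(ref$ = array1 = object[arrayKey])[ref$.length] = 1;

// Form 2: reusing the explicitly declared variable (the suggested optimization).
const object2 = { items: [10, 20, 30] };
let array2;
(array2 = object2[arrayKey])[array2.length] = 1;

console.log(array1); // → [10, 20, 30, 1]
console.log(array2); // → [10, 20, 30, 1]
```

The difference only becomes observable when reading `object[arrayKey]` has side effects, which is what the rest of the thread is about.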
Side effects may include headache, dry mouth, and mutations.
Some people have been arguing for considering the possibility of side effects in even more cases, so clearly this is a case of not being able to please everyone (without macros/code rewriting or more compiler options, the latter of which I don't think is worth it here). The guiding principle I've been using is that non-atomic expressions that appear exactly once in the LiveScript source will appear exactly once in the JavaScript output, and `object.array` is not an atom. Something like Google's Closure Compiler, or a different post-LiveScript minifier/optimizer, might be better suited for such optimizations as inferring when it's safe and more efficient to replace a temporary variable with a repeated property access, if you really need (and your real-world application allows you to get) that extra 2% of speed.

Now, your latest suggestion (reusing an explicitly declared `array` variable instead of a new `ref$` variable) is something that LiveScript 1.6 already does—try it here. (1.6 is still in prerelease, which is why it isn't on livescript.net yet—hopefully it will be released to npm soon... we're just waiting on gkz to review and pull the trigger, at which point the website update will happen speedily, I promise.)
Okay, captain locked up in the cabin and don't respond. You have neutral position, but, imo, able to sail new ship - I don't like any strictypish-compilers from corporate minds.. LiveScript is superior.
> `object.array` is not an atom

Yes, it's not an atom without context. You may have missed that:
```livescript
a = {
  "0": 0
  "1": 1
  count: 2
}
b = new WeakMap!
c = "string"

a[*] = 1
b[*] = 1
c[*] = 1
```
will not work and should not be considered "atoms", right? So `[*]` is what makes it an "atom": an array-like object as the access result. Let's assume that:
or
It's kind of assuming "people smart" vs "people dumb". They will eventually become "smart" if they manage to do it right, but only if there are no language limitations to that. So, if it is used the smart way, it will always be faster in any compiler: (variable created + 2 reads + assignment + 1 read) vs (2 reads + 2 reads). It is not an essential issue, so I'm not affected, but glad that 1.6 will fix `(array = object[arrayKey])[*] = 1`.
I'm afraid you lost me in the middle there; I think we're having some language difficulties. The way you're using the word ‘atom’ makes me think of atomic transactions, as in databases, where a set of actions is taken in an indivisible way. When I said `object.array` is not an atom, I meant that as an expression, it is divisible into parts: `object`, `.`, and `array`. It's atomic expressions—expressions like `"some string"`, `some-variable`, or `true` that are treated by the language as indivisible units—that LiveScript is allowed to duplicate in the compiled code.
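Concretely, the atomic/non-atomic distinction can be sketched in plain JavaScript (a sketch; the variable names here are hypothetical):

```javascript
// Atomic subject: a bare variable can be repeated safely, so a
// LiveScript expression like `x[*] = 1` can compile to a direct
// double read of x with no temporary:
const x = [1, 2, 3];
x[x.length] = 1;                 // x is read twice; no getter can run

// Non-atomic subject: `object.array[*] = 1` gets a ref$ temporary,
// so the .array property is only read once:
const object = { array: [1, 2, 3] };
let ref$;
(ref$ = object.array)[ref$.length] = 1;

console.log(x);            // → [1, 2, 3, 1]
console.log(object.array); // → [1, 2, 3, 1]
```

Reading a local variable twice can never trigger user code, which is why the language treats it as safe to duplicate; a property read can.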
Of course none of the `a[*] = 1`, etc. assignments in your comment will work, but that's because `a` and `b` don't have a `length` property to read, and `c` is not an object and therefore can't have properties set on it; and neither of those reasons has anything to do with the atomicity (as expressions) of `a`, `b`, and `c`, or the ‘atomic transaction’-like nature of `[*]`. So I don't understand what point you're trying to make.
Would you like to try rephrasing your argument? Or, since you say this isn't an essential issue, can we close this?
(Also, I don't agree with your claim that repeating `object.array` will always be faster in any JavaScript environment—a more accurate accounting would be variable created + 2×variable read + 2×property read + variable assignment versus 2×variable read + 3×property read, so it comes down to whether one additional property read is more expensive than a variable creation and assignment. I'm not an expert on JS engine internals, but in compiled C code, for instance, variable creation is free, and a local assignment is either a write to stack memory or, in the best case, a register write. Property access, in the best case, is a pointer dereference, but it could also involve a hash table lookup, walking up a prototype chain, or invoking a getter. So I would expect that both different JavaScript JIT compilation implementations and different types and locations in memory of objects would change which approach is faster.)
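For what it's worth, the two forms can be compared empirically with a rough micro-benchmark sketch in plain JavaScript (the function and variable names are hypothetical). Which form wins depends on the engine and the object's shape, which is exactly the point, so no expected timings are given:

```javascript
// Rough micro-benchmark: temporary variable vs repeated property access.
// Treat any numbers as illustrative, not authoritative.
function withTemp(object) {
  let ref$;
  (ref$ = object.array)[ref$.length] = 1;   // one property read
}

function withRepeat(object) {
  object.array[object.array.length] = 1;    // two property reads
}

function time(label, fn) {
  const object = { array: [] };
  const start = Date.now();
  for (let i = 0; i < 1e6; i++) fn(object);
  console.log(label, Date.now() - start, "ms");
}

time("temp variable:   ", withTemp);
time("repeated access: ", withRepeat);
```

Both functions produce identical results on a plain array; only the timing (and the getter-invocation count, if any) differs.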
I will start from the end of your comment. There can be no discussion about the speed: it's already faster in both browsers (as the test shows). Yes, there may be some "out of nowhere" JS engines/interpreters that won't follow this simple logic, but it's futile to project further. The speed benefit is negligible; that's why I don't hold to it. But as a language maker, you may do 10 more "fixes" like this, and they will carry more weight then.
Close, no problem.
The point is errors. Let's imagine that the access routine from object to array exists and may not return an array. It will not work: `ref$` may become some "string", some "null", or anything else. Is this considered a "side effect"? Even more, it may return a "weird" object that produces no error on assignment but blows up on the `.length` access. If you accept that, why not extrapolate it to the reasonable end? Both variants (with `ref$` and without `ref$`) can fail, and both can produce "side effects". The difference is in probability, right? So, as you don't agree about the speed, I don't agree about the security (or clearness, or safety, or whatever it is called) benefit; it is also negligible to me.
Throwing errors is not usually considered a ‘side effect’, no. Side effect here refers to some action that an `array` getter on `object` could take that would be different if it happened twice instead of once. For example, try running this:
```livescript
log = []
object =
  secret-array-field: [1 to 5]
  array:~ -> log.push "fetching array"; @secret-array-field

object.array[*] = 1
log #=> ["fetching array"]
```
The array getter has a side effect, which should only be triggered once because `object.array` only appears once in the code. If we instead write:

```livescript
object.array[object.array.length] = 1
log #=> ["fetching array","fetching array"]
```

the side effect would be triggered twice. If `object.array[*]` got rewritten to `object.array[object.array.length]` instead of `(ref$ = object.array)[ref$.length]`, that would transform an expression that triggers the side effect once into an expression that triggers the side effect twice. That's why using the `ref$` variable is more correct in any case where a non-atomic LiveScript expression would be used more than once in the compiled JavaScript.
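The same experiment can be run in plain JavaScript, replacing LiveScript's `~` getter with an ES5 `get` accessor (a minimal sketch mirroring the example above; the names are hypothetical):

```javascript
// A getter with an observable side effect: it logs every fetch.
const log = [];
const object = {
  secretArrayField: [1, 2, 3, 4, 5],
  get array() {
    log.push("fetching array");
    return this.secretArrayField;
  },
};

// What LiveScript emits for object.array[*] = 1: one property read.
let ref$;
(ref$ = object.array)[ref$.length] = 1;
console.log(log.length); // → 1

// The suggested rewrite: two property reads, so the getter fires twice.
object.array[object.array.length] = 2;
console.log(log.length); // → 3  (one fetch above + two fetches here)
```

Note that JavaScript evaluates the base `object.array` and the index expression `object.array.length` separately, which is why the getter runs twice in the second form.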
Okay, I understand. No side effects. Let the functional approach win here.
I don't quite... how does this `~` thing work.. it makes a getter.. hmm..
```livescript
object =
  secret-array-field: [1 to 5]
  array:~ -> log.push "fetching array"; @secret-array-field

object.array[*] = 1
```
Still, I don't agree that it is a correct/appropriate use case for the `[*]` operator.. the programmer should know what the object is and do this instead:

```livescript
(a = object.array)[*] = 1
```
Yeah, better close the issue, the comments are running in a circle)
(The `~` feature is documented way at the bottom of the OOP section of the docs, FYI. It's easy to miss.)
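For anyone else puzzled by `~`: a LiveScript property written `array:~ -> ...` behaves like a JavaScript accessor property, i.e. a function that runs on every read. A minimal sketch of the equivalent (the property names here are hypothetical):

```javascript
// JavaScript analogue of LiveScript's `array:~ -> ...` getter syntax:
// the function body runs every time the property is read.
const object = {
  secretArrayField: [1, 2, 3, 4, 5],
  get array() {
    return this.secretArrayField;
  },
};

console.log(object.array.length); // → 5

// Object.defineProperty achieves the same thing dynamically:
Object.defineProperty(object, "arrayToo", {
  get() { return this.secretArrayField; },
});
console.log(object.arrayToo === object.array); // → true
```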
Okay, closing the issue then. Thanks for your input; even when nothing changes, it's good to know what features people find controversial.
for example:
compiles to:
maybe optimize it to:
but, if there is more than one `.` dot, compile it with a reference. What do you think?