SimonDanisch closed this issue 1 year ago
Getting to this issue quite late! I should probably remove this indeed---and not because I agree with you! I do, of course, agree with you that Julia is JIT compiled, but I don't think this precludes it from being an interpreted language.
Most interpreted languages have compilers, and very many of them have JIT compilers.
Python, which is practically synonymous with "interpreted language" these days, compiles to bytecode at runtime---though indeed, large portions of the bytecode will be cached and loaded statically without recompiling. Of course, that bytecode is interpreted, but the language being interpreted at that point is not Python. Python also has at least two JIT implementations (PyPy and Numba, and actually more now, I think) as well as numerous statically compiled implementations (Cython, Nuitka, mypyc, Codon, Mojo, etc.)
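To make the bytecode point concrete, here's a minimal stdlib-only sketch (the source string is just an example of mine) showing that CPython compiles source to a code object before the interpreter runs anything:

```python
import dis

# CPython first compiles this source string into a code object;
# only then does the bytecode interpreter execute it.
code = compile("x = 2 + 3", "<example>", "exec")

# The compiled artifact is a code object whose co_code attribute
# holds raw bytecode, not source text.
print(type(code).__name__)              # code
print(isinstance(code.co_code, bytes))  # True

# dis renders the bytecode the interpreter will actually run.
# For modules, this same bytecode is what gets cached on disk in
# __pycache__/*.pyc, which is why unchanged files aren't recompiled
# on the next import.
dis.dis(code)
```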
This is to say nothing of all the other "interpreted" programming languages that offer both JIT and statically compiled implementations: Common Lisp, Scheme, Racket, JavaScript, Ruby, Raku, etc. In fact, the only languages I can think of that perform no compilation (in which I'm including bytecode compilation) are POSIX shells and some implementations of AWK.
When I say "interpreted language", I do not mean to imply that no compilation takes place. If I meant that, I would be excluding almost all languages that are popularly referred to with the term.
What I mean is that Julia encourages the same kind of interactive workflow and development cycle as other interpreted languages. There are indeed interactive interpreters for C, but I think most would agree that C is not exactly friendly to an interactive workflow, and certainly that very few people are using C in this way. By contrast, interactive computing is one of the most popular use cases for Julia.
I guess what I'm saying is that drawing a technical distinction between interpreted and compiled languages is difficult these days. We can certainly observe that the JIT compiler for Julia is more effective than PyPy or JavaScript JITs because it works differently and the semantics are more friendly to static binding, but that doesn't make it "more compiled", per se. However, it is not so difficult to draw a distinction between the workflows and culture around interpreted and compiled languages, and Julia very much falls in line with the former. Operationally, Java is similar to other JIT compiled languages like JavaScript (except in that, like Julia, it is more friendly to static binding), but the separate compilation step and a stricter, more explicit type system mean it has built up a culture more similar to statically compiled languages.
However, the fact that we are having this conversation at all indicates that this is a matter of some contention. There is no reason for this tutorial to concern itself with the existential debate over the meaning of "interpreted" in an age of increasingly complex and subtle language implementation strategies, so I suppose it is better to change the sentence.
If Julia doesn't have an interpreter, which is quite a well-defined concept, it's not an interpreted language; it's pretty simple like that, no need to overcomplicate things. If you want to emphasize that it still has the same interactive workflow as Python, that's what the category of dynamic languages is for ;) To be fair, Julia does have an interpreter for global code execution, but it is much closer to your example of interpreting C. Not sure if you followed that development, but there has been a lot of effort to write a fast interpreter for Julia, which turned out to be nearly impossible currently, for the simple fact that Julia is not at all well designed to be interpreted... So I find this pretty misleading, since an interpreter comes with some well-known and impossible-to-avoid trade-offs (no compile times vs. no high performance), which Julia simply doesn't have... And it's really annoying that so many people think Julia is interpreted, because it creates the wrong image of these trade-offs for Julia, and I've spent quite a lot of time debunking those prejudices... (not whether Julia is interpreted or not, but people not knowing Julia having really strong opinions about it, since they read somewhere that it's interpreted)
Anyways, you're right that it's not the easiest, introduction-friendly topic without diving into all the subtleties... But writing "Julia IS an interpreted language" in a tutorial where people are just getting to know the language feels pretty misleading, and I've often been at the receiving end of those who have been misled ;)
If Julia doesn't have an interpreter, which is quite a well-defined concept, it's not an interpreted language; it's pretty simple like that, no need to overcomplicate things.
Julia has an interpreter. Not all portions of code are compiled. There is an interpreter working every time the Julia binary executes. See no. 10 and 11.
Julia is interpreted. No need to over-complicate it, right? In fact, Julia's execution model is complicated, and it involves both compilation and interpretation working simultaneously.
If 99% of the code isn't interpreted in normal usage, and therefore the language doesn't share most of the traits of an interpreted language, I find it really hard to call the language an interpreted language.
So we come back, once again, to the fact that this is something of a cultural distinction rather than a technical one. :rofl:
I like the term "interpreted" for Julia because of what it tells us about the usage and development workflow ("traits" which I find to be quite a significant advantage), and you dislike it because there are negative connotations about performance---connotations which Julia subverts. Julia's performance characteristics are, of course, also a very important trait.
that's what the category of dynamic languages is for ;)
"dynamic" normally refers to the typing discipline, not the the execution model. Historically, there have been statically typed interpreted languages, like the original implementation of CAML. Even today, OCaml and Haskell software is often developed using an interactive interpreter, though the final software artifact will normally be statically compiled to native code. Miranda, the predecessor of Haskell, was also statically typed and interpreted.
I must admit I struggle to think of any dynamically typed language that doesn't have an interpreter, though some also have statically compiled implementations (Nuitka, for example---though Julia also has some support for static compilation).
In any case, I'm changing it!
The main point is that Julia doesn't need the interpreter. If you removed it, Julia would still be the same language with the same traits that you like... It would just be a bit harder to implement the bootstrapping. And interpreted means, by definition, that the language won't ever be as fast as Julia, because it does extra steps between running the program and executing it on the CPU (the interpretation). So, in other words, every interpreted language that will ever exist, by definition, will never be able to reach Julia's performance, and the only way to do so would be to remove the interpreter. So this is a property (trait) that's applicable to any interpreted language. In this context, to say that Julia is an interpreted language because you can do the same interactive workflow as Python is like calling everything that can swim a dog, just because dogs can also swim.
Saying Julia is interpreted, but then assuring people that it's not as slow as all other interpreted languages, would force you into saying, "Well, I'll call Julia interpreted, but, uh, it actually compiles the code at runtime, so don't worry about performance, it's faster than any interpreted language".
"dynamic" normally refers to the typing discipline, not the the execution model
That's not true; "dynamically typed" is something different from a dynamic programming language. If you read the Wikipedia article about what a dynamic language is, you will see that it fits Julia perfectly---and has all the nice traits that you're looking for (and Julia quite famously makes the list of dynamic languages)! A few more points if you're still not convinced:
Because Julia's compiler is different from the interpreters used for languages like Python or R
The main point is that Julia doesn't need the interpreter. If you removed it, Julia would still be the same language with the same traits that you like... It would just be a bit harder to implement the bootstrapping.
And runtime eval, and the repl. No programming language "needs" an interpreter. It's a choice about implementation to enable a certain kind of workflow. It's a choice that the creators of Julia made because they wanted those things.
And interpreted means, by definition, that the language won't ever be as fast as Julia, because it does extra steps between running the program and executing it on the CPU (the interpretation).
Absolutely not true. Numerous interpreted languages have JIT compilers. This is not a dichotomy. JIT compilation is, for the most part, a technology to enable the creation of fast interpreters. A language can be compiled and interpreted. At least one interpreted language implementation is very competitive with Julia's performance (LuaJIT). Check out the benchmarks.
In the case of interpreters with JIT compilers, the reason most of these languages can't be as fast as Julia is not that an interpreter is running (which is also true in every Julia process), but that their semantics are not friendly to static binding or machine arithmetic. Unlike the languages to which Julia is frequently compared, Julia was designed for speed: the type system is friendly to static binding, the numeric types are friendly to machine arithmetic, and the memory model is friendly to the CPU cache and SIMD operations. Most interpreted languages were not designed with those goals.
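As a toy illustration of what "not friendly to static binding" means in practice (the Num class and add function below are my own, not from the thread): in Python, the meaning of + at a call site can change at runtime, so no compiler can safely bind it ahead of time:

```python
class Num:
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        return Num(self.v + other.v)

def add(a, b):
    # A compiler cannot statically bind this `+`: the method it
    # dispatches to can be swapped out at any point during execution,
    # so the lookup must happen on every call.
    return a + b

x, y = Num(2), Num(3)
print(add(x, y).v)  # 5

# Redefine + for the whole class at runtime; the very same call
# site now does something different.
Num.__add__ = lambda self, other: Num(self.v * other.v)
print(add(x, y).v)  # 6
```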
That's not true, "dynamically typed" is something different from a dynamic programing language. If you read the wikipedia article about what a dynamic language is, you will see that it fits Julia perfectly - and has all the nice traits that you're looking for (and Julia quite famously makes the list of dynamic languages)!
I didn't know this! Thanks for the correction. It's new information for me.
Regarding your list of bullet points, I will only deal with the last one, since the others are arguments from silence.
The Julia docs specifically say it's different from interpreted languages
The Julia docs also discuss the Julia interpreter and its role at runtime.
As for the excerpt you cite, while I wouldn't go so far as to call it false, it is a simplification. Julia's execution model is certainly very different from Python's standard implementation, CPython, which compiles to bytecode to be executed on the CPython virtual machine.
It is less different from PyPy---though still quite different. PyPy uses a profiling JIT and, last I heard, Julia doesn't; Julia uses type inference and other heuristics for static binding. The main similarity is that both generate machine code at runtime, but the kind of machine code they generate is still very different.
The Numba JIT for CPython, as far as I understand, actually works more similarly to Julia and can produce similarly impressive performance at runtime, but only works on a small subset of valid Python code and is manually applied to certain functions by the user. You also have to use it with NumPy to get some of Julia's benefits (arrays of inline machine numbers, necessary for cache optimization and SIMD operations)
The Nuitka compiler for Python is very different from Julia because it doesn't interpret code at all. It statically compiles everything. The performance is still crap because it retains all of Python's dynamic semantics and the machine still has to do the same amount of work at runtime, even though everything is statically compiled. You maybe get a 1.5x speedup from eliminating the bytecode VM. I think this is the most telling example. Compilation is not magic pixie dust. Code still has to be susceptible to optimization, whether it is interpreted or compiled or both.
Other static compilers for Python like mypyc do offer performance boosts because they use type hints to do more static binding, but you're still dealing with Python's abstract numeric types and Python's general memory model, which is super unfriendly to CPU-level optimizations. (every object just lives on some random spot on the heap)
As I said, if you want to make an interpreted language fast, you need to remove the interpreter, turning it into a compiled/JITted language. LuaJIT is not an interpreted language, and neither is Numba-JITted CPython... Just because Python originally is interpreted doesn't mean that, if you build a compiler/JIT for Python, the compiler suddenly turns into an interpreter.
(CPython is the standard implementation of Python. I think you meant Cython? The naming situation is quite confusing.)
Ah yes! And PyPy and friends ;)
Also, the JIT compilers I refer to do not "remove the interpreter". They function together with the interpreter.
And runtime eval, and the repl. No programming language "needs" an interpreter. It's a choice about implementation to enable a certain kind of workflow. It's a choice that the creators of Julia made because they wanted those things.
Exactly, and Julia doesn't need the interpreter for these features ;) You could turn off Julia's interpreter and still have runtime eval and a REPL.
Show me how.
REPL code can be parsed and then compiled like any other Julia code? It was my understanding that the Julia interpreter is rarely used.
To demonstrate, run ProfileView on a block of this kind of code in the REPL. You will see a full flame graph of compiler code inspecting all the types of the whole abstract syntax tree, and compiling the whole block of code together.
No line-by-line interpretation will happen---the whole graph is compiled and possibly inlined in one go, then run. Python/R/Ruby etc. don't do that.
Julia by default is dynamic, but not interpreted, although it can be interpreted.
Python, R and Ruby also don't do line-by-line interpretation. All the code is parsed and compiled to bytecode right away.
>>> def add(a, b):
... return a + b
...
>>> import dis
>>> dis.dis(add)
1 0 RESUME 0
2 2 LOAD_FAST 0 (a)
4 LOAD_FAST 1 (b)
6 BINARY_OP 0 (+)
10 RETURN_VALUE
You can even see how the bytecode changes as some (very rudimentary) profiling is done.
>>> for i in range(10):
... add(i, i)
...
>>> dis.dis(add, adaptive=True)
1 0 RESUME_QUICK 0
2 2 LOAD_FAST__LOAD_FAST 0 (a)
4 LOAD_FAST 1 (b)
6 BINARY_OP_ADD_INT 0 (+)
10 RETURN_VALUE
It's kinda nifty---though not relevant to the conversation. I just learned about the adaptive interpreter in 3.11 and I'm having fun poking the bytecode these days.
Here's how to show the compilation steps in Julia.
julia> function f(x)
for i in 1:10
x += i^4
end
return x
end
f (generic function with 1 method)
julia> @code_lowered f(5)
CodeInfo(
1 ─ x@_5 = x@_2
│ %2 = 1:10
│ @_3 = Base.iterate(%2)
│ %4 = @_3 === nothing
│ %5 = Base.not_int(%4)
└── goto #4 if not %5
2 ┄ %7 = @_3
│ i = Core.getfield(%7, 1)
│ %9 = Core.getfield(%7, 2)
│ %10 = x@_5
│ %11 = Main.:^
│ %12 = i
│ %13 = Core.apply_type(Base.Val, 4)
│ %14 = (%13)()
│ %15 = Base.literal_pow(%11, %12, %14)
│ x@_5 = %10 + %15
│ @_3 = Base.iterate(%2, %9)
│ %18 = @_3 === nothing
│ %19 = Base.not_int(%18)
└── goto #4 if not %19
3 ─ goto #2
4 ┄ return x@_5
)
julia> @code_typed f(5)
CodeInfo(
1 ─ goto #7 if not true
2 ┄ %2 = φ (#1 => 1, #6 => %12)::Int64
│ %3 = φ (#1 => 1, #6 => %13)::Int64
│ %4 = φ (#1 => x@_2, #6 => %6)::Int64
│ %5 = invoke Base.power_by_squaring(%2::Int64, 4::Int64)::Int64
│ %6 = Base.add_int(%4, %5)::Int64
│ %7 = (%3 === 10)::Bool
└── goto #4 if not %7
3 ─ goto #5
4 ─ %10 = Base.add_int(%3, 1)::Int64
└── goto #5
5 ┄ %12 = φ (#4 => %10)::Int64
│ %13 = φ (#4 => %10)::Int64
│ %14 = φ (#3 => true, #4 => false)::Bool
│ %15 = Base.not_int(%14)::Bool
└── goto #7 if not %15
6 ─ goto #2
7 ┄ %18 = φ (#5 => %6, #1 => x@_2)::Int64
└── return %18
) => Int64
julia> @code_llvm debuginfo=:none f(5)
define i64 @julia_f_137(i64 signext %0) #0 {
top:
%1 = call i64 @j_power_by_squaring_139(i64 signext 1, i64 signext 4) #0
%2 = add i64 %1, %0
%3 = call i64 @j_power_by_squaring_139(i64 signext 2, i64 signext 4) #0
%4 = add i64 %3, %2
%5 = call i64 @j_power_by_squaring_139(i64 signext 3, i64 signext 4) #0
%6 = add i64 %5, %4
%7 = call i64 @j_power_by_squaring_139(i64 signext 4, i64 signext 4) #0
%8 = add i64 %7, %6
%9 = call i64 @j_power_by_squaring_139(i64 signext 5, i64 signext 4) #0
%10 = add i64 %9, %8
%11 = call i64 @j_power_by_squaring_139(i64 signext 6, i64 signext 4) #0
%12 = add i64 %11, %10
%13 = call i64 @j_power_by_squaring_139(i64 signext 7, i64 signext 4) #0
%14 = add i64 %13, %12
%15 = call i64 @j_power_by_squaring_139(i64 signext 8, i64 signext 4) #0
%16 = add i64 %15, %14
%17 = call i64 @j_power_by_squaring_139(i64 signext 9, i64 signext 4) #0
%18 = add i64 %17, %16
%19 = call i64 @j_power_by_squaring_139(i64 signext 10, i64 signext 4) #0
%20 = add i64 %19, %18
ret i64 %20
}
julia> @code_native debuginfo=:none f(5)
.text
.file "f"
.globl julia_f_153 // -- Begin function julia_f_153
.p2align 4
.type julia_f_153,@function
julia_f_153: // @julia_f_153
.cfi_startproc
// %bb.0: // %top
stp x29, x30, [sp, #-32]! // 16-byte Folded Spill
str x19, [sp, #16] // 8-byte Folded Spill
mov x29, sp
.cfi_def_cfa w29, 32
.cfi_offset w19, -16
.cfi_offset w30, -24
.cfi_offset w29, -32
mov x19, x0
mov w0, #1
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #2
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #3
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #4
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #5
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #6
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #7
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #8
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #9
mov w1, #4
bl j_power_by_squaring_155
add x19, x0, x19
mov w0, #10
mov w1, #4
bl j_power_by_squaring_155
add x0, x0, x19
ldr x19, [sp, #16] // 8-byte Folded Reload
ldp x29, x30, [sp], #32 // 16-byte Folded Reload
ret
.Lfunc_end0:
.size julia_f_153, .Lfunc_end0-julia_f_153
.cfi_endproc
// -- End function
.section ".note.GNU-stack","",@progbits
I had to use the 4th power since squaring ended up being a compile-time calculation.
julia> function g(x)
for i in 1:10
x += i^2
end
return x
end
g (generic function with 1 method)
julia> @code_llvm g(5)
; @ REPL[16]:1 within `g`
define i64 @julia_g_455(i64 signext %0) #0 {
top:
; @ REPL[16]:2 within `g`
%1 = add i64 %0, 385
; @ REPL[16]:5 within `g`
ret i64 %1
}
julia> @code_native g(5)
.text
.file "g"
.globl julia_g_457 // -- Begin function julia_g_457
.p2align 4
.type julia_g_457,@function
julia_g_457: // @julia_g_457
; ┌ @ REPL[16]:1 within `g`
.cfi_startproc
// %bb.0: // %top
stp x29, x30, [sp, #-16]! // 16-byte Folded Spill
mov x29, sp
.cfi_def_cfa w29, 16
; │ @ REPL[16]:2 within `g`
.cfi_offset w30, -8
.cfi_offset w29, -16
add x0, x0, #385
; │ @ REPL[16]:5 within `g`
ldp x29, x30, [sp], #16 // 16-byte Folded Reload
ret
.Lfunc_end0:
.size julia_g_457, .Lfunc_end0-julia_g_457
.cfi_endproc
; └
// -- End function
.section ".note.GNU-stack","",@progbits
@ninjaaron your example is showing bytecode rather than machine code.
Julia compiles to native code which a CPU can execute directly, as you can see above.
Python bytecode still needs to be interpreted to machine code later (like every time the loop iterates); it's not something a CPU can execute directly.
That's a large part of why the Python loop will be slow, and the Julia loop fast.
Python bytecode still needs to be interpreted to machine code later (like every time the loop iterates); it's not something a CPU can execute directly.
We may need to be more careful when we say this. There are a large number of Python compilers out there at the moment, in particular LPython and Cython 3.0's Pure Python mode. A key ingredient here is to either explicitly use machine types or map Python types to machine types (int to int64).
Python is really becoming a metalanguage where you reuse the Python syntax to write in different languages, each which may implement very different semantics.
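As a rough stdlib-only illustration of that mapping (ctypes here merely stands in for the int64 a compiler like LPython or Cython would emit): Python's own int is arbitrary-precision, while a machine int64 wraps around exactly as the CPU register does:

```python
import ctypes

# Python's int is arbitrary-precision: this value exceeds the
# signed 64-bit range and is still represented exactly.
big = 2**63
print(big)  # 9223372036854775808

# A machine int64, as compilers mapping int -> int64 would use,
# wraps around at the 64-bit boundary, just like the hardware.
machine = ctypes.c_int64(big)
print(machine.value)  # -9223372036854775808
```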
@rafaqz Yes. I fully understand the difference between CPython and Julia's execution model. I have recently heard the contention that bytecode is a form of machine code, since the bytecode interpreter is often called a "virtual machine", and we know that there have been actual machines which execute JVM bytecode, for example.
But I think your point stands. CPython adds an extra layer of indirection between what the interpreter/VM does and what is actually executed on the machine. No question about that.
I would add that machine code vs. bytecode is not the only reason Julia loops are faster. The real reason is that Python ends up executing more machine instructions (and memory loads, very important) to achieve the same thing. All the indirection in Python ultimately means the machine simply has to do more work. As previously discussed, even statically compiled implementations of Python are nowhere near the speed of Julia, and if you look at their compiled output, you will see nothing but machine instructions.
It's still slow as balls because there is no way to do static binding on dynamically typed Python and you still need runtime method lookups and lots of runtime checks to execute the code. Plus, Python objects are heap-allocated, so there is a ton of time spent loading stuff from RAM that would be in the cache or directly in registers in Julia.
Compilation matters, but not nearly as much as language semantics.
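To put a rough number on the heap-allocation point above (exact sizes are CPython- and platform-specific, so treat these as illustrative): even a small int is a full heap object, several times larger than the 8-byte machine word Julia would store:

```python
import sys

MACHINE_INT_BYTES = 8  # what an Int64 occupies in a register or packed array

# In CPython every int is a boxed heap object carrying a refcount
# and a type pointer, so even the number 1 is far bigger than a
# machine word (28 bytes on typical 64-bit builds).
print(sys.getsizeof(1) > MACHINE_INT_BYTES)  # True

# A Python list holds pointers to those boxed objects, not the
# values themselves; chasing them is what hurts cache locality
# and rules out SIMD over the raw values.
xs = [1, 2, 3]
print(all(sys.getsizeof(x) > MACHINE_INT_BYTES for x in xs))  # True
```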
I'm not 100% sure if this is on purpose---emphasizing the "you probably know" without clarifying that it's wrong---but Julia is a just-in-time compiled (JIT) language... If you want to emphasize that it's dynamic, I'd say a
dynamic, just in time compiled (JIT) language
. Julia does have an interpreter, but I guess if a classic statically compiled language like C offered an interpreted mode, you wouldn't start calling it an "interpreted language" ;)