Closed · Nidrop closed this issue 1 year ago
Compiling Nim comprises three stages: generating C files, generating object files, and finally linking the object files. nlvm
skips the C stage and gains some performance, then loses some of it back because it doesn't reuse object files, then gains again thanks to other pipeline improvements, such as a better linker by default.
Which approach is faster depends on the code and on the changes between each compile.
An interpreter is available in https://github.com/arnetheduck/nlvm/pull/31, based on ORC, which is more or less what lli does.
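As a rough illustration of the lli-style execution mentioned above (assuming clang and lli are installed; the file name is made up), LLVM IR can be run directly without ever producing an object file:

```shell
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from IR\n"); return 0; }
EOF
clang -S -emit-llvm hello.c -o hello.ll  # emit textual LLVM IR, no object file
lli hello.ll                             # JIT-compile and run the IR in-process
```

An ORC-based interpreter inside nlvm would do essentially the same thing programmatically: hand the freshly generated IR to the JIT instead of writing and linking object files.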
Firstly, if nlvm creates one object file, does that mean the original Nim compiler will be faster when recompiling, because it can reuse object files? If not, would an nlvm-based interpreter be efficient in theory? I think it is possible via LLVM's lli.