Closed Araq closed 4 years ago
You could define a standard macro that does node = nil, to hide this unsightly sight.
Not being familiar with C++, I am confused by the direction Nim is taking. I am confused about the whole new runtime thing, and I am a little worried seeing how much effort goes into inventing new semantics and patching the standard library to work in both modes, instead of making Nim ready for a 1.0 release, with garbage collection, thread-local heaps and a shared unsafe heap as it was advertised before.
Do not misunderstand me: I think Nim's current semantics around mutability are not great, and fixing that will probably require some form of ownership concept. But it seems to me that the new runtime is an experiment whose semantics have not been formally proven to work, and something that can open a myriad of new bugs.
Recently, we have seen new work about
These are not small changes: they are quite fundamental distinguishing features of a language. I may even agree that these are useful directions to explore. But it leaves me worried about what will happen to Nim as I know it now: a language that uses garbage collection, freeing me from reasoning pervasively about ownership, which I can compile to C (not C++), and which has a limited but simple threading model that I can easily reason about.
I am a little worried seeing how much effort goes into inventing new semantics and patching the standard library to work in both modes, instead of making Nim ready for a 1.0 release
IMO, it makes sense to do these changes before 1.0 release because -
they are quite fundamental distinguishing features of a language.
But it leaves me worried about what will happen for Nim as I know it now.
As far as I understand, destructors and owned refs are optional features that you may or may not choose to use. And they provide better safety as well as optimization opportunities wherever required.
But incremental compilation should be postponed until after v1. Instead (after destructors and owned refs are implemented) one release cycle should focus on bugfixes only.
@andreaferretti I share your fears and this is indeed all stuff for version 2. Having said that, if we release v1 as it is, with breaking changes to follow in v2, then why did we even take so long for v1? v1 took so long that we might as well put in the extra effort and get the language into a shape we're confident will stand the test of time.
Also, most regressions are not even due to feature changes. Most regressions are the result of bugfixes. That's terrible but apart from testing ever more things I'm out of ideas how to deal with this problem.
Having said that, when was the last time we made Nim worse? I don't see it: nil for strings and seqs is gone (yay), `func` started to become a thing (people like it), `toOpenArray` was added, exceptions started to work with async, tons of bugs have been fixed, the spec became more refined, the documentation improved and we put a lot of effort into our testing infrastructure.
But incremental compilation should be postponed until after v1. Instead (after destructors and owned refs are implemented) one release cycle should focus on bugfixes only.
Completely agree, if we improve the DLL/static lib generation stability that also offers a way out of the increasing compile-times.
Having said that, when was the last time we made Nim worse?
Uh, sorry for the misunderstanding, I never claimed that! In fact, I have seen many improvements :-)
It's just that new features are piling up and I am not sure that the language that will be standardized as v1 will much resemble the original vision of Nim (let's say, Nim as we know it now).
As far as I understood, the GC is still there in V1, how would the transition work?
AFAIK destructors are replacing the old `=destroy` that didn't really work, so we are not changing the usual Nim semantics with destructors but refining them. However, owned refs are quite different indeed.
The macro to hide `= nil` can be called `dispose` ;)
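For concreteness, such a helper could be as small as a template; `dispose` is just the name floated in this thread, not an existing stdlib symbol:

```nim
# Hypothetical helper suggested above: hides the explicit `node = nil`.
template dispose(node: untyped) =
  node = nil

var n: ref int
new(n)
n[] = 42
dispose(n)   # same effect as `n = nil`
assert n == nil
```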
Of course the GC is there in v1, to make a transition possible. But a transition to what? To a language without GC? This is unfortunately still not clear to me.
To a language without GC? This is unfortunately still not clear to me.
Yes, but it's easy to misinterpret when you put it this way. Memory management is still mostly declarative and automatic. And it's not like the existing GC frees you of memory management problems: you can easily have a cache that keeps growing in size, becoming a logical memory leak (I have seen this happening in production a lot of times). In addition to that, the GC doesn't close your sockets etc. reliably; the new runtime would.
The proper term is resource management, a resource being something that cannot be trivially copied/aliased:
Though putting file.close() in a finalizer kind of works at the moment.
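As a sketch of that workaround, using the finalizer overload of `new` (the wrapper type is made up for illustration; when, or whether, the GC actually runs the finalizer is not guaranteed):

```nim
type
  AutoFile = ref object   # hypothetical wrapper type, for illustration
    f: File

proc finalize(af: AutoFile) =
  # Run by the GC at some unspecified point after `af` becomes unreachable.
  if af.f != nil: close(af.f)

var af: AutoFile
new(af, finalize)
af.f = open("example.txt", fmWrite)
# no explicit close: we rely on the finalizer, which "kind of works"
```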
If I understand correctly, the reference counting would default to being disabled (in release builds?). If yes, do I understand correctly, that it is essentially a similar level of danger as disabled bounds checking? (Which is also disabled by default in release builds, right?)
If yes, could there be some "middle" level of compilation, with optimizations as in release build, but with bounds checks + owned refcounting enabled? Say, "memory-safe release", or "bounds-checked release", or something? (Name totally open to being bikeshedded.) Or at least some flag for the release build, that would enable both checks at once? I mean, if I had bounds checks enabled in release builds as of now (is it even possible?), I wouldn't know I need to add another flag to "be safe" without reading this RFC. Generally, I would very much like an idea of a well advertised "safe" mode of compilation, for people who want some benefits of a "release" build, but "safety" is paramount to them, who are willing to trade some performance if this means keeping as much safety as possible (in hope of avoiding heartbleed-style bugs, etc.).
@akavel Yes, completely agree and I consider it part of this proposal. You can use `--checks:on` with `--opt:speed --lineTrace:off` for servers / critical software. You always could. We need to communicate it better.
Also you can disable the checking only for the performance-critical parts with `{.push checks:off.}` ... `{.pop.}`.
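A minimal sketch of that pattern: even when the module is compiled with `--checks:on`, the loop below runs unchecked.

```nim
proc sumFast(data: seq[int]): int =
  {.push checks: off.}   # disable runtime checks for this hot path only
  for i in 0 ..< data.len:
    result += data[i]
  {.pop.}                # restore whatever check settings were active before

echo sumFast(@[1, 2, 3])
```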
Furthermore https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/Undangle.pdf suggests it is a good tradeoff to disable the ref counting for stack slots:
It also shows that dangling pointers stored in the stack and registers are especially short-lived; at use time all but one of the dangling pointers are stored in the heap
@akavel You can enable bounds checking or any other checking in a release build. Check `nim --fullhelp`:

```
$ nim --fullhelp | grep bound
  --boundChecks:on|off      turn bound checks on|off
```

Also see `config/nim.cfg` in the Nim install dir and search for `release` in it; you can enable the checks there, or create a `release-safe` configuration and send a PR to Nim ;D
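For illustration, a hypothetical `release-safe` define in `config/nim.cfg` could mirror the existing `release` block, keeping optimizations while leaving the runtime checks on (`releaseSafe` is not an existing switch; the block is a sketch):

```
@if releaseSafe:
  opt:speed
  checks:on
  lineTrace:off
@end
```

Built with something like `nim c -d:releaseSafe app.nim`, this would give release-level optimization with the safety checks kept.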
@Araq Thanks! I'm only not yet clear what exactly you are referring to by "it"; in particular, as part of the proposal, are you intending to add something like the `release-safe` mentioned by @nc-x, or maybe would you be OK with me/someone sending a PR with an attempt to do it? Or you don't want to add something like this?
I wanted to add `-d:safe` for a long time but it's very orthogonal to this RFC, because in this RFC I propose `--refChecks:on/off`, which is included in `--checks:on/off` just like all the other runtime checks we perform.
Out of curiosity, how hard would it be to write a prototype of this runtime? I'm a little cautious about this feature for a lot of the same reasons as @andreaferretti mentioned, but I can see the potential benefits as well. Playing with a prototype would be really helpful in understanding how Nim would change.
As a small aside, I like this proposed syntax for unowned refs better: https://forum.nim-lang.org/t/4743#29636
I posted this on IRC, but I thought it would be good to post it here as well:
How is this different from the `unique_ptr` and raw pointers that C++ offers? It seems that an owned reference is equivalent to a `unique_ptr`, and an unowned reference equivalent to a raw pointer. The only differences are the lack of `shared_ptr`, and the optional runtime debug checks (which are largely equivalent to using valgrind, purify, or the various memory sanitizers available).
Let say I have a piece of important, complex, multithreaded server software that I'm using in a commercial product. It's important that it not crash unexpectedly or corrupt memory, otherwise I'll have angry customers (and an angry manager) to deal with.
- If I compile it with `--checkRefs:on`, and some weird threading race condition occurs that causes an unowned ref to live beyond the lifetime of its object, then the server program aborts.
- If I compile it with `--checkRefs:off`, and the same situation occurs, the program might or might not continue on beyond the race condition. At best, nothing bad will occur. At worst, I'll have some form of hard-to-trace memory corruption that causes a segmentation fault somewhere random.

Both these outcomes are rather unappealing - I'm facing a higher risk* of programming oversights that lead to random program aborts and memory corruption.
This change would appear to actually put Nim in a situation that's worse than C++ with regards to memory management - at least C++ has the `shared_ptr` type! For this new runtime, one would have to construct a shared_ptr equivalent on top of plain reference types. I don't know how difficult that would be to implement, but I would be willing to bet that there will be some hard-to-handle corner cases.
Look at all the commonly used programming languages - Python, Java, Ruby, Javascript, C#, C++, C, PHP, Go. Out of all of those, only 2 have memory management schemes that involve manual or semi-manual memory management mechanisms of the sort being proposed. Even Rust has something that, while not exactly as automatic as a garbage collector, is at least as surefire as one.
I know there are alternatives out there. What about doing pure reference counting, and trying to detect possible reference cycles at compile time using the type graph? or special-casing reference cycles so that users have to mark a break point?
At the end of the day, I think the decision on whether to implement this proposal comes down to who and what Nim is targeting. If Nim is supposed to be used as a general programming language, in the same areas as Go, C#, Java, and Python, then it needs to have reliability, in addition to performance. However, if Nim is supposed to be used primarily in embedded or HPC systems, where assembly, C, and C++ are the only suitable candidates, then perhaps performance needs to be the only concern.
Personally, not having to worry about memory corruption or resource leaks** is one of the features that drew me to Nim. Having a language that had the speed of C, with the reliability and ease-of-use of Python, was what I was looking for.
\* Relative to Nim's current semantics
\*\* Beyond bad cache logic and such
which are largely equivalent to using valgrind, purify, or the various memory sanitizers available
No, they are not. The difference is that the scheme detects dangling pointers when they exist, not when they are deref'ed. That's something that valgrind, purify etc cannot detect because it would violate C's semantics. Whether that is in practice a big difference or not remains to be seen.
Let say I have a piece of important, complex, multithreaded server software that I'm using in a commercial product. It's important that it not crash unexpectedly or corrupt memory, otherwise I'll have angry customers (and an angry manager) to deal with.
We don't have many of these highly reliable multithreaded servers in Nim. In practice the existing GC plus the various complex interactions with the OS's APIs mean that we have more unreliability than with a simpler more deterministic solution. Proof: Look at Nim's issue tracker.
This change would appear to actually put Nim in a situation that's worse than C++ with regards to memory management - at least C++ has the shared_ptr type!
No, the shared_ptr type is worse than a safer unique_ptr, as you need to watch out that you don't create cycles.
Personally, not having to worry about memory corruption or resource leaks is one of the features that drew me to Nim.
You can have plenty of resource leaks, the GC only collects memory.
If I compile it with --checkRefs:on, and some weird threading race condition occurs that causes an unowned ref to live beyond the lifetime of its object, then the server program aborts.
The point is that you get a chance to detect and correct the weird threading race condition in that mode.
I do support adding a shared_ptr equivalent but I am sure it is not needed right now.
@Varriount IMO, this PR paves the way for safe and overhead free memory management. Reference counting adds noticeable overhead especially if you consider atomic inc/dec instructions that have 18-25 cycles delay (https://www.agner.org/optimize/instruction_tables.pdf).
If you do have a use case where you have multiple references and there is no way you can say which one is the owner - all references have a 100% dynamic lifetime (an improbable but possible use case) - then `shared_ptr` and its overhead become justifiable. `shared_ptr` in Nim is possible even now, if you need it. See PR: https://github.com/nim-lang/Nim/pull/10485. Though, I don't think everyone needs to pay the reference counting overhead.
@cooldome How does this pave the way for anything safer than what Nim currently has?
As the language currently stands, use-after-free and memory corruption bugs are practically non-existent - one has to be using `ptr` types, interfacing with C code or using some other explicitly unsafe mechanism to cause them.
This proposal would change that. Any program using references could exhibit those bugs.
Rebuttals to this fact seem to be the following:
1. Code should be properly tested, which would catch these bugs.
2. Static analysis can find the situations in which they arise.
With regards to the first point, unfortunately we do not live in a perfect world. Code gets written all the time that isn't properly tested, whether out of laziness, or because an individual simply doesn't have time to. In many situations, it is also incredibly difficult to write comprehensive tests (such as when code relies heavily on external data, such as a REST API); there is only so much mocking and separation one can do.
With regards to the second point, without drastically changing the language, no amount of static analysis could ever hope to find all the situations in which use-after-free situations could arise. To do so, one would need to solve the halting problem. Even finding some of those situations will be hard, especially when one considers how multithreading can affect when parts of a program's logic (and therefore memory allocation, deallocation, and accesses) may run.
I would much prefer just biting the bullet and using atomic reference counting and cycle detection. If a program is too slow, I throw more computing power behind it. If I can't do that, then I can resort to using raw pointers and the risks they bring (and let's not kid ourselves here, that's what this proposal is all about, turning the majority of references into a kind of raw pointer type).
Technical implementation aside, what doesn't seem to be considered here is how this behavior will be perceived by those evaluating Nim. Most commonly used programming languages don't have the possibility of use-after-free or memory corruption bugs. The worst thing most languages have that's even remotely similar is null pointer/reference errors.
How will it look to those coming from C#, Javascript, etc. that they now have to put more thought into how long an object will live? "Why switch from Javascript to Nim, and face errors I've never seen before, when I could switch instead to something like Go?"
I'm not saying that Nim should just become a clone of another language, but there is a limit to what people are willing to consider.
With regards to the second point, without drastically changing the language, no amount of static analysis could ever hope to find all the situations in which use-after-free situations could arise. To do so, one would need to solve the halting problem.
That's wrong. Almost always whenever somebody brings up the halting problem it's wrong. What generally happens is that the analysis is pessimistic, but safe. For example:

```nim
var s = "string"
if haltingProblem:
  s = 34
```

The Nim compiler does not allow `s = 34` and it doesn't have to solve the halting problem which is encoded in the condition.
I would much prefer just biting the bullet and using atomic reference counting and cycle detection. If a program is too slow, I throw more computing power behind it. If I can't do that, then I can resort to using raw pointers and the risks they bring (and let's not kid ourselves here, that's what this proposal is all about, turning the majority of references into a kind of raw pointer type).
Atomic reference counting with cycle detection is one of the slowest GC algorithms you could come up with! And if you don't mind its overhead, nobody is stopping you from leaving `refChecks` on all the time for everything.
and let's not kid ourselves here, that's what this proposal is all about, turning the majority of references into a kind of raw pointer type
Pure FUD.
"Why switch from Javascript to Nim, and face errors I've never seen before, when I could switch instead to something like Go?"
How is that different from today where thousands of programmers already picked Go and not Nim? And since when is performance not important to have and a feature of its own? Most existing users of Nim picked it - among other things - for its performance.
One of the best qualities of Nim is that most of the time it is a strict superset of the capabilities of any other language - that is, any code that you can imagine in Ruby, JavaScript, C++ or Malbolge can have an equivalent representation in Nim, composed of roughly the same abstractions, expressed with similar elegance.
Completely eliminating the GC would lose us this quality because certain APIs rely on the existence of a GC. With that said, I don't see this proposal as a definitive plan to eliminate the GC from Nim, but rather as a way to greatly increase the number of programs that can be written without one. In particular, the standard library of Nim will be written with more care regarding resource management and the result will be that many user programs will become smaller and more efficient. Where the nature of the problem still requires more ad-hoc sharing, I think Nim can still provide a shared `ref` pointer type in the future that will trigger the inclusion of the current GC in your program. We don't have to lose what we already have.
@araq, Is there a branch where you're working on this? Or when would you expect a somewhat working (or at least building) prototype? That might make the discussion more concrete.
The prototype implementation hides behind the `--newruntime` switch. But please don't report bugs with it yet, it's too early. We hope to have a prototype within the next few weeks, but no promises.
So if I understand correctly what you're proposing are optional annotations that allow the GC to be turned off, is that correct?
What follows is that libraries need to be explicitly written to support these annotations, right? So we will have two different ways to do things in Nim and end up in situations where libraries are written without a care in the world that they use a GC, and that will mean my GC-free app that uses `owned` won't be able to use those libraries. Is my assumption correct?
That's wrong. Almost always whenever somebody brings up the halting problem it's wrong. What generally happens is that the analysis is pessimistic, but safe. For example
```nim
var s = "string"
if haltingProblem:
  s = 34
```
The Nim compiler does not allow s = 34 and it doesn't have to solve the halting problem which is encoded in the condition.
Yes, because of type checking, which can be done at compile time. Ref count checking to catch use-after-free cannot, so this is not an analogue.
As to test coverage for a ref counting debug build: it's good that we don't have to test all the control flow paths which deref the owned ref. But since the abort could be triggered by any invalidation of the memory behind the owned ref, we do have to test all flow paths which do that, is that correct? If it is, these would be cases where the owned ref
Do we have any metrics for the effort this would take? It "feels" easier than covering derefs, though.
... libraries are written without a care in the world that they use a GC, and that will mean my GC-free app that uses `owned` won't be able to use those libraries. Is my assumption correct?
AFAIU, your owned-ref-aware app code could be compiled in use-GC mode together with the non-owned-ref-aware lib code. The `owned` (and maybe `dispose`) keywords in the app code could just be ignored then.
But since the abort could be triggered by any invalidation of the memory behind the owned ref, we do have to test all flow paths which do that, is that correct?
Probably but we have the technology in the compiler to iterate over all control flow paths for a proc body. What would be required is some "abstract RC effect summary" for every proc so that the analysis doesn't have to inline every proc. Solving this for the general case seems intractable indeed but the tool could tell you if the program couldn't be proven and you should keep the runtime checks. It's too early to put too much thought into it.
If you care that much about correctness, why do you even use `ref` to begin with? Ada Spark doesn't support dynamic heap management for good reasons and it works; it has produced robust, proven software for decades. Once you're into correctness-over-everything you also start to care about out-of-memory situations...
So if I understand correctly what you're proposing are optional annotations that allow the GC to be turned off, is that correct?
Correct, but it's too early to say if they stay optional in the long run or not.
What follows is that libraries need to be explicitly written to support these annotations, right? So we will have two different ways to do things in Nim and end up in situations where libraries are written without a care in the world that they use a GC, and that will mean my GC-free app that uses owned won't be able to use those libraries. Is my assumption correct?
Correct, and I propose to not support the GC mode forever for this reason, to avoid a permanent split. But it's much too early for this decision.
If you care that much about correctness why do you even use ref to begin with
Well, as far as I know, quite a lot of security issues arose as a consequence of people trying to manage memory manually.
Correct and I propose to not support the GC mode forever for this reason, to avoid a permanent split.
I don't know, maybe you don't care, but I am pretty sure that Nim would lose most of its already small community should this ever happen
Well, as far as I know, quite a lot of security issues arose as a consequence of people trying to manage memory manually
What has that to do with anything? I don't propose manual memory management.
Your original proposal mentions as a possible disadvantage
Dangling unowned refs cause a program abort and are not detected statically.
Dealing with issues of this kind is, in my book, definitely manual memory management. That said, I am going to read the paper you linked to get a more informed impression.
I don't know, maybe you don't care, but I am pretty sure that Nim would lose most of its already small community should this ever happen
Mind explaining where you got that idea from? To me Nim, with its GC, hits a ceiling when trying to learn more advanced computing concepts, like multithreading. Also when interfacing with C++, there is a lot of headache to make it work properly. Deterministic memory management - yes, not manual - is a welcome change.
@Araq
If you care that much about correctness why do you even use ref to begin with.
I care about rational risk management while using a language I really like. The GC protects me from use-after-free without the risk of aborting in production. Owned refs could do that too, but only if two conditions are met:
I just want to be able to do a cost-benefit-analysis for a technology change. If it's too early for that, I'll just have to wait.
@b3liever Sure, I can explain. I don't know your background. Me, I have used many languages over the years: Scala, Java, Python, Javascript, Factor, Haskell, PHP, Clojure, OCaml and more. These are quite different in many respects, but one thing they have in common is that they are all garbage collected. I probably would not have considered Nim if it had been advertised from the start as a language with semi-manual memory management: having to do this, I would have tried Rust, whose ownership semantics look much more stable, or gone directly to C++.
Over the years I have seen the birth of many Nim libraries, and while I don't have statistics, my impression is that a lot of them make liberal use of the GC. This is entirely unscientific, but it leads me to think that a significant percentage of Nim users comes from GC languages.
Now, for a C++ expert, dealing with memory management at this level might not seem like a big deal. But for someone who has never used C++ seriously, it makes a big difference. Suddenly, it is not clear why one should want to try Nim instead of going directly to performant and stable languages with a larger community. The illusion that Nim is as fast as C but as easy as Python suddenly breaks.
The question remains how many of the people involved with Nim come from GC languages. I would bet quite a lot, but maybe we can gather some information from the annual surveys
Incidentally, I have another doubt. I am familiar with neither the details of the proposal nor with Rust, so forgive me if I say something wrong. If I recall correctly, Rust introduces the concept of borrowing pointers, allowing one to write functions that take a pointer of unknown ownership when this doesn't matter.
Under this proposal, what would this look like? Would I just write a function taking a `ref` and then the compiler would specialize that to an `owned ref` when needed?
The current Nim GC works conservatively: it scans the stack. So aliases kept on the stack are not ref-counted. If an alias is passed to an address "down" in the stack, the ref escapes the lifetime of the current "owned ptr". With the GC on, there is no problem, since the GC will "see" this alias when it scans the stack. Without the GC, a "dangling reference" will occur and can only be detected if ref-counting is done, and this is potentially costly (ref-counts are added). Moreover, the program will abort now; current Nim would continue instead, regarding the aliased ref simply as a returned value that can be used further. With lifetime analysis, Nim could find out that a lifetime escape occurs. Then, the compiler should give an error.
Well, I'm out of this discussion for now. If the model fails in practice, we'll notice. Fear mongering doesn't help anybody. We lived for a decade with optional runtime array index checks, we don't see complaints about it, and neither do we see these mystical highly reliable servers that can only possibly work with a garbage collector.
So we ... end up in situations where libraries are written without a care in the world that they use a GC, and that will mean my GC-free app that uses owned won't be able to use those libraries. Is my assumption correct?
Correct ...
Can someone help me understand this statement? Is this about an owned-ref-aware app which is compiled without GC support, then to be linked against a binary lib which relies on GC? If it is, I understand why this wouldn't work. But if we are talking about such an app compiled with GC enabled together with a non-owned-ref-aware lib, shouldn't that work if the compiler learns to ignore `owned` (and the "niling" or disposing of unowned refs) in this case? The doubly-linked-list example code from the proposal works with GC if `owned` is deleted.
Honestly, I don't get the List example completely. Case: a function calls the delete function and passes the List and an unowned ptr named "elem". The crucial step is the last line in delete: the elem.prev.next reference gets updated, now pointing to the successor of elem. elem itself is now a candidate for deallocation. What happens with elem at the call site then? The caller doesn't know that "elem" is "dangling". With the current stack scan, the GC would see the unowned ptr at the call site, preventing it from early deallocation. Later, when the ptr goes out of scope, the Node becomes unref'ed and can safely be deallocated. Sorry for my dumb question, no FUD intended.
@Araq would you consider adding an `Rc[T]` utility type to the standard library? It occurred to me that, if we compare owned refs to array bounds checks, then `Rc[T]` + owned ref seems to kinda resemble `seq[T]` + guard mutex, if I squint my eyes. Or, in other words, in the case of seq we can check the `len`. Pure owned/unowned refs don't seem to have such a feature; maybe there could be some easy way in the standard library to add it explicitly on demand? But that's just a vague thought, maybe it's not really needed. Actually, now that I think of it, it could maybe even be added as a third-party lib if such a need materializes.
@SixteNim Maybe I screwed up this example, check https://researcher.watson.ibm.com/researcher/files/us-bacon/Dingle07Ownership.pdf for more details please.
@akavel I would even support `GcRef[T]` if it helps to mitigate fears. The stdlib is fine without it though, as far as I've been able to analyse it without tool support.
elem itself is now a candidate for deallocation. What happens with elem at the call site then? The caller doesn't know that "elem" is "dangling"
As far as I understand, `elem` is now deallocated. The caller could not have any owning reference to `elem`, because only one exists, and that was `elem.prev.next`. Hence it could only have non-owning references. It is the task of the programmer to ensure that non-owning references are not used after the owning reference goes out of scope. Hopefully, the programmer understood that `delete` removes the owning reference, and does not use `elem` after calling `delete(list, elem)`.
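To make the shape of the example concrete, here is a sketch of a doubly linked list with owned refs, modelled on the proposal (the field names and the exact `delete` body are my reconstruction, not the proposal's verbatim code, and this targets the experimental `--newruntime` semantics):

```nim
type
  Node = ref object
    prev: Node         # unowned back pointer
    next: owned Node   # each node owns its successor
    data: int

  List = object
    tail: Node         # unowned
    head: owned Node   # the list owns the whole chain

proc delete(list: var List; elem: Node) =
  if elem == list.tail: list.tail = elem.prev
  if elem.next != nil: elem.next.prev = elem.prev
  # Moving the owned `next` out overwrites the last owning ref to
  # `elem`, so `elem`'s node is destroyed. With --checkRefs:on, the
  # caller's surviving unowned `elem` would trigger an abort here.
  if elem == list.head:
    list.head = move(elem.next)
  else:
    elem.prev.next = move(elem.next)
```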
The doubly linked list example is in the paper, and it seems to agree with @Araq's description. In the paper, I could not find an implementation of `delete`, though. I am not sure whether `delete`, as implemented in the original post by @Araq, would pass the type checker from the paper.
(Digression: in my opinion, this mechanism is quite brittle, as this example shows. Encoding reasoning about ownership in a sound type system is not trivial - in fact it is hard to tell whether calling a function will destroy a reference without baking this information deep down in the type system, as Rust does. I think everyone agrees that this complicates the type system by a fair margin. The mechanism proposed in the paper seems a best-effort approach to tracking lifetimes, and I am not sure this is something I can count on)
Incidentally, I am not sure why the first mentioned pro of this proposal is
We can effectively use a shared memory heap, safely. Multi threading your code is much easier.
while the paper has a section (6.3 Multithreading) dedicated to discussing the fact that the mechanism will only work single-threaded:
As described above, the Gel compiler's reference count elimination optimization will work only in single-threaded programs; this is a significant limitation.
So the optimization doesn't work, yes, I know, I am not proposing this particular optimization anywhere.
in fact it is hard to tell whether calling a function will destroy a reference without baking this information deep down in the type system, as Rust does.
Where does Rust do that? It doesn't, it knows about where references are "consumed", that's not the same as destroy.
Hopefully, the programmer understood that delete removes the owning reference, and does not use elem after calling delete(list, elem).
There is no "hope" involved here, it's detected at runtime if the programmer doesn't understand it. You need to constantly misrepresent the situation in order to have good arguments, have you noticed?
Sorry, I don't want to misrepresent anything. In fact, I did my best to summarize the situation as I understand it in the first two paragraphs, before the parenthetical remark.
it's detected at runtime if the programmer doesn't understand it
Yes, but then I don't understand what happens after detection at runtime. The object is gone, it's not like you can recreate it. The paper states
If a destroyed object has a non-zero reference count, a run-time error occurs and the program is terminated; it is the programmer's responsibility to avoid this condition.
Detecting an error and terminating the program may be technically safe, but it is not very useful.
Where does Rust do that? It doesn't, it knows about where references are "consumed", that's not the same as destroy.
I think you are right; my understanding of Rust is very limited. I will comment further on this issue once my understanding of the exact mechanism in place is firmer, to avoid confusion.
Owned refs

This is a proposal to introduce a distinction between `ref` and `owned ref` in order to control aliasing and make all of Nim play nice with deterministic destruction. The proposal is essentially identical to what has been explored in the "Ownership You Can Count On" paper.
Owned pointers cannot be duplicated, they can only be moved, so they are very much like C++'s `unique_ptr`. When an owned pointer disappears, the memory it refers to is deallocated. Unowned refs are reference counted. When the owned ref disappears, it is checked that no dangling `ref` exists; the reference count must be zero. The reference counting can be enabled with a new runtime switch `--checkRefs:on|off`.

Nim's `new` returns an owned ref; you can pass an owned ref to either an owned ref or to an unowned ref. `owned ref` models the spanning tree of your graph structures and is a useful tool that also helps Nim's readability. The creation of cycles is mostly prevented at compile time.

Some examples:
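The example code itself did not survive in this copy. As a hedged illustration of the dangling-ref scenario the following paragraph refers to, here is a minimal sketch (hypothetical names, using the proposed `owned` semantics; this does not compile without the proposed new runtime):

```nim
type
  Node = ref object
    data: int

proc main =
  var dangling: Node          # plain 'ref', i.e. unowned
  block:
    var n = Node(data: 3)     # construction yields an 'owned ref'
    dangling = n              # unowned alias; tracked by the check refcount
    # 'n' dies here: the object is destroyed, but with --checkRefs:on the
    # runtime sees a surviving unowned ref and terminates the program.

main()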
We need to fix this by setting `dangling` to `nil`:

The explicit assignment of `dangling = nil` is only required if unowned refs outlive the `owned ref` they point to. How often this comes up in practice remains to be seen. Detecting dangling refs at runtime is worse than detecting them at compile time, but it also allows a different development pacing: we start with a very expressive, hopefully not overly annoying solution, and then we can check a large subset of the problems statically, with a runtime fallback, much like every programming language in existence deals with array index checking.
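The code for the fix itself is missing from this copy; a self-contained sketch of the disarming pattern described above (hypothetical names, proposed syntax, not compilable with the current runtime):

```nim
type
  Node = ref object
    data: int

proc main =
  var dangling: Node
  block:
    var n = Node(data: 3)
    dangling = n
    dangling = nil   # disarm the unowned ref before the owned ref dies
    # destruction of the object now sees a zero refcount: no abort

main()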
This is how a doubly linked list looks under this new model:
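The list definition did not survive extraction here. A hedged reconstruction of its likely shape under the proposal: the `next` pointers form the owning spanning tree, while `prev` and `tail` are unowned back references (names and the `append` helper are illustrative):

```nim
type
  Node*[T] = ref object
    prev*: Node[T]           # unowned back pointer
    next*: owned Node[T]     # owning pointer: the list's spanning tree
    data*: T

  List*[T] = object
    tail*: Node[T]           # unowned
    head*: owned Node[T]     # owns the first node, which owns the second, ...

proc append*[T](list: var List[T]; data: T) =
  var n = Node[T](prev: list.tail, data: data)  # owned on creation
  list.tail = n              # unowned alias taken first
  if n.prev != nil:
    n.prev.next = n          # ownership moves into the spanning tree
  else:
    list.head = n            # or into the list head
```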
EDIT: Removed wrong `proc delete`.

Nim has closures, which are basically `(functionPointer, environmentRef)` pairs. So `owned` also needs to apply to closures. This is how callbacks can be done:
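The callback example is missing from this copy; a hedged sketch of what it plausibly looked like (illustrative names, proposed syntax):

```nim
type
  Widget = ref object
    onclick: owned proc()    # the widget owns the closure and its environment

proc main =
  var w = Widget()
  var clicked = 0
  w.onclick = proc () =
    inc clicked              # 'clicked' is captured via the environment ref
  w.onclick()
```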
`main` is transformed into something like:

This seems to work out without any problem if `envParam` is an unowned ref.

Pros and Cons
This model has significant advantages:

- We can map `owned ref` to C's `restrict`'ed pointers.
- The runtime costs are much lower than C++'s `shared_ptr` or Swift's reference counting.
- Porting Nim code to this model amounts to adding the `owned` keyword to strategic places. The compiler's error messages will guide you.

And of course, disadvantages:
- Code needs to be adapted with `owned` annotations.
- `nil` as a possible value for `ref` stays with us, as it is required to disarm dangling pointers.

Immutability
This RFC is not about immutability, but once we have a clear notion of ownership in Nim, it can be added rather easily. We can add an opt-in rule like "only the owner should be allowed to mutate the object".
Possible migration period
Your code can either use a switch like `--newruntime`, and then it needs to use `owned` annotations, or else you keep using Nim as before. The standard library needs to be patched to work in both modes. `owned` is ignored if `--newruntime` is not active. We can also offer an `--owned` switch that enables the owned checks but uses the old runtime.
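For concreteness, the switch combinations described above might be invoked like this (the switches come from this RFC and are hypothetical until implemented):

```sh
nim c --newruntime main.nim   # new runtime; 'owned' annotations required
nim c --owned main.nim        # owned checks, but with the old runtime
nim c main.nim                # 'owned' is parsed but ignored
```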