ShadauxCat opened this issue 9 years ago
BTW, destructors are not supported for Dao classes, they may only be defined for wrapped types defined in C.
That answers one question I had then. Any chance they could be added? Destructors prevent programmer error, and I think a language based on a philosophy with key tenets of "naturalness" and "action without effort" should make it easy for developers to avoid errors.
I can provide plenty of examples of cases where destructors prevent errors. First one that comes to mind is a logging class that opens a file handle. If the programmer forgets to call "close" on it and then the reference to the logger goes out of scope, a file handle has been leaked. Do that enough times, you run out of handles and crash. Destructors keep the programmer from having to worry about that because if they forget to call close explicitly, the destructor will clean it up.
(Key point being that memory is not the only resource a class can allocate. They can take file handles, acquire semaphores, open sockets, or even consume software resources, like for example in a game, occupy space in a grid. I think classes need a reliable way to clean that up. I can see them being used much less than in C++, but they're still important. I can't stand working with OOP in JavaScript and the lack of destructors is one reason why.)
There is a whole bunch of questions arising from the existence of destructors. Especially the time when they're called has to be precisely specified, but more importantly managed by the programmer. This is very difficult, and that's why we use defer(){} instead for all this non-pure stuff. It forces the programmer to strictly structure the program, so leaks are prevented (but still possible).
I agree that if the programmers care about when the cleanup happens, then it's reasonable to require them to use something like defer(){}. But there are lots of cases where it doesn't matter when resources are cleaned up, only that they are. I'd say cleanup happening at an unpredictable time is better than not at all, personally.
I'll use C# as an example. In C# objects are destroyed whenever the garbage collector happens to run. There's no way to predict it. But C# still has destructors and they're still very useful even though the timing can't be predicted.
But there are lots of cases where it doesn't matter when resources are cleaned up, only that they are.
Then it shouldn't be an issue to clean them ASAP, which is a perfect fit for defer(){}.
The problem with using defer(){} in that situation is that it can lead to a lot of repeated code.
In the logger case, consider the following two examples, assuming a Logger class that opens a file on construction:
Example 1:
routine Routine1()
{
    l = Logger("Routine1.log");
    # Do some stuff.
}

routine Routine2()
{
    l = Logger("Routine2.log");
    # Do some stuff.
}

routine Routine3()
{
    l = Logger("Routine3.log");
    # Do some stuff.
}
Example 2:
routine Routine1()
{
    l = Logger("Routine1.log");
    defer()
    {
        l.Close();
    }
    # Do some stuff.
}

routine Routine2()
{
    l = Logger("Routine2.log");
    defer()
    {
        l.Close();
    }
    # Do some stuff.
}

routine Routine3()
{
    l = Logger("Routine3.log");
    defer()
    {
        l.Close();
    }
    # Do some stuff.
}
The first example uses destructors. The second uses defer(){}. defer(){} is a GREAT concept and I love it for situations where I need to do some specific handling only once or a few times, but without destructors, requiring its use is pushing the burden of cleaning up resources on the client software, and leading to potentially a lot of copy-pasted code that could become a maintenance issue. With destructors, the burden of cleanup is placed on the module developer, cleanup only has to be maintained in one place, and the client code ends up being both simpler and cleaner.
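For reference, the Logger assumed by Example 1 might look something like the sketch below. The Destructor() hook is purely hypothetical (Dao classes have no destructor syntax today, which is exactly what this request is about); the point is simply that the cleanup lives in one place, inside the module:

# Hypothetical sketch only -- Dao classes currently have no destructors.
class Logger
{
    routine Logger(path : string)
    {
        # ... open the log file, storing the handle in a member ...
    }

    routine Close()
    {
        # ... flush and close the file handle ...
    }

    # Imagined destructor hook: called automatically when the instance is
    # collected, so a forgotten Close() no longer leaks a file handle.
    routine Destructor()
    {
        Close();
    }
}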
As an even more complicated example:
Suppose I write a class that contains a Logger as a member. The logger is private and no one knows it's there. My class has nothing to do with logging, it just logs debugging output, so the person working with it isn't acutely aware of the fact that my class even has a Close() function on it. In this case, it's highly probable that a programmer won't even think about the need for a defer(){} block for this class and will just use it and let it go out of scope, and won't notice that they're leaking file descriptors until their code goes live and crashes after running for three weeks. Then they have to go back and add defer() blocks in every place where they use my class, or close it manually in cases where defer() blocks won't work because it's a resource shared by multiple functions. Maybe THEY'VE even created a class that contains MY class so now they have to add a Close() method to their class to clean up my class so my class can clean up the logger class.
It can keep getting more and more complicated and add up to days of work to fix the problem.
If Logger has a destructor that's called automatically when it's cleaned up, none of these other classes even have to think about it. It happens automatically when their class goes out of scope and logger no longer has any references to it.
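A minimal sketch of that situation, using only existing Dao features (the Widget class and method names are made up for illustration):

# Widget has nothing to do with logging, but it quietly owns a Logger,
# so without destructors it must grow a Close() of its own...
class Widget
{
    routine Widget()
    {
        logger = Logger("widget-debug.log");
    }

    routine Close()
    {
        logger.Close();
    }

    var logger : Logger;
}

# ...and every user of Widget has to know that and remember the defer:
routine UseWidget()
{
    w = Widget();
    defer()
    {
        w.Close();   # easy to forget -- Widget doesn't look like a resource
    }
    # Do some stuff.
}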
Why don't you want to use (class) decorators for such repetitive patterns?
I'm not certain how a class decorator would solve the problem. If it forces a defer() block that destructs the class, that would break cases where the class needs to stick around (i.e., member of another class). If it's a function decorator, that still puts the burden of remembering to use that decorator on the client code.
Let me counter with this: Why do you want to not have destructors? They exist in almost every object-oriented language I've ever worked with, from the simple single-threaded (Python) to the multi-threaded (C#) to the potentially massively multi-threaded (C++). JavaScript doesn't count because it's only barely object oriented - OOP in JavaScript is like hammering in a nail with the handle of a screwdriver.
I understand that I've come in and opened several tasks within a short period of time and maybe that makes you feel like I'm asking for the moon. But I think I've also shown I'm willing to do the work. When I open issues, more than anything, I'm just looking for approval of the idea; once it's approved I'm willing to write it myself. So I'm not trying to drop a ton of work on your shoulders.
But I do keep getting a lot of responses from both of you to the tune of "I haven't needed it, so it must not be necessary."
Just to give a little background on who I am and why I'm asking for these things that may seem insignificant for day-to-day use - I'm a AAA game developer. I've worked on very large codebases (Mass Effect, Star Wars: The Old Republic). I've seen projects get HUGE.
I've seen how small variances in memory layout (like using a map to bool instead of a set) can ruin cache coherency and destroy performance. I've seen how those extra bools can cause a game to blow out the limited memory available on console platforms, or cause serious memory fragmentation that results in being unable to allocate.
I've seen maintainability issues like not having destructors or finalizers completely destroy a project.
I'm not trying to be contrary or hostile here (and I really hope this isn't taken that way), but like I said, I'm getting a lot of responses from both you and @Night-walker saying that things are not necessary because you've never needed them. But as I've said before... just because you don't need it doesn't mean no one does. If I didn't need it - if I hadn't needed and used it countless times in the past - I wouldn't ask for it.
Believe me when I say that I'm not trying to spit all over Dao. Truth is, I love Dao. I want to use it in a game project I'm working on. I want to use it as a complete replacement for python for my personal use. But some of these issues I've filed will prevent me from doing either. I'm filing these issues because I want Dao to succeed and I want Dao to be a language I can take to other game developers on future projects and say "hey, we should check this language out." But I know without some features (like destructors), using Dao for a huge project like a game will be a non-starter for them.
I really am trying to help. I promise. Both by providing feedback from my experiences using the language, and by actually contributing code. I apologize if I've come on too fast and too strong, but I really do want to help, not hurt.
Point of semantics: I'm using "destructor" and "finalizer" interchangeably here when they are actually different things in some ways. Destructors are preferable to finalizers for a number of reasons, but finalizers are still better than nothing.
But I do keep getting a lot of responses from both of you to the tune of "I haven't needed it, so it must not be necessary."
A simple principle: deem any feature useless unless the opposite is proved. When you say "It's good, I use it, other folks use it", it's not sufficient. So I counter it with "And I don't think so", prompting you to come up with better arguments. If you propose something, be ready to back your idea with solid reasoning -- it's often more important than the implementation itself.
If you feel we have an inert attitude toward your idea, you aren't trying hard enough to make us interested :)
Just to give a little background on who I am and why I'm asking for these things that may seem insignificant for day-to-day use - I'm a AAA game developer. I've worked on very large codebases (Mass Effect, Star Wars: The Old Republic). I've seen projects get HUGE.
Well, now I am interested. I play those AAA games :) That's it, you have found my soft spot :)
Ok, back to the point. Destructors. So far, there was no use for them in classes -- all the resources like file descriptors are acquired and released in wrapped types which actually do have destructors. As @dumblob pointed out, destructors also lead to certain complications in the control flow which can lead to various unpleasant effects.
Again, it doesn't mean I consider destructors useless. But they require additional considerations worth a dedicated issue, with proper evaluation of all pros and cons, as well as alternatives.
A simple principle: deem any feature useless unless the opposite is proved. When you say "It's good, I use it, other folks use it", it's not sufficient. So I counter it with "And I don't think so", prompting you to come up with better arguments. If you propose something, be ready to back your idea with solid reasoning -- it's often more important than the implementation itself.
If you feel we have an inert attitude toward your idea, you aren't trying hard enough to make us interested :)
I can understand that principle in general. That's why I suggested #381 - I felt I wasn't doing a good job of presenting my feature requests and that a more rigid proposal format would improve that for myself and others.
Ok, back to the point. Destructors. So far, there was no use for them in classes -- all the resources like file descriptors are acquired and released in wrapped types which actually do have destructors
By wrapped types, I assume you mean exposed C types.
As @dumblob pointed out, destructors also lead to certain complications in the control flow which can lead to various unpleasant effects.
Again, it doesn't mean I consider destructors useless. But they require additional considerations worth a dedicated issue, with proper evaluation of all pros and cons, as well as alternatives.
Sure, that makes sense, I'll open a separate issue later. I honestly don't expect destructors to be used much in Dao - they're generally not needed in garbage collected languages. But when they're needed, they're usually really needed.
By wrapped types, I assume you mean exposed C types.
I mean non-core types which get into the language via DaoNamespace_WrapType(), like io::Stream or mt::Future.
I honestly don't expect destructors to be used much in Dao - they're generally not needed in garbage collected languages. But when they're needed, they're usually really needed.
I agree that such a need may eventually arise. Some tool to address such cases may be needed, be it a destructor or something a bit different (e.g., as in Ruby).
I've not used Ruby enough to say much about it, but a brief Google search indicates that it basically has the same thing, just with a different interface. I don't like their interface much, but that's subjective, and I can see the merit of being able to define that finalizer outside the object if you want to tie its destruction to some other behavior.
A note for @ShadauxCat, to clarify things. @dumblob and I do not make decisions regarding the language design. Only @daokoder, as the author of the language, bears responsibility for determining the path Dao takes. So if you have the feeling that instead of doing some work on your proposal we just mumble a few comments and subside, it's only because we're not in charge of that.
When it comes to the standard library, you can turn to me directly regarding the modules I wrote myself or actively contributed to.
All things said. @Night-walker precisely described our perception and the situation. Now we have to wait.
A remark about destructors. There is one dangerous case to take into account in the context of GC languages: a destructor may contain something like global_var = self.
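For illustration, a hypothetical sketch of that hazard (Dao classes have no destructors today; the Destructor() hook is invented here, and global_var stands for some module-level variable):

class Handle
{
    # Imagined destructor hook. Storing self somewhere longer-lived while
    # the object is being collected makes it reachable again
    # ("resurrection"), and the GC then has to cope with that somehow.
    routine Destructor()
    {
        global_var = self;   # global_var: some assumed module-level variable
    }
}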
How Go solves this: SetFinalizer().
Go and Ruby seem to take the same approach. The way Python does it starting with 3.4 (just as another point to look at) is that you define a method named __del__() and that gets called once and only once on an object; if a reference is re-obtained during the finalizer, the finalizer won't get called again later. (Full spec: https://www.python.org/dev/peps/pep-0442/ )
I like the method Go and Ruby take better, personally. It allows for objects that are revived during the finalizer to also set a new finalizer if desired.
I like the method Go and Ruby take better, personally. It allows for objects that are revived during the finalizer to also set a new finalizer if desired.
I also find it promising. It allows to dynamically add an ad-hoc finalizer to any object.
It allows to dynamically add an ad-hoc finalizer to any object.
That's also a nice bonus.
What happens if the object already has a finalizer and someone adds an ad-hoc one? Does it replace the existing one or call both?
What happens if the object already has a finalizer and someone adds an ad-hoc one? Does it replace the existing one or call both?
It replaces the former finalizer.
Seems dangerous to add them ad-hoc, then. Seems there should also be a GetFinalizer() so you can call any existing ones as well if you're adding them ad-hoc.
How does it work with inheritance? If my base class has a finalizer and the inherited class also needs one, does the inherited finalizer just need to manually call the base one since it's getting blown away?
Seems dangerous to add them ad-hoc, then.
Making some code run implicitly at a non-determined point in time is dangerous in itself. But static finalizers would be more predictable, of course.
How does it work with inheritance? If my base class has a finalizer and the inherited class also needs one, does the inherited finalizer just need to manually call the base one since it's getting blown away?
It has no connection to inheritance, and not only because Go actually doesn't support inheritance :) Finalizers work on abstract references examined by the GC. They don't mess with language syntax in general or OOP in particular, which makes them more attractive than classic destructors.
This is the scenario where I see inheritance mattering:
class A
{
    routine A()
    {
        # ... Acquire some resources ...
        SetFinalizer(self, Finalizer);
    }

    routine Finalizer()
    {
        # ... free up the resources ...
    }
}

class B : A
{
    routine B() : A()
    {
        # ... Acquire some different resources ...
        SetFinalizer(self, Finalizer); # Uh-oh! A can't finalize now!
    }

    routine Finalizer()
    {
        # ... free up the different resources ...
    }
}
In this situation, constructing an object of type B results in only B's resources being freed, and A's resources end up leaking. My suggested solution:
class B : A
{
    routine B() : A()
    {
        # ... Acquire some different resources ...
        parentFinalizer = GetFinalizer(self);
        SetFinalizer(self, Finalizer); # Safe now: A's finalizer was saved above.
    }

    routine Finalizer()
    {
        # ... free up the different resources ...
        if (parentFinalizer)
        {
            parentFinalizer();
        }
    }

    var parentFinalizer : routine;
}
Destructors implemented in the manner of C++/Python/C#/Java don't have this problem since they always call all the destructors in a tree, but they don't have the benefits of the nice handling of revived objects and the ability to set them ad-hoc. Go doesn't need to worry about this since they don't have inheritance, but since Dao does, it seems worth at least considering.
Destructors implemented in the manner of C++/Python/C#/Java don't have this problem since they always call all the destructors in a tree, but they don't have the benefits of the nice handling of revived objects and the ability to set them ad-hoc. Go doesn't need to worry about this since they don't have inheritance, but since Dao does, it seems worth at least considering.
It doesn't matter much, actually. Go/Ruby-styled solution can achieve the same kind of behavior by stacking finalizers.
Another problem with any kind of finalizers is that they need to be executed in a separate, dedicated thread. Not the thread in which the GC is running, as they can possibly freeze it.
Even more problems (see the second answer).
The more I read about finalizers in GC languages, the less I like the idea of having them in Dao. For freeing OS resources, there are destructors in wrapped types, so there is no problem with clean-up. As of now, I don't see cases where finalizers are so indispensable that their existence becomes justified.
It really seems that the only type of case where finalizers are useful is as a last-resort self-defense (doing what the parent object should have done, or logging after meta-inspection in extremely buggy software, etc.). In my opinion, this is far from convincing enough to support them.
It doesn't matter much, actually. Go/Ruby-styled solution can achieve the same kind of behavior by stacking finalizers.
How do you stack them if calling setFinalizer() deletes the previous one?
Another problem with any kind of finalizers is that they need to be executed in a separate, dedicated thread. Not the thread in which the GC is running, as they can possibly freeze it.
Even more problems (see the second answer).
Most of the languages with problematic finalizers (as far as I know) have problems because they get executed at unpredictable times by the garbage collector, and they have that problem because their garbage collection algorithm is based on reachability analysis rather than reference counts. Dao uses reference counts, so it at least seems simple (though it may not actually be simple; you guys know more about this than me so I'll trust you if you say it's not) to execute a finalizer method at the moment the ref count reaches zero, on the thread where it reaches zero.
I'd then personally be ok with saying, if your object has circular references and gets cleaned up by the garbage collector, finalizer doesn't run. If your object is alive when the program exits, finalizer doesn't run. This is a fair compromise to me; in well-designed code finalizers run at predictable times on predictable threads and resources are freed immediately rather than eventually.
PHP uses a similar method (except that they don't have a traditional garbage collector, and they do execute destructors at shutdown) so it seems feasible. This would be an implementation of destructors, though, rather than finalizers. Destructors in general are far more useful and less buggy when they're possible.
For freeing OS resources, there are destructors in wrapped types, so there is no problem with clean-up.
Is there a way to create a wrapped type from script? If someone needs this functionality and the solution is "implement it in C," that's not really a solution - they're using a scripting language instead of C for a reason (whether that's cross-platform compatibility or quick iteration or what have you).
I will say, though, that a majority of cases I want to use destructors in are handled by code section methods. But to me, the strongest argument in favor of having them (and maybe you guys disagree that this is a strong argument, but it's the reason I've been bringing it up) is that I can't name any object oriented language that I know of that doesn't have them. Logic tells me, if they weren't needed in some capacity, some language in popular use would have gotten rid of them already.
The minority of cases where I want to use destructors that can't be handled by code section methods are all to do with interface design. I have a thing for good interface design; whatever the implementation details are, I always want to have a strong interface design, and to me a good interface design is one that gives its users the fewest ways possible to make errors. Requiring a close() or cleanup() function to me is bad interface design because that's an opportunity for user error. This is the main reason I want destructors; the fewer opportunities Dao has for users to make errors, the easier it will be for users to iterate quickly and make bug-free software, and that means more people use Dao.
I can't comment on whether your argument is strong or not, but I can contradict the following:
Logic tells me, if they weren't needed in some capacity, some language in popular use would have gotten rid of them already.
as you can't get rid of anything of such significance (from the implementation point of view) in a language once you've introduced it (existing code/libraries/production_systems/etc. and backward compatibility are the reasons). That means any wrong decision at the beginning will roughly control the future path of the language. If we omit destructors/finalizers, we can add them in 5 years if there is an utter need for them. Currently the main goal is to not close gates to possible future features and to keep the language as small as feasible.
I misworded my statement.
If destructors and finalizers were not necessary, I believe someone at some point while designing a language would have said "hey, these aren't useful, let's not put them in here." I don't mean to say that they would have gone back and removed them after the fact.
And it's possible someone did say that at some point. But if they did, evidence shows that no one adopted their language.
Well, all the languages we're talking about (and "comparing" Dao to) were designed (more than) a decade ago, when it wasn't yet very clear what is or is not good programming practice. I was actually quite surprised that Go implements finalizers, so I did a bit of searching on how it was in Limbo (an old predecessor of Go) in Inferno and found out that finalizers were present there as well - used for a very few specific cases (which nobody cited, and I didn't have time to dive into the Inferno code myself).
But the lesson learned is that the creators of Go have recently been considering their removal in Go 2 - see the discussion "Deprecating/removing runtime.SetFinalizer?" and the corresponding GitHub issue https://github.com/golang/go/issues/7697 .
In Go 1.x, there are two use cases for finalizers - interfacing with C code and in os.File. In Dao, the C interfacing is not relevant and introducing finalizers with all their significant disadvantages only due to some debugging regarding open fd's is a no go.
Btw, I'm curious whether the Go 2 guys will come up with some lean solution for resource tracking and releasing when interfacing with C, as that seems to me (subjectively) the only obstacle to getting rid of them. One option is the addition of bundled/prepared interfaces/packages for custom reference counting.
Well, all the languages we're talking about (and "comparing" Dao to) were designed (more than) a decade ago, when it wasn't yet very clear what is or is not good programming practice.
Worth mentioning that PHP didn't have destructors until PHP 5 and then felt the need to add them in 2004, almost 10 years after the language was created, so clearly destructors filled a need that was not being filled by other means.
I'm not going to argue if you guys really don't want to implement it, but I also do think that if Dao becomes popular, you'll see a lot more people than me asking for this feature.
Worth mentioning that PHP didn't have...
As @Night-walker somewhere pointed out, PHP is unfortunately not a good example - see PHP: a fractal of bad design (it's not about finalizers, but it might show that "what people want" is not necessarily advantageous for them). Keep in mind also that PHP doesn't have defer, which makes the position of finalizers there stronger.
Regarding implementation, the final decision is upon @daokoder.
Worth mentioning that PHP didn't have destructors until PHP 5 and then felt the need to add them in 2004, almost 10 years after the language was created, so clearly destructors filled a need that was not being filled by other means.
Please, don't cite PHP. It was written by a guy who actually hates programming :)
It doesn't matter much, actually. Go/Ruby-styled solution can achieve the same kind of behavior by stacking finalizers. How do you stack them if calling setFinalizer() deletes the previous one?
If we're talking about our own possible implementation of finalizers, nothing forbids us from making them stack.
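As a sketch of what stacking could mean, reusing the hypothetical SetFinalizer() from the inheritance example above but with append-instead-of-replace semantics (all names are invented); with that change, the naive version of the earlier example would just work:

class A
{
    routine A()
    {
        SetFinalizer(self, FreeA);   # registered first
    }

    routine FreeA()
    {
        # ... release A's resources ...
    }
}

class B : A
{
    routine B() : A()
    {
        SetFinalizer(self, FreeB);   # appended rather than replacing FreeA
    }

    routine FreeB()
    {
        # ... release B's resources ...
    }
}

# With stacking semantics, collecting a B instance runs both FreeB and
# FreeA (most recently added first, say), so neither class has to know
# about the other's finalizer.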
Most of the languages with problematic finalizers (as far as I know) have problems because they get executed at unpredictable times by the garbage collector, and they have that problem because their garbage collection algorithm is based on reachability analysis rather than reference counts. Dao uses reference counts, so it at least seems simple (though it may not actually be simple; you guys know more about this than me so I'll trust you if you say it's not) to execute a finalizer method at the moment the ref count reaches zero, on the thread where it reaches zero.
If Dao used ARC (if that is what you imply), it would not need a GC (like Swift). Apparently, there is a GC; it runs concurrently and, at present, without any execution context of its own (DaoProcess).
Thus the problem is not only in the fact that finalizers in GC languages are called at a non-determined point in time. That's the least of the issues, actually. More significant ones are:
I'd then personally be ok with saying, if your object has circular references and gets cleaned up by the garbage collector, finalizer doesn't run. If your object is alive when the program exits, finalizer doesn't run.
Then finalizers become essentially a matter of possibility rather than guarantee, which argues against putting any essential logic within them.
This is a fair compromise to me; in well-designed code finalizers run at predictable times on predictable threads and resources are freed immediately rather than eventually.
We cannot rely on all the code written in Dao being well-designed. Instead, we should shape the language in such a way as to reduce the possibility of badly designed code appearing. If finalizers can be misused, they will be misused sooner or later. With tragic consequences.
Of course, if it's possible to make finalizers behave predictably like destructors in C++, most problems vanish. But I don't see premises for it at the present moment.
I may be influenced by the negatively inclined articles about finalizers I've been reading recently. But it's worth mentioning that the most notable opinion expressed toward them in all the GC languages I've inspected so far is "don't use them".
Nevertheless, I think it is too early to jump to conclusions, particularly taking into account that many arguments in our discussion are currently based on assumptions.
If Dao used ARC (if that is what you imply), it would not need a GC (like Swift)
If it doesn't, why are there C functions for incrementing and decrementing ref counts? (Not saying this as an argument, just seeking understanding.) My assumption was that the GC existed for solving the problems ref counts can't, like cyclic references. If the ref counts aren't used or aren't reliable, then my view of this situation changes greatly; my thought was that with ref counts destructors could be implemented in a reliable, predictable way (as I said), which would be both simple to implement and very useful.
We cannot rely on all the code written in Dao being well-designed. Instead, we should shape the language in such a way as to reduce the possibility of badly designed code appearing.
This is the same argument I would use as a reason TO have destructors (not finalizers, destructors). Destructors make it harder to design code poorly.
If it doesn't, why are there C functions for incrementing and decrementing ref counts? (Not saying this as an argument, just seeking understanding.) My assumption was that the GC existed for solving the problems ref counts can't, like cyclic references.
Maybe it is indeed so. No way to be certain without @daokoder.
This is the same argument I would use as a reason TO have destructors (not finalizers, destructors). Destructors make it harder to design code poorly.
Destructors (C++, Rust) -- yes, finalizers (Java, C#, Go, Python, Ruby, ...) -- no.
Even if destructors can be implemented in predictable manner, there is still a problem with the API. Executing destructors in appropriate threads means that DaoGC_DecRC() should take DaoProcess as an additional parameter to be able to execute them, which in turn drags DaoProcess in all functions which may result in calling DaoGC_DecRC(), and etc. That's a significant bloating of the interface.
BTW, Ruby doesn't have real finalizers/destructors/whatever. The function which can be registered to run at an object's freeing cannot reference the object. Similar behavior can be achieved in Dao even without modifying the language.
Sorry that I missed so much discussion. The past weeks have been busy for me (job searching and Chinese New Year activities, etc.). It's almost over now.
Ok, back to the point. Destructors. So far, there was no use for them in classes -- all the resources like file descriptors are acquired and released in wrapped types which actually do have destructors. As @dumblob pointed out, destructors also lead to certain complications in the control flow which can lead to various unpleasant effects.
That is exactly the point I wanted to make. All non-memory resources have to be presented in Dao as wrapped C/C++ types, which support destructors, so there is no problem with freeing them automatically if they are wrapped properly.
In principle, destructors can be supported for Dao classes, but their invocation would be unpredictable. And it would surely complicate GC a lot. I don't think the benefits of class destructors can justify the complications.
Just now, I think I may have found a solution for this without changing anything about Dao and the DaoVM. The idea is to define/wrap a base type in C; then any Dao class can be derived from this C type and have one of its special methods (call it a destructor) called by the destructor of the C type!
I haven't thought about it thoroughly, so I am not sure what kind of caveats it might have (I will need to check if it will mess with the GC).
My assumption was that the GC existed for solving the problems ref counts can't, like cyclic references.
Maybe it is indeed so. No way to be certain without @daokoder.
Exactly so.
Even if destructors can be implemented in predictable manner, there is still a problem with the API. Executing destructors in appropriate threads means that DaoGC_DecRC() should take DaoProcess as an additional parameter to be able to execute them, which in turn drags DaoProcess in all functions which may result in calling DaoGC_DecRC(), and etc. That's a significant bloating of the interface.
I don't think it would have to be done that way. A function could be created, GetActiveProcess(), that returns a DaoProcess pointer stored in thread-local storage. When a process begins, it sets that thread-local variable to itself, and when it ends, it sets that thread-local variable to NULL. Unless it's possible for one process to execute another process within itself (I don't know if it is or not), this should be straight-forward; and if that is possible, the thread-local variable would then just become a DaoList and GetActiveProcess() would then return the pointer at the end of that list. (A stack, if you will.)
Just now, I think I may have found a solution for this without changing anything about Dao and the DaoVM. The idea is to define/wrap a base type in C; then any Dao class can be derived from this C type and have one of its special methods (call it a destructor) called by the destructor of the C type!
I'm sorry if this comes off as contrary, but I don't personally think this works as a general-purpose solution. It may work fine for my usage, since I'm embedding Dao within C++ code, but writing native code isn't always an option for everyone. But since the GC does mostly just clean up cyclic references, it should be possible to make destructors predictable for 99% of cases by just calling them when ref count reaches 0. For the remaining 1%, people who need destructors are generally advanced enough that if you tell them "don't create cyclic references in classes with destructors," they can follow that instruction.
By the way, @daokoder, since I've got your ear for the moment, I do want to take a moment to say I think Dao overall is a pretty fantastic language. I hope you don't take my feedback as too negative. I've been playing with it near-daily and there's a lot of things I really like about it. So I hope you don't take my feedback as criticism.
but writing native code isn't always an option for everyone.
No need to write native code by users. Such C base type can be provided by a standard module, and user classes just need to derive from this C type and implement a destructor for the class. So it is a general-purpose solution.
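A sketch of what the user-facing side of that might look like; the module name, the Finalizable base type, and the Destructor() method are all hypothetical placeholders for whatever the standard module would actually provide:

load finalizable   # hypothetical standard module wrapping the C base type

class Logger : Finalizable
{
    routine Logger(path : string)
    {
        # ... open the log file ...
    }

    # Hypothetical special method: invoked by the destructor of the
    # wrapped C base type when the instance is destroyed.
    routine Destructor()
    {
        # ... close the log file ...
    }
}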
By the way, @daokoder, since I've got your ear for the moment, I do want to take a moment to say I think Dao overall is a pretty fantastic language.
Thank you.
I hope you don't take my feedback as too negative. I've been playing with it near-daily and there's a lot of things I really like about it. So I hope you don't take my feedback as criticism.
Of course not, I didn't respond promptly because I really didn't have sufficient time (for the reasons I mentioned in one of my previous posts). I actually appreciate your feedback very much.
No need to write native code by users. Such C base type can be provided by a standard module, and user classes just need to derive from this C type and implement a destructor for the class. So it is a general-purpose solution.
If it's a standard library base class, I suppose that could work. It feels a little unusual to me to have to inherit from something to get a destructor, but it's functional, so I won't argue that point terribly much.
When do destructors get called for native types? Are they called at predictable times (i.e., when the ref count reaches 0) or are they called whenever the GC gets to them?
Playing around seeing if I could figure out whether or not Dao has destructors, I made a class that looks like this:
This works completely fine! Obviously since this does not seem to be the way to define destructors, it doesn't seem to do anything, but it doesn't throw an error.
Until I do this:
Then it throws a fit.
This is really kind of a minor issue, but as a polish item, it seems like this is something the compiler should throw an error about at the time an invalid method is defined rather than when it's called.