Closed: ncannasse closed this issue 8 years ago
Nicolas, maybe you addressed my main point, or missed it? So I will clarify: how can I swap
var window : Dynamic = js.Browser.window;
var rqf : Dynamic = window.requestAnimationFrame ||
    window.webkitRequestAnimationFrame ||
    window.mozRequestAnimationFrame;
// rqf must be invoked with window as its context, or browsers throw "Illegal invocation"
rqf.call(window, run);
For this code
var s = Browser.document.createStyleElement();
s.innerHTML = "@keyframes spin { from { transform:rotate( 0deg ); } to { transform:rotate( 360deg ); } }";
Browser.document.getElementsByTagName("head")[0].appendChild( s );
(cast s).animation = "spin 1s linear infinite";
loop( 60 );
if I found that CSS3 loops were running faster, or even wanted to decide which to use based on browser or device?
Mmm, I am trying to work out whether they are doing the same thing, or whether there are differences in how they would work and in the resources used. Mine is 60fps, but I don't know what frequency the first one runs at. Would it be called lots more and use more resources, because tick is going to get called constantly even if it only decides internally to do stuff every so often, plus you have all that extra Timer stamp code run each time? I am not sure if my point is good or bad, but I suspect it highlights the inflexibility?
So I would call tick myself, but then the saving of having CSS3 do the 60fps calculations in hardware is wasted?
Minor impl note: from HaxeQuake I remember having problems with RAF when the tab is inactive or something, so I stuck with setInterval for that. I guess RAF is designed for rendering, not for general event loop systems.
This seems reasonable - you still need some way of waking the framework up from a deep sleep. Also, I think the js reference implementation should not request an animation frame until there is an event to run - we must be very conscious of allowing a zero-cpu solution. You may also like to use a js timer in the case where the next wake is more than, say, 1 second away - since the library code only needs to get written once, we may as well make it as efficient as possible.
Lists vs arrays is an implementation detail, and I do not care either way as long as it works well. If you have a struct rather than an interface (as you do) it means you can change implementations without anyone caring, which is good. Similarly, combining timers vs not is just a detail? If haxe.Timer extends MainLoopEvent, you may be able to keep the same API (new Timer().run). Although, this may break frameworks that already override Timer and do this kind of stuff themselves.
I guess you would also handle threads by pushing an event, and then poking a wake event? So you would still have "MainLoop.runOnMainThread(Void->Void)"; it would just create a struct and push it on the list, with some kind of "runOnce" to auto-remove it from the list when run.
Sven, the next wake time is just a suggestion - how the framework combines this with its frame rate (eg, only poll the main loop at 60Hz) is up to it - as is the case with js requestAnimationFrame.
I do not like the name "tick", mainly because it makes me feel like you should call it at 60Hz, but I could live with it.
@waneck, I would like to see http (and sockets in general) implemented with a single thread blocking in a multiple select call on platforms that support it. My
We did this already: https://github.com/TiVo/activity ...
... and presented at WWX: https://github.com/kulick/wwx2015/blob/master/WWX2015_ActivityHaxelib.pdf
Feel free to use or not use as much of that as you want. I would love for the program's "main" to be the first "activity" started up by the generated code automatically, with no need to call Activity.run(). Then programs that never care about using concurrent programming can look exactly like they do now, but anyone can create more activities from main if they want to.
I see the activity library as a client of the MainLoop, not the other way around. This allows win32 apps to sit in "GetMessage/MsgWaitForMultipleObjects" and android apps to wait for java events from the view. The key api being something like:
// For frameworks
static function setAsyncWakeCallback(Void->Void);
static function runEventsAndGetSleepTime():Float;
// for clients
static function asyncWake();
static function addEvent(event:MainLoopEvent); // runAndGetSleepTime + remove/cancel
static function changeLockCount(inDelta:Int); // Maybe add a dummy event instead
So, as a client of the MainLoop, the activity library would add a single event to the main loop and do all its soon/later/running inside runAndGetSleepTime, returning how long it wants before being called again. If it spins up a thread to sit on a message queue, it can call asyncWake when it gets something.
We then have one of several frameworks - command line/ browser/nme/openfl/waxe which use the first two functions where appropriate. Doing it this way extends the lovely activity async socket code to all frameworks that want to implement the 2 simple api functions. So I guess the question is, can you work with just runAndGetSleepTime and asyncWake?
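To make the two-sided contract sketched above concrete, here is a rough illustration in JavaScript. Only the function names (setAsyncWakeCallback, runEventsAndGetSleepTime, asyncWake, addEvent) come from the comment above; the internals, the event-object shape, and the use of Infinity for "nothing pending" are assumptions, not a real implementation:

```javascript
// Hypothetical sketch of the two-sided MainLoop API. The framework side
// drives the loop and learns how long it may sleep; the client side
// registers events and pokes the framework awake.
class MainLoop {
  constructor() {
    this.events = [];          // pending client events
    this.wakeCallback = null;  // installed by the framework
  }

  // --- framework side ---
  setAsyncWakeCallback(cb) { this.wakeCallback = cb; }

  // Run every due event once, then report how long the framework may
  // sleep before polling again (Infinity means nothing is pending).
  runEventsAndGetSleepTime(now) {
    for (const e of this.events.slice()) {
      if (!e.done && e.nextRun <= now) e.run(now);
    }
    this.events = this.events.filter(e => !e.done);
    let sleep = Infinity;
    for (const e of this.events) sleep = Math.min(sleep, e.nextRun - now);
    return sleep;
  }

  // --- client side ---
  addEvent(event) {
    this.events.push(event);
    this.asyncWake(); // poke the framework out of a deep sleep
  }

  asyncWake() { if (this.wakeCallback) this.wakeCallback(); }
}
```

A one-shot timer would then just be an event object with a nextRun deadline whose run marks itself done; the activity library could likewise be a single event that does all its own scheduling inside run.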
@Justinfront I think requestAnimationFrame is the standard for JS, which doesn't really need more than a 60FPS refresh rate anyway. We can still swap the implementation, but I think that's the best for now.
I'll try to update my MainLoop proposal with code regarding thread handling and concurrent access, and have a working Neko implementation. Is there something else to be done?
Putting that forward to 3.3.0-RC1, still waiting for @jgranick input on the topic
I'd like to add some words, mostly influenced by Qt and libuv design. I'll be verbose, and start from the beginning, since I feel a lack of consensus.
There's an event loop, as a whole, and there can be many of them. Each loop has some methods; the most important is run, which blocks until there are no more possible events pending, or until stopped; stop, which makes the loop exit at the next tick; and post, which adds an event. By "possible" I mean that there's nothing left to wait for events from, even if there are no pending events right now.
Each thread can run a different event loop, and nested loops are prohibited (maybe they should not be, but I see no point in a blocking call to a nested loop's run, instead of posting the same events on your own loop). Communication with a loop from other threads should go through message queues. If one has rendering and network IO, one may want to split them into separate threads, posting events to each other.
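A minimal sketch of the run/stop/post loop shape described above, in JavaScript for illustration. This drains synchronously; a real loop would also block while an event source could still produce events, which is omitted here:

```javascript
// Minimal event loop sketch: run drains the queue until it is empty or
// stop() has been requested; post adds an event callback.
class EventLoop {
  constructor() {
    this.queue = [];
    this.stopped = false;
  }

  post(callback) { this.queue.push(callback); }

  stop() { this.stopped = true; } // exit at the next tick

  // Run until there are no more pending events, or until stopped.
  run() {
    while (!this.stopped && this.queue.length > 0) {
      const cb = this.queue.shift();
      cb(); // a callback may itself post more events
    }
  }
}
```

Callbacks posted from inside a running callback are processed in the same run, which is what keeps the loop alive as long as anything can still produce events.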
There's a default event loop; for flash / js in browser / node.js / etc. it's the native loop, while for other platforms it can be statically initialized in a library.
There are event sources and event listeners. An event source is not an object in the OOP sense; it's a more implicit thing, like "event loop is idle", "new frame is requested" or "socket has new data to read". While the code that pushes events to the loop can be external, some of these sources should be considered in the implementation. Binding listeners to events is completely external to the loop; it cares only about receiving an event with a callback and calling it.
There are different kinds of events that may be handled differently by the loop.
Idle events should run on the next loop tick, keeping the CPU hot and preventing the loop from exiting, but still allowing work to be done asynchronously.
Timer events should run after the specified time has passed, so other activity may interleave with them.
IO, drawing, or any other underlying system facility depends heavily on that system. E.g. epoll_wait may accept a timeout and block for a limited amount of time.
If you have nothing to do but IO, you can wait for some time. If you have a pending timer, then you should not wait longer than that. If there are idle callbacks, or other stuff, you should only check for pending events, setting the timeout to 0. If there are only timers, there's no need to waste CPU; let it (nano)sleep. Posting an event should, of course, wake the loop.
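The timeout selection described above can be sketched as a single function (names are assumptions; Infinity stands in for "block indefinitely" in an epoll_wait-style blocking wait):

```javascript
// Choose the timeout to pass to the blocking wait:
//   0        -> idle callbacks pending, only check for already-ready events
//   t > 0    -> sleep until the nearest timer is due
//   Infinity -> nothing but IO to wait for, block indefinitely
function pollTimeout(hasIdleCallbacks, timerDeadlines, now) {
  if (hasIdleCallbacks) return 0;
  if (timerDeadlines.length === 0) return Infinity;
  const next = Math.min(...timerDeadlines);       // nearest pending timer
  return Math.max(0, next - now);                 // overdue timers mean 0
}
```

Posting an event from another thread would additionally interrupt the wait, which is the "PostEvent should wake the loop" rule above.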
All of that is run implementation detail that seems important to me. Some of these details may be pushed away, e.g. to a different thread: a socket-polling thread that blocks on epoll and posts each socket event back to the loop thread, or a timer thread that just sleeps the right amount of time, wakes on a new timer, and posts events back. In that case loop implementations may be pretty short: wait (could be a busy wait on a spinlock, could be a condition variable) on a concurrent queue, run the callback, loop. I don't know how that would affect performance.
A similar idea can be used with flash/js, but instead of a thread one would listen on some native events and post them to the implemented loop, from where they would be called. So the implemented loop would act as a hub, but maybe that's useless overhead.
The API would be something like TimerEventSource.delay(eventLoop, callback, delayAmount); which pushes a new timer to the timer thread, and later from that thread a new event is posted to the given loop. Or socket.readable(eventLoop, callback); which binds a callback to state in the IO thread, and on a state change the thread posts an event. The eventLoop argument could be optional, with the default loop used if none is passed. This allows both custom object tagging in event source implementations and a common event loop API.
With all that in mind, I think that Haxe could provide (in the stdlib or not) an API for event loops and a test suite to check implementation behavior. Maybe default almost-noop implementations for platforms that have their own loop, and even a default proper implementation for ones that do not. The API should cover immediate events - there's no need to wait, e.g. for an animation frame, just to make an async API async, like then on an already resolved Promise. The API should cover timer events, so implementations need not waste CPU in some cases. The API may cover other events, but that's too abstract, and I don't see a clear way to put RAF, XHR, WM_PAINT, epoll, IOCP and all the other event sources into a single cross-platform API.
That's much like @back2dos's Scheduler proposal, but covers more of the different types of callbacks that may influence an efficient loop implementation.
Regarding examples:
I see @ncannasse more concerned with implementation, while what a library/framework developer wants for library-loop separation is an interface only. @hughsando's is more API related, but getNextWake is clearly tied to implementation, since there's no such thing in the browser. Also, Scheduler is all static, so only one event loop is allowed per process, but that's easy to fix.
Lime does not use Haxe callbacks in its main loop, in order to get smoother frame times all of the timing is handled on the native side.
I am struggling to understand the use-case?
Of the platforms that Lime supports, Emscripten, Flash and HTML5 are the most "exotic" when it comes to a main loop, but on none of them does Lime initiate a loop immediately.
If there is a favorable reason for using the activity haxelib, for example, I would be happy to support an #if activity conditional in the setup code to work more favorably, similar to what we do for Munit.
Is the desire to provide a main loop implementation for Haxe projects that do not use a framework? Is it to add code before a main loop is started? Is it to get update/tick events during the loop? Is it to intercept events? Perhaps if I understand the use-case better
It is important not to choose "the best" way of doing a generic event loop (post-to-multi-queue, activity-lib) but to provide a means by which ANY event loop style can interact with ANY framework. So there are 2 parts: 1. what does a framework need to do to support generic async-ish libraries, and 2. what does an async library need to do to be supported by a framework. To be explicit, I would say: frameworks: nme, lime, waxe, qt, sdl-based (snow, custom), flash, js, native win32, command-line*; async-ish libraries (MainLoopClient): timer, http, socket io, audio player, libuv, activity, file system watcher, MainLoopThread
Now I think it is important to split command-line into the framework class, and let the main loop library provide an optional default implementation here. But this is different - the other frameworks will not want to use any code in this class. I think this is where some confusion with the "MainLoop" comes from. There is "LibMainLoop", which is what I want to talk about, and is for "integrating with the main loop", and "CommandLineLoop", which is just one implementation and will not be used at all by nme.
Anything that does not involve the main thread can be handled differently - it does not need any framework support, so we should ignore it for now. Although, I will add that some thread activity may keep the command-line framework alive, so adding some kind of dummy client seemed like a good idea.
LibMainLoop should not block at any time - you will mess with the frameworks timing.
So what does a framework need to do to support an async library?
What does a MainLoopClient need to do?
The command-line framework would block until there are no more clients, and the implementation details can be separate.
I would see something like 7 public functions corresponding to these 7 tasks in the API.
So, to answer Joshua's question: it's a way for any MainLoopClient to interact with any framework. Let's take haxe.http as an example. This class would use a MainLoopClient to request a framework callback to dispatch its data/error info. This will then work on command line, nme, waxe, lime, ... Same goes for a hypothetical file-system-watcher class. This does not need to be tied to lime; it could live inside the command-line framework as part of, say, a webserver that also receives http requests.
I think you will agree this is useful. And again to re-iterate, this LibMainLoop should not enter any "while(true){ }" loop, except in the command-line framework class, otherwise it is unacceptable for nme and waxe and I will not use it.
I do not think the 3 framework functions I have proposed are too hard. 1 is just a generalization of the timer class. 2 (async callback) is doable and desirable, if not already done. 3 could probably be ignored if you want. The client functions are probably more for client writers to discuss, since frameworks do not need to know about them. But some kind of set of MainLoopClient classes would seem a logical implementation.
Hugh
Thinking of Haxe core classes, it would be nice to be able to somehow provide Haxe with the current milliseconds and to have a callback to enable that code to work.
Other than the milliseconds, this seems similar to the requestAnimationFrame API; it assumes no knowledge of the underlying implementation, but allows that "heartbeat".
Is this heartbeat enough for most uses?
EDIT:
At this level, it seems more like a "sys framework" implementation, such as getting an update callback (as above), or perhaps handling operations such as opening files. Android would benefit if we could hook in our own file I/O without overriding all the core classes.
I think "heartbeat" necessarily involves continuous use of CPU, and we definitely need the api to allow for a "zero cpu" solution. That is not to say that a framework may not instead choose to poll on a heartbeat - eg, the js implementation may simply poll on requestAnimationFrame, and that is ok. In fact, since there is no owner of the js loop, a js framework may not need to actually do anything - eg, an async https request will still work on lime+js even if lime does not know anything about it. This is like how the haxe.Timer class just works on flash. But waxe will want to sit in a "GetMessage" loop on windows if nothing is pending, and not have to set a 10ms timer that does nothing. So a lime heartbeat will be all that is required to support this on some level. If you want a more efficient version, you may choose to implement "asyncWake" at a later date. The MainLoopLib can support both modes easily enough - just poll the library irrespective of the requested interval or any async wakes.
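The "poll on a heartbeat" mode could look roughly like this. The scheduler is injected so that in a browser it could be window.requestAnimationFrame; runEvents stands for the main-loop library's "run pending events" entry point. Both names are assumptions for illustration:

```javascript
// Heartbeat polling: call into the main-loop library on every frame,
// ignoring any requested sleep interval. The library decides internally
// what is actually due.
function startHeartbeatPolling(runEvents, schedule) {
  let running = true;
  function tick() {
    if (!running) return;
    runEvents();       // library runs whatever is due right now
    schedule(tick);    // e.g. window.requestAnimationFrame in a browser
  }
  schedule(tick);
  return () => { running = false; }; // stop handle
}
```

This is the simple mode; a framework wanting zero CPU when idle would instead honour the returned sleep time and implement asyncWake.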
I see this as a standard for how async libraries can interact with frameworks. If the haxe classes (eg, http, timer) are moved over to only use this, I would gladly ditch the nme timer implementation.
As for sys.io.File.getContents on android, I think it is easy enough to override this with your own class path. The haxe.io.Input classes are well-defined enough that a user of your library need never know. I think this is a different story, but maybe what you are talking about is something like haxe.Framework.getContents, which by default calls the sys.io version, but via some registration mechanism allows a framework to present some kind of virtual file system. I like this idea as a separate initiative.
So "keep me posted" and "call me, maybe?" callbacks. If this were formal, I would prefer a core implementation class such as sys._framework.SysTimer or something that lives behind the user class, hidden from the user. If it were meant to be overridden, we could implement these functions without overriding the whole user classes as we do now.
Ok, I've setup a first implementation proposal, please review it there: https://github.com/HaxeFoundation/haxe/pull/5017
Regarding the MainLoop API:
If MainEvent/EntryPoint are referenced, an EntryPoint.run() call will be added immediately after the Main.main() by the Haxe compiler. We could improve this by checking if MainEvent survives DCE.
Looking forward to your feedback/patches; I'll merge for the 3.3 RC in two days.
I still don't think this has to be in the standard library... especially not while it's still being discussed.
@Simn it is absolutely necessary to have this in standard library since it requires both compiler support and standardized API. We can still provide updates to the API on haxelib between 3.3 and 3.4 if we really need to introduce additional features in the meanwhile, but its place is in haxe std.
These std-specific hacks in the compiler sources are terrible. I want to get rid of them and you add more instead... This should be designed properly and come with a suitable interface, not be hardcoded to a specific type.
Standardization can be achieved with a haxelib as well. Whether or not something is in the standard library or a HF-maintained haxelib is just a matter of distribution.
@Simn we can always improve the implementation details afterwards, I think we need to advance first and get things done. A lot of people (including me) prefer to keep the number of haxelib dependencies to the minimum, having things in haxe std helps there.
I think a lot more people prefer to be able to receive updates and bugfixes easily without having to wait a year for a new Haxe release. It's one thing to add a proven concept/implementation to the standard library, but what we have here is still very unstable and experimental. The natural approach is to work on this as a haxelib, then maybe add it to the standard library once it has stabilized.
I tend to agree with Simn's caution on this; the wrong JavaScript implementation could be quite harmful to Haxe, though I suspect for C++ it's less likely to be quite so important to get perfect. My understanding is this change would affect pretty much every codebase; it's not something that a user can easily avoid under the current proposal, and it runs every frame? You invented haxelibs for these types of decisions!
@Justinfront if you don't use MainLoop, it will not affect your JS code.
It is interesting to see more of what you have in mind. Would it be possible to detect if a user has opted into using a MainLoop or not?
Similar to the EntryPoint.run that you have set up (but after creating a Lime Application populated with the user's configuration), the application starts and ends with var exitCode = application.exec ();
However (and this is the key difference) all events from then on are signals, such as:
application.onUpdate.add (myUpdateCallback);
application.window.onResize.add (myResizeCallback);
There is no loop to intercept. I would prefer the native loop to remain in native code, at least for default users. We could make concessions to push more control into Haxe, but I would prefer it be opt-in rather than required by default.
This is why I was wondering about use cases, I was not sure when a user would need to control something lower-level than this :smile:
Would it be possible to detect if a user has opted into using a MainLoop or not?
Yes, if we detect that MainLoop is not eliminated by DCE, then it means it's used by the application
I would prefer the native loop to remain in native code, at least for default users.
It is fine, you can simply add your definition of EventLoop in OpenFL, with an empty run(), and have your OpenFL native loop call EventLoop.processEvents() (or something else) as you wish.
I think this is very good. I would say that "runInMainThread" should also call wakeup at the end. Almost all of the "EntryPoint" code I would reuse in NME and waxe. The only function I would want to replace is "wakeup", so an architecture that did not only use class-path-override would be good. Or make the api simply use an instance so I can override this one function, or use a dynamic function for wakeup, or whatever. Class-path-override is nice because it will work with old versions (the class will never be missing), but some split here with a "default implementation" would be good for code-reuse. You get the idea.
I am now thinking an std lib would be good. This prevents multiple standards - a very serious issue that should not be underestimated. And I am quite happy for hxcpp command-line and nme and waxe to use this. Especially if then we have and haxe.http.async and similar socket apis. A 20-line webserver written in haxe would be a compelling use-case.
As for compiler hacks, read/write access to the type of the "main" class should allow an "onGenerate" macro to solve this issue nicely and be useful in a broader context (eg, a dummy main that adds an instance to the stage in nme's flash-like api)
@hughsando thanks Hugh, I have added the missing wakeup() and also only wakeup() when we terminate the last thread
It's a bit hard to use an instance, because it needs to be created before EntryPoint.run is called. For example we could move all the implementation from EntryPoint to MainLoop (except run() of course), then simply have:
class EntryPoint {
    public static var loop : MainLoop = new MainLoop();
    public static function run() { .... loop.processEvents() .... }
}
So your custom implementation would replace it with:
class MyCustomLoop extends MainLoop {
    // overrides
}
class EntryPoint {
    public static var loop : MainLoop = new MyCustomLoop();
    public static function run() { .... loop.processEvents() .... }
}
What do you think about it?
Also, I would like to hear the comments of @underscorediscovery @waneck @RobDangerous @jgranick on the proposed implementation. We don't have much time before the 3.3 RC, and we need to fix any problems before 3.3 final (due next month)
Personally I'm quite satisfied with it since it seems to cover all the expressed needs of this thread.
Fine with me. Now please add some insane feature so we can run code in worker threads in js and in regular threads in cpp in a portable way.
I have made another update: nextRun is no longer Null, and I sort the events by priority+nextRun before processing them, which guarantees that if you start a Timer of 100ms and another of 200ms, then block 300ms doing some computation, the 100ms timer will run before the 200ms one.
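That ordering guarantee can be sketched as a comparator (the field names, and the convention that a lower priority number sorts first, are assumptions for illustration):

```javascript
// Sort due events by priority first, then by their scheduled nextRun
// time, so that after a long stall the overdue timers still fire in
// the order they were scheduled to run.
function sortDueEvents(events) {
  return events.slice().sort((a, b) =>
    a.priority !== b.priority
      ? a.priority - b.priority   // lower number sorts first (assumed)
      : a.nextRun - b.nextRun);   // earlier deadline runs first
}
```

When the loop wakes at t = 300ms, both the 100ms and the 200ms timers are overdue, but sorting by nextRun still runs the 100ms one first.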
@RobDangerous sure, that could be interesting. I actually have something like that for Flash workers, but it's a bit hard to pass function pointers; maybe with a bit of macro help, but then you lose your context (this, captured locals, etc.). It's still an open problem to do it gracefully, because you cannot share values. I guess a WorkerJob class (only serializing an initial state) plus inter-worker communication is the best; it might indeed be built on top of MainLoop for notifications.
Ok, last call for comments before we merge. Speak now if you have any concern that might prevent you integrating MainLoop in your framework :)
My concern is that your PR currently fails on several targets.
Hold on, this is not a current build.
Yes, looks good. Rethinking how NME might implement this, I can ignore the thread counting, so there is probably not that much code to share. The only thing - what did you think of the idea of making a generic replace-main-class macro, rather than the EntryPoint-specific change?
There is no reason to limit this to one global class object - please take a look at libevent, libev, libuv, boost::asio etc. - all of them permit several independent loops, none of them force the user to use one global "instance" for all event-y stuff. Especially since this is supposed to be a standard, let's get that right, please. Which has already been noted by @waneck, @mcheshkov, @Simn from a proper-design perspective, maybe more, and as already acknowledged by @ncannasse in reply to @waneck's comment on this.
The problem I see with this is that there's no good way on HOW people are going to override this. I can imagine that some library could just add its own file that takes precedence over the std implementation, but this starts to get bad if we have more than one library that does that
Uhm, ok. I apologize for my prior tone. I do not wish to discuss the particular issues of this proposal, as there is certainly not enough time, and it also seems that you have made up your mind anyway and can't be dissuaded by words of caution, which is what upset me in the first place. I think the way you are trying to enforce it is a reckless abuse of your power that will most certainly not lead to a standard the community will be willing or able to adopt, but what's most important is that it is not at all necessary.
I have made a poc (https://github.com/back2dos/entrypoint) that provides the integration you seem to be wanting: if entrypoint.EntryPoint.run is called explicitly, then no auto-wrapping occurs; otherwise it does. All without relying on DCE, hacking stuff into the compiler or whatever. As always, there is room for improvement, but it's certainly not worse than the changes to the compiler that @Simn has criticized.
FWIW, here is the test: https://github.com/back2dos/entrypoint/blob/master/tests/RunTests.hx#L3
Output with -D manual: RunTests.hx:21: [main,before,manual,after]
Output without: RunTests.hx:21: [before,main,after]
With this, I once again urge you to let this mature in a separate library. If you need help with that, I will gladly provide it, but your current line of action is downright wrong and I protest. Make of that what you will. That's all I have left to say on the matter in this poorly chosen context.
I think any method is going to have to resolve something. Ultimately exactly one bit of code is going to "sleep" until the next event on the main thread.
The person who overrides is the framework - either the one who creates the window, or the one who creates the command line (command line probably needs no work, although I'm not sure how this works for Node.js). The override system can work - this is actually already handled, because this is exactly the case with overriding haxe.Timer. waxe and nme both do this. Waxe wins by design when both are present, since it "owns" the main loop (it created the window).
The other libraries mentioned would be clients of the main loop. If you could act as either, maybe you check to see if the developer has given priority to the other one, and then add yourself as a client instead. (Again, roughly how waxe/nme interact, although the nme build tool arranges everything.)
So the libuv or whatever would be a client of the main loop. Nme would own the main loop, and both window and async events can happily co-exist.
@hughsando but how would this priority system work? Will it depend on the user including libs in a specific order? I can't think of a way that this would not be fragile with the current implementation
I think booting off a macro is preferable to a compiler change. But I think initiating by simply using a class (EntryPoint) (perhaps indirectly - eg, haxe.http) is better than doing it via a "-lib entrypoint".
If you are writing a framework that must play nice with another framework, the authors will need to communicate. But let's be practical here: exactly one thing does this that I am aware of. And it is still clear - the one who creates the window owns the event loop. So if you are using a framework's tool to open a window, the framework's tool can ensure that it is in the right position on the class path. Do you have some thoughts on frameworks you have written? E.g. game engine integration: the game creates the windows, so it would own the loop. If you wanted NME to do some UI rendering over the top of the game's 3D display, the game engine would need to coordinate this, and it should be easy to ensure it owned the loop. NME would be running in "client mode", where events are pumped in and NME does not block.
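A minimal sketch of the "client mode" described above, where the loop owner pumps the embedded UI library once per frame. All names here are invented for illustration; this is not an actual NME API:

```haxe
class UiLayer {
    var pending:Array<Void->Void> = [];
    public function new() {}
    // Called from anywhere in UI code to schedule work.
    public function queue(e:Void->Void) pending.push(e);
    // Called once per frame by the loop owner; never blocks.
    public function pump() {
        var todo = pending;
        pending = [];
        for (e in todo) e();
    }
}

class Game {
    static function main() {
        var ui = new UiLayer();
        ui.queue(function() trace("draw UI overlay"));
        // The game owns the event loop: render the 3D scene,
        // then give the embedded UI layer a chance to run.
        for (frame in 0...1) { // stands in for the real frame loop
            renderScene();
            ui.pump();
        }
    }
    static function renderScene() {}
}
```

The key property is that `pump()` is non-blocking, so ownership of the loop stays with the caller.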
To follow up on my previous comment: without the option for several event loops, there's no way to avoid using expensive locking mechanisms like mutexes, even if the application itself has no need for them at all. Even when several threads are used, it depends on the application whether there's any need for locks, whether e.g. inter-thread communication with lock/wait-free queues can be used instead, etc.
The currently proposed design makes that impossible simply by forcing everything to use a single global event loop.
Examples are: stream processors that only share immutable state and hence have no need for locking mechanisms; any sort of server application where each connection/session/room maintains its own state independently from others; any sort of application where different sets of events have different scheduling/prioritizing requirements so that one set can simply be processed in FIFO order while others require ordering, etc.
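To make the lock/wait-free queue argument above concrete, here is a minimal single-producer/single-consumer ring buffer: with exactly one writer thread and one reader thread, each index field is only ever written by one side, so no mutex is needed for inter-thread communication. This is an illustrative sketch only (memory-barrier concerns are not shown), not part of any proposed API:

```haxe
class SpscQueue<T> {
    var buf:haxe.ds.Vector<T>;
    var head:Int = 0; // written by the consumer thread only
    var tail:Int = 0; // written by the producer thread only

    public function new(size:Int) buf = new haxe.ds.Vector(size);

    // Producer side: returns false if the queue is full.
    public function push(v:T):Bool {
        var next = (tail + 1) % buf.length;
        if (next == head) return false;
        buf[tail] = v;
        tail = next;
        return true;
    }

    // Consumer side: returns null if the queue is empty.
    public function pop():Null<T> {
        if (head == tail) return null;
        var v = buf[head];
        head = (head + 1) % buf.length;
        return v;
    }
}
```

A design forced through one global locked event loop cannot take advantage of structures like this.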
@hughsando's recent comment makes me wonder whether I'm getting something wrong here: is this whole issue only about event loops involving GUIs? Not everything that needs an event loop has a GUI. Rather, GUIs are a fairly special case where the above-mentioned scenarios with many basically equal, parallel instances of independent event loops rarely come up (it seems).
@hughsando Also, if this is supposed to be an abstraction over event loops, wouldn't it make sense to allow "clients" (like libuv in your scenario) to expose the same interface this abstraction provides? Otherwise users would have to program against libuv's interface instead, losing the benefit of this abstraction.
Regarding the various comments:
a) if standardizing things to make sure people can build things in a crossplatform manner, whatever the framework they are compiling to, is called "abuse of power", then I should maybe abuse that power more often so we don't end up with tens of redefinitions of something such as haxe.Timer. For instance we standardized Bytes at some point in Haxe history and I don't think anybody complained about it.
b) we don't need several frameworks to have conflicting implementations. As Hugh stated, the window owner is responsible for overriding the EntryPoint definition with its own. Many frameworks already do that for haxe.Timer, so there is nothing new here.
c) having the EntryPoint called by the compiler is a temporary measure. Ideally we would like to have some macro code doing that, without requiring the end-user to make an explicit call. EntryPoint.run() will either return immediately on some platforms or block forever on others, making it in all cases necessary to call it at the end of main() anyway. We already have some kind of compiler-specific logic for several other APIs, such as haxe.Resource, so again nothing new here
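Based on point c), the explicit (non-automated) form would presumably look like the sketch below; the semantics in the comments follow the description above:

```haxe
class Main {
    static function main() {
        // Schedule some work first...
        haxe.Timer.delay(function() trace("later"), 100);
        // ...then hand control to the event loop. Per the description above,
        // this either returns immediately (platforms with a native event
        // loop, e.g. JS) or blocks until all pending events are processed.
        haxe.EntryPoint.run();
    }
}
```

The argument for automation is precisely that forgetting this one trailing call makes the app exit immediately on the blocking platforms.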
d) regarding several event loops, this kind of mechanism can and should be built on top of MainLoop to be crossplatform, or simply use the native platform APIs to do something else before returning from main() if you prefer.
e) regarding mutexes, etc.: there's not much locking involved, and we could surely optimize for the case where no thread has been created yet. That's not performance critical at the moment, so it's fine. Also, using the MainLoop does not prevent you from using the platform-specific API directly.
without requiring the end-user to make an explicit call
What would be so bad about that?
@Simn what is so bad about automating it ?
Automation doesn't require a call to be put at the end of main(), which is the only actual place it would belong. Everybody would have to put it there anyway, leading to many newcomers not doing it and complaining that their app exits immediately.
I'm not saying that we can't automate it eventually, I'd just prefer not to make some compiler hack for this if we can get started without it. That would also remove the necessity to have this in std right away and give us some more time to develop.
Yes, but it pushes back the standardization to 3.4, since nobody will be able to rely on it, and I don't see framework owners adding a dependency to a library that is meant to disappear. I prefer that we do something for 3.3, which can be improved later. Again, if you have specific remarks regarding the interface, please tell me.
I don't much want to hear that we should do X better if only we had Y, because that's not being constructive (unless you provide a PR for Y of course :) )
i don't see framework owners adding a dependency to a library that is meant to disappear
What do you base that on? There are at least two framework owners (Juraj and Sven) in this thread who advocated putting it in a haxelib. You seem to be the only one who's adamant about adding it to std right now (though I'm not entirely sure what Hugh's position on that is).
I don't much want to hear that we should do X better if only we had Y, because that's not being constructive (unless you provide a PR for Y of course :) )
How am I supposed to provide a PR for something I don't want to be added?
@ncannasse
Regarding d): how does an API like haxe.MainLoop.hasEvents(), haxe.MainLoop.add(..), haxe.MainLoop.addThread(..), etc. — which, correct me if I'm wrong, is the current implementation — allow for having several event loops? Will it magically choose the one the user has in mind?
Regarding e) the performance critical issues directly follow from having only one central event loop. Many applications can be designed so that there's no need for locks at all, and certainly not for simply adding an event, regardless of the number of threads. The current API enforces locks for all but the purely single-threaded case.
What I'm talking about is having e.g. several threads each with an event loop completely independent from the other threads. If I'm not mistaken I remember an instance where you yourself used that design in Tora (isn't that right?). The proposed API doesn't permit that. That's simply not acceptable.
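The per-thread design argued for above might be sketched as follows. The `SessionLoop` class and the server shape are entirely invented for illustration (this is not the proposed haxe.MainLoop API); the thread API shown is the neko.vm.Thread style available at the time:

```haxe
import neko.vm.Thread;

// One loop instance per thread: since each queue is private to its owning
// thread, adding and running events needs no lock at all.
class SessionLoop {
    var events:Array<Void->Void> = [];
    public function new() {}
    public function add(e:Void->Void) events.push(e); // owning thread only
    public function run() while (events.length > 0) events.shift()();
}

class Server {
    static function main() {
        // e.g. one worker per connection/session/room, each fully independent
        for (i in 0...4)
            Thread.create(function() {
                var loop = new SessionLoop(); // private to this thread
                loop.add(function() trace("session work"));
                loop.run();
            });
    }
}
```

With a single static MainLoop, all of these workers would instead contend on one shared, mutex-protected event list.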
Each platform comes with its own “main event loop” implementation, making it hard to build crossplatform API over it.
We should provide a base abstract loop definition that can help with this.
Discussion opened on the topic.