zostay opened 9 years ago
I have been working on an HTTP parser that generates streams of events from an HTTP/1.0 or HTTP/1.1 connection to explore this. Unfortunately, it is very difficult to test this code because Perl 6 threads that deal with I/O have proven to be pretty unreliable for me. I'm working with #perl6 to try to get these issues resolved.
I ran into a problem when using a number of IO::Notification.watch-path supplies in a long-running process, but from what I gathered on #perl6 it might be a general problem with supplies in long-running processes. It seems that there might be a bug in the garbage collector that can cause the program to stall without crashing. jnthn was looking into it but has had limited time recently.
My particular problem is related to RT #126774, "Code aborts while working with supplies." It is not a long-running process. It does not fail in the same spot each time, but it very consistently fails very quickly. I'm pretty well convinced it is a problem in the way libuv garbage collects async I/O threads. Since we're using a very old libuv in MoarVM (2 years old IIRC), we really need to try to update it. I've been pushing for that, but I'm not in #perl6 very often to advocate.
I should say, it's either a GC bug in libuv or a problem with the way memory is allocated that shows up via action-at-a-distance. Hard to say. I'm not familiar enough with the guts of MoarVM or libuv to be efficient at nailing such problems down.
I have worked out the issue. The Perl 6 bug still exists, but I no longer need it to be fixed to keep moving, now that I've figured out what was going wrong. HT: @hoelzro for figuring out the issue and giving me the clues on how to pin down what was actually wrong in my code.
As such, I am in the process of getting Smack up and running again, so I should be able to get a proof of concept for the concurrency stuff going that lets me stress test things and look for weaknesses.
Okay, so I've been playing with this in a tiny implementation which strips away everything but the async bits so I can focus on just the consequences of concurrency, and my first conclusion is this: live supplies are trouble.
If we imagine an application like the following, writing middleware for it is very, very tricky:
sub app(%env) {
    start {
        my Supplier $content .= new;
        start {
            # Wait for the server to be ready, then drive the live supply.
            await %env<p6w.ready>;
            $content.emit('one');
            $content.emit('two');
            $content.emit('three');
            $content.done;
        }
        200, [ Content-Type => 'text/plain' ], $content.Supply
    }
}
Imagine applying ContentLength middleware, which taps the body, measures the length of it in bytes, adds that to the Content-Length header, and then returns the (now buffered) body as a new supply.
In order for that to work, we now have to replace p6w.ready
with a new vowed promise that is kept by the middleware. We need to scan the body within the middleware's own start block and/or .then() method, we need to wait for the inner supply to finish, and then return the entire response. I believe it should be doable, but getting all the Promises and such in the correct order is very delicate. I have not gotten it to work in my golfed-down mockup. At the very least, there will need to a be a subroutine or something that we can use to prevent the middleware authors from having to delicate align the stars to succeed here. However, I'm starting to doubt the utility of permitting live supplies. It will probably just be easier to say the spec requires on-demand supplies and the behavior when live supplies are used is undefined.
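For illustration, here is a rough sketch of how such a ContentLength middleware might be wired up under these constraints. It assumes the response Promise is kept with (status, headers, body-supply) and that the chunks are strings; the particular plumbing here (buffering through a Channel, keeping our own vowed promise) is just one way to line the pieces up, not a settled recipe.

sub content-length-mw(%env) {
    # Hand the app our own promise so the middleware, not the server,
    # decides when the app may start emitting the body.
    my $ready = Promise.new;
    my $vow   = $ready.vow;
    %env<p6w.ready> = $ready;
    my $app-promise = callsame;

    start {
        my ($status, $headers, $body) = await $app-promise;

        # Tap the body before keeping the vow, or a live supply's early
        # emits would be lost; the Channel buffers everything for us.
        my $buffered = $body.Channel;
        $vow.keep(True);
        my @chunks = $buffered.list;   # blocks until the body is done

        my $length = [+] @chunks.map(*.encode.bytes);
        $status,
        [ |@$headers, Content-Length => $length ],
        Supply.from-list(@chunks);
    }
}
&app.wrap(&content-length-mw);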
On the other hand, on-demand supplies are not problem-free. Consider:
my $side-effect = 0;
sub app(%env) {
    start {
        200, [ Content-Type => 'text/plain' ], supply {
            emit 'one';
            emit 'two';
            emit 'three';
            $side-effect++;
        }
    }
}
On-demand supplies are basically subroutines that run every time they are tapped. That means that it is very easy for middleware to be written badly like this:
sub broke-mw(%env) {
    callsame.then({
        my $res := .result;
        with $res[2] -> Supply() $content {
            note "FIRST CHUNK ", await $content;
        }
        $res;
    });
}
&app.wrap(&broke-mw);
If we run this app, the $side-effect will be 2 by the end because the body was tapped twice and its supply block ran both times. The correct implementation of the middleware is something like this:
sub fixed-mw(%env) {
    callsame.then({
        my $res := .result;
        |$res[0, 1], supply {
            with $res[2] -> Supply() $content {
                my $first = True;
                my $buffer = $content.list.Supply;
                whenever $buffer -> $chunk {
                    note "FIRST CHUNK ", $chunk if $first--;
                    emit $chunk;
                }
            }
        }
    });
}
&app.wrap(&fixed-mw);
That $buffer in the fixed version is pretty important for avoiding multiplied application (or nested middleware) side effects. It also means we could easily end up with multiple buffers, one for each middleware in use that needs to read the content ahead of time. That's a memory explosion we probably want to avoid, but then the performance-savvy devop is just going to avoid that kind of middleware whenever possible anyway, because even in a synchronous PSGI kind of implementation, such body-buffering middleware is costly to performance.
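One way to keep that cost bounded, sketched here purely as an illustration, would be to buffer the body once and share it between middleware through an invented environment key (the p6wx.body.buffer name below is hypothetical, not anything in the spec):

sub buffered-body(%env, Supply $content) {
    # Reify the body once (this blocks until the supply is done) and cache
    # the resulting chunk list so later middleware reuses the same buffer.
    %env<p6wx.body.buffer> //= eager $content.list;
    # Hand back a fresh on-demand supply over the shared buffer.
    %env<p6wx.body.buffer>.Supply
}

Each middleware that needs to look at the content would call this instead of building its own .list.Supply copy, so only the first caller pays the buffering cost.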
So the verdict seems to be that the pie-in-the-sky, untested thinking I've put into this standard so far works great for apps, is mostly reasonable for servers, but makes middleware authoring tricky. The solution is probably mostly the same as was needed for PSGI: provide better tooling so the tricky bits can at least be encapsulated.
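As a thought experiment on what that tooling might look like, here is a hypothetical helper (none of these names exist anywhere yet) that hides the tap-once, re-emit pattern so a middleware author only writes a per-chunk transform:

sub transform-response(Promise $res-promise, &transform) {
    $res-promise.then({
        my ($status, $headers, $body) = .result;
        # Return the same status and headers, wrapping the body in a new
        # on-demand supply that re-emits each transformed chunk.
        $status, $headers, supply {
            with $body -> Supply() $content {
                whenever $content -> $chunk {
                    emit transform($chunk);
                }
            }
        }
    });
}

# Usage inside a middleware:
sub upper-mw(%env) {
    transform-response(callsame, *.uc);
}
&app.wrap(&upper-mw);

This only covers the on-demand case; the p6w.ready dance needed for live supplies would still have to be layered on top of something like it.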
Live + on-demand support as currently included might just be impractical to start with, and I'm not even sure how important supporting live supplies in the content really is, at least for request-response protocols. (I can imagine live supplies being more likely in a framed-socket scenario where we are piping some sort of "infinite" event stream across a websocket connection.)
So, here are the toy scripts I have been playing with to test async:
https://gist.github.com/zostay/a415f546df8c32cc236d330326078317 https://gist.github.com/zostay/43a17ef288e5aebf8ee9dd5f7a7003a2
One issue I've had in the back of my mind is the problem of running concurrently with a limited number of threads. It is possible and even likely that someone will want to run an app server on a very small number of threads. These limits might even be imposed per connection or some such in a carefully optimized environment. As such, running start {} for a long-running task has some risk associated with it that doesn't necessarily exist in a purely synchronous environment.
The built-in scheduler for Perl 6 is clever enough that if you start 30 Promise-backed blocks of code in a process that is limited to 16 threads (the default in my experience), all 30 blocks of code will run so long as they are not too dependent on one another. If they are completely independent, the scheduler will run the next one in the wait list as soon as one of the running threads is freed up.
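A quick, illustrative way to see this behavior (the counts and the sleep are arbitrary, and the default thread-pool size varies by Rakudo version):

my @promises = (1..30).map: -> $n {
    start { sleep 0.1; $n }   # 30 independent blocks, likely more than the pool size
};
say (await @promises).sum;    # 465: every block eventually ran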
The way this is currently designed, I don't think we have a problem. However, once we have a reference implementation working, I think it would be a good idea to consider how different arrangements of middleware, different server implementations, and different applications may impact the available threads and performance so we can seek out optimizations and/or add best practices documentation that describes the best ways to write servers, middleware, and apps to maximize performance and minimize the risk of running out of threads and such.