DelSkayn / rquickjs

High-level bindings to the QuickJS JavaScript engine
MIT License
434 stars 59 forks

ctx.spawn inside ctx.spawn never polled #223

Closed richarddd closed 9 months ago

richarddd commented 9 months ago

I'm trying to spawn a new task inside an already spawned task. However, the nested spawned future is never polled.

For example, this creates a TcpListener and uses a callback to return the "body".

I can't use regular tasks since I can't move JS values across threads (they don't implement Send + Sync). Persistent used to implement Send, but that was removed.
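As a side note, here is a minimal self-contained illustration (plain Rust, no rquickjs involved) of the constraint being described: a type that is not `Send`, such as `Rc` (and, internally, QuickJS's reference-counted values), is rejected by the compiler anywhere a `Send` bound is required, e.g. `std::thread::spawn` or a multi-threaded executor's `spawn`:

```rust
use std::rc::Rc;

// Compile-time probe: only instantiable with Send types.
fn assert_send<T: Send>() {}

fn main() {
    // Owned, thread-safe types satisfy the bound.
    assert_send::<String>();
    assert_send::<Vec<u8>>();

    // Uncommenting the next line fails to compile, because Rc uses
    // non-atomic reference counting and therefore is !Send -- the same
    // reason JS values tied to a single runtime thread can't be moved:
    // assert_send::<Rc<u8>>();

    println!("Send bound satisfied for String and Vec<u8>");
}
```

This is why a runtime-local spawn (like `ctx.spawn` here) is needed instead of a regular multi-threaded task.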

It's used like this from JS:

serve(8080, (data) => new Uint8Array([79, 75]));

And registered on the Rust side like this:
globals.set(
    "serve",
    Func::from(|ctx, port: u16, callback| {
        struct Args<'js>(Ctx<'js>, Function<'js>);
        let Args(ctx, callback) = Args(ctx, callback);
        ctx.clone().spawn(async move {
            let listener = TcpListener::bind(format!("0.0.0.0:{}", port))
                .await
                .unwrap();

            loop {
                let ctx2 = ctx.clone();
                let callback = callback.clone();
                let (mut stream, _) = listener.accept().await.unwrap();

                ctx.spawn(async move { // this future is never polled
                    let mut buf = Vec::with_capacity(1024 * 64);
                    stream.read_to_end(&mut buf).await.unwrap();

                    let ta = TypedArray::<u8>::new(ctx2, buf).unwrap();

                    let response = callback.call::<_, TypedArray<u8>>((ta,)).unwrap();

                    let bytes: &[u8] = response.as_ref();

                    let headers = b"HTTP/1.1 200 OK\r\n\r\n";

                    let mut vec = Vec::with_capacity(headers.len() + bytes.len());
                    vec.extend_from_slice(headers);
                    vec.extend_from_slice(bytes);

                    stream.write_all(&vec).await.unwrap();
                })
            }
        })
    }),
)?;
richarddd commented 9 months ago

Update: My mistake. I tried recreating an issue from a bigger project, and it's the stream.read_to_end that blocks in this case. Closing
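For anyone hitting the same symptom: `read_to_end` only returns once the reader reports EOF. On an in-memory reader EOF is immediate, but on a live TCP stream it only happens when the peer closes its write half, which an HTTP client keeping the connection open never does, so the future appears to hang. A minimal sketch of the difference using blocking std I/O (the async `tokio::io::AsyncReadExt` variants behave the same way):

```rust
use std::io::Read;

fn main() {
    let data = b"GET / HTTP/1.1\r\n\r\n";
    let mut cursor = std::io::Cursor::new(&data[..]);

    // read() returns as soon as some bytes are available.
    let mut buf = [0u8; 1024];
    let n = cursor.read(&mut buf).unwrap();
    assert_eq!(&buf[..n], &data[..]);

    // read_to_end() keeps reading until EOF. A Cursor reaches EOF
    // immediately, but a TCP stream would block here until the client
    // closed the connection.
    let mut rest = Vec::new();
    cursor.read_to_end(&mut rest).unwrap();
    assert!(rest.is_empty());

    println!("single read got {} bytes", n);
}
```

A workaround for the server loop above would be to issue bounded `read` calls (or parse up to the end of the HTTP headers) instead of waiting for EOF.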