guzba / mummy

An HTTP and WebSocket server for Nim that returns to the ancient ways of threads.

Sleeping handler causes next request to stall until sleep is done or another request arrives #55

Closed · reinefjord closed this 1 year ago

reinefjord commented 1 year ago

First of all, thank you very much for this project.

I noticed that if a RequestHandler sleeps, the next request will not be handled until either the first handler returns or another request arrives.

import std/os
import mummy

proc handler(request: Request) =
  echo "new request"
  request.respond(200) # respond to the client immediately
  sleep(10_000) # then block this worker thread for 10 seconds
  echo "done"

let server = newServer(handler)
server.serve(Port(8080))
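
Assuming the file is saved as test.nim, it can be built with the compile line used later in this thread (mummy requires --threads:on and --mm:orc or --mm:arc):

$ nim c --mm:orc --threads:on -d:release test.nim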

The first request results in "new request" being printed and receives a 200 immediately. The next request stalls until the sleep finishes ("done" is printed) and is only then served. If a third request arrives while the second is stalled, the second request is handled right away, and the third stalls instead until some handler finishes.

guzba commented 1 year ago

I just tested this and am not seeing it, but I could be misunderstanding. What version of Nim and what platform are you running this on?

I have added a test to more generally confirm this appears to be working as expected. It is currently in a PR, but you can see it and the passing GitHub Actions run here: https://github.com/guzba/mummy/pull/56

the next request will not be handled until either the first handler returns or another request arrives

I'm not quite sure what this means so let me try to clarify. Do you mean that when the first request sleeps and a second arrives, it will not be handled until a third request?

reinefjord commented 1 year ago

$ nim --version
Nim Compiler Version 1.6.12 [Linux: amd64]
Compiled at 2023-03-10
Copyright (c) 2006-2023 by Andreas Rumpf

git hash: 1aa9273640c0c51486cf3a7b67282fe58f360e91
active boot switches: -d:release

I modified the code slightly to try to visualize what is happening:

import std/[os, strformat, times]
import mummy

proc handler(request: Request) =
  let t0 = now().format("hh:mm:ss") # note: "hh" is 12-hour format, hence the offset from the date output below
  echo fmt"[{t0}] new request"
  request.respond(200)
  sleep(10_000)
  let tEnd = now().format("hh:mm:ss")
  echo fmt"[{tEnd}] done, started at {t0}"

let server = newServer(handler)
server.serve(Port(8080))

$ nim c --mm:orc --threads:on --d:release test.nim

Two requests, the second started 1 second after the first:

$ ./test 
[09:41:22] new request
[09:41:32] done, started at 09:41:22
[09:41:32] new request
[09:41:42] done, started at 09:41:32
$ date +%H:%M:%S; curl localhost:8080; date +%H:%M:%S
21:41:22
21:41:22
$ date +%H:%M:%S; curl localhost:8080; date +%H:%M:%S
21:41:23
21:41:32

Three requests:

$ ./test 
[09:41:59] new request
[09:42:03] new request
[09:42:09] done, started at 09:41:59
[09:42:09] new request
[09:42:13] done, started at 09:42:03
[09:42:19] done, started at 09:42:09

The first request returns immediately:

$ date +%H:%M:%S; curl localhost:8080; date +%H:%M:%S
21:41:59
21:41:59

The second stalls for a few seconds:

$ date +%H:%M:%S; curl localhost:8080; date +%H:%M:%S
21:42:01
21:42:03

Until a third request starts:

$ date +%H:%M:%S; curl localhost:8080; date +%H:%M:%S
21:42:03
21:42:09

Before submitting the issue, I thought there must be something strange going on with my computer, but I was able to replicate it on a second computer. One runs Fedora 38 (Linux 6.2.14-300.fc38.x86_64) and the other Void Linux (6.1.25-1).

Edit: Swapped the positions of t0 and tEnd and reran the tests; I thought the original order was a bit confusing.

guzba commented 1 year ago

I can recreate the issue here, thanks for the follow-up info. A quick fix is to pass -d:mummyUseLockAndCond to swap out the default thread-coordination mechanism (it has a different default on Linux). I'll be fixing this on Linux, but that should get you the correct behavior right away.
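
Based on the compile line used earlier in this thread, enabling the workaround looks like:

$ nim c --mm:orc --threads:on -d:release -d:mummyUseLockAndCond test.nim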

reinefjord commented 1 year ago

Can confirm that mummyUseLockAndCond fixes the issue. Thank you! :)

guzba commented 1 year ago

I have tagged 0.2.16, which consolidates to just using the lock-and-cond approach everywhere; it is the winner between the two approaches I was working on. Thanks again for reporting the issue!
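
If you install mummy via nimble, upgrading should pick up the fix without needing the define:

$ nimble install mummy@0.2.16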