softwaremill / sttp

The Scala HTTP client you always wanted!
https://sttp.softwaremill.com
Apache License 2.0

The client freezes when timeout is reached. #217

Closed · walmaaoui closed this 5 years ago

walmaaoui commented 5 years ago

The client freezes indefinitely when the timeout is reached, after raising java.lang.IllegalStateException: HashedWheelTimer.stop() cannot be called from TimerTask. We tested sttp 1.5.1 and 1.5.17 and saw the issue with both. We are on Scala 2.11 and use these dependencies:

 val sttpCore       = "com.softwaremill.sttp" %% "core"                           % "1.5.17"
 val sttpCirce      = "com.softwaremill.sttp" %% "circe"                          % "1.5.17"
 val sttpAsyncCats  = "com.softwaremill.sttp" %% "async-http-client-backend-cats" % "1.5.17"
[Screenshot: stack trace, 2019-05-29]
adamw commented 5 years ago

From the stack trace, it looks like this happens when closing the backend (and the client)? Could you share some code that reproduces the issue?

walmaaoui commented 5 years ago

Sorry for the delayed response; it seems that the problem occurs only when using Resource or Bracket.

This code hangs indefinitely:

import cats.effect.{IO, Resource}
import com.softwaremill.sttp._
import com.softwaremill.sttp.asynchttpclient.cats.AsyncHttpClientCatsBackend
import scala.concurrent.duration._

object Test extends App {
  private lazy val jhClientResource: Resource[IO, SttpBackend[IO, Nothing]] =
    Resource
      .make {
        IO(AsyncHttpClientCatsBackend[IO]())
      } { backend =>
        IO(backend.close())
      }

  jhClientResource
    .use { client =>
      sttp
        .get(uri"http://localhost:8000")
        .readTimeout(1.millisecond)
        .response(ResponseAsString("utf-8"))
        .send()(client, implicitly)
    }
    .unsafeRunSync()
}

This code works as expected: the timeout is raised and then the app terminates:

import cats.effect.IO
import scala.concurrent.duration._
import com.softwaremill.sttp._
import com.softwaremill.sttp.asynchttpclient.cats.AsyncHttpClientCatsBackend

object Test extends App {

  implicit private val jhClientResource = AsyncHttpClientCatsBackend[IO]()
  try {
    val _ = sttp
      .get(uri"http://localhost:8000")
      .readTimeout(1 millisecond)
      .response(ResponseAsString("utf-8"))
      .send()
      .unsafeRunSync()

  } finally {
    jhClientResource.close()
  }
}

To test locally I used your simple-http-server (https://github.com/softwaremill/simple-http-server) and added a 1-second sleep to trigger the timeout.
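For anyone reproducing without that project, here is a minimal slow-server sketch using the JDK's built-in HttpServer (the 1-second sleep is just assumed to exceed the client's 1 ms read timeout):

import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

object SlowServer extends App {
  val server = HttpServer.create(new InetSocketAddress(8000), 0)
  server.createContext("/", new HttpHandler {
    override def handle(exchange: HttpExchange): Unit = {
      Thread.sleep(1000) // well above the client's 1 ms read timeout
      val body = "ok".getBytes("utf-8")
      exchange.sendResponseHeaders(200, body.length.toLong)
      exchange.getResponseBody.write(body)
      exchange.close()
    }
  })
  server.start()
}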

YannMoisan commented 5 years ago

Hi, I'm working with @walmaaoui.

After further investigation, it seems related to IO usage.

The release code of the resource is executed by the same thread that throws the TimeoutException: the timer thread [AsyncHttpClient-timer-1-1]. This release code calls backend.close(), which calls HashedWheelTimer.stop(), but Netty does not support calling this method from a timer thread.

A workaround we've just found is to shift the call to backend.close() to another thread:

import scala.concurrent.ExecutionContext.Implicits.global

val contextShift = IO.contextShift(global)
IO.shift(contextShift).map(_ => backend.close())

So this issue doesn't seem related to sttp, but to the usage of IO with AsyncHttpClient.

What do you think?

adamw commented 5 years ago

Thanks @YannMoisan & @walmaaoui for the investigation - nice find! :) I'm not sure if sttp can do anything to prevent these kinds of problems, apart from documentation. The AsyncHttpClientCatsBackend only has access to an Async typeclass instance, and no other execution contexts.

Btw. take a look at:

I think a solution might be to shift to the main EC after calling send() - you probably don't want to run any logic on Netty's timer thread. Or maybe this should be part of send() itself, what do you think?

adamw commented 5 years ago

I'm more and more convinced that the AsyncHttpClientCatsBackend should take an additional ContextShift implicit, and shift to it immediately after the request is sent. Otherwise it's all too easy to do work on the Netty thread.
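A minimal sketch of that idea, assuming the sttp 1.x SttpBackend shape; ShiftingBackend is a hypothetical wrapper name, not sttp API:

import cats.effect.{ContextShift, IO}
import com.softwaremill.sttp.{MonadError, Request, Response, SttpBackend}

// Hypothetical decorator: after each send(), shift back to the ContextShift's
// execution context so continuations don't keep running on Netty's timer thread.
class ShiftingBackend(delegate: SttpBackend[IO, Nothing])(implicit cs: ContextShift[IO])
    extends SttpBackend[IO, Nothing] {

  override def send[T](request: Request[T, Nothing]): IO[Response[T]] =
    delegate.send(request).flatMap(resp => cs.shift.map(_ => resp))

  override def close(): Unit = delegate.close()

  override def responseMonad: MonadError[IO] = delegate.responseMonad
}

(As the next comment points out, shifting only after a successful send() does not cover the failure path.)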

YannMoisan commented 5 years ago

If I understand correctly, send() and the timeout exception do not happen on the same thread.

So if we shift after send(), the issue is still there (see the test done with the snippet below).

for {
  resp <- sttp
    .get(uri"http://localhost:8080")
    .readTimeout(1.millisecond)
    .response(ResponseAsString("utf-8"))
    .send()
  _ <- contextShift.shift // never reached when send() fails with a timeout
} yield resp
adamw commented 5 years ago

@YannMoisan ah yes, if there's an exception, the shifting never happens. So I think both should be done: shifting after the action, and in case of an exception, before the close?
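Spelled out manually, doing both could look like this sketch, assuming an sttp request value named request, an implicit backend, and a ContextShift[IO] named cs in scope (guarantee, suggested below, expresses the same thing):

request.send().attempt.flatMap { result =>
  // shift runs whether result is a Right(response) or a Left(TimeoutException),
  // then the original outcome is returned or re-raised
  cs.shift.flatMap(_ => IO.fromEither(result))
}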

adamw commented 5 years ago

Both can be done by sttp, if the cats backend takes a context shift, I suppose.

adamw commented 5 years ago

Or maybe guarantee would do the trick: sttp.get(...).send().guarantee(contextShift.shift)?

YannMoisan commented 5 years ago

guarantee can do the trick, but I'd prefer to use Resource to centralize acquisition and release in the same place.

I don't know if we can do it in sttp due to the signature: def close(): Unit rather than def close(): R[Unit].

It's possible to write a helper that shifts the release code. I don't know whether it would be interesting to have that in cats-effect:

import cats.effect.{ContextShift, IO, Resource}

object ShiftedReleaseResource {
  // Like Resource.make, but shifts before running the release action, so it
  // never executes on the thread that completed the `use` body.
  def make[A](
      acquire: IO[A]
  )(release: A => IO[Unit])(implicit contextShift: ContextShift[IO]): Resource[IO, A] =
    Resource.make(acquire)(a => IO.shift(contextShift).flatMap(_ => release(a)))
}
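A possible usage of that helper with the cats backend (a sketch; cs and backendResource are illustrative names):

import com.softwaremill.sttp.SttpBackend
import com.softwaremill.sttp.asynchttpclient.cats.AsyncHttpClientCatsBackend
import scala.concurrent.ExecutionContext.Implicits.global

implicit val cs: ContextShift[IO] = IO.contextShift(global)

// the release (backend.close()) now always runs after a shift, never on the
// AsyncHttpClient timer thread
val backendResource: Resource[IO, SttpBackend[IO, Nothing]] =
  ShiftedReleaseResource.make(IO(AsyncHttpClientCatsBackend[IO]()))(b => IO(b.close()))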
adamw commented 5 years ago

Maybe I'm missing something, but isn't the root of the problem that you end up calling .close() from the Netty thread in the first place? That is, we should never run anything on the Netty thread, so the proper solution would be to escape from it whenever we have a chance?

YannMoisan commented 5 years ago

You're right!

I've just tested with guarantee: it works, and it's still compatible with Resource. (I thought you were suggesting to call .close() inside guarantee.)
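For completeness, a minimal sketch of that combination, assuming sttp 1.5.x and cats-effect 1.x, with a slow server on localhost:8000 (the object name is illustrative):

import cats.effect.{ContextShift, IO, Resource}
import com.softwaremill.sttp._
import com.softwaremill.sttp.asynchttpclient.cats.AsyncHttpClientCatsBackend
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object GuaranteeWithResource extends App {
  implicit val cs: ContextShift[IO] = IO.contextShift(global)

  Resource
    .make(IO(AsyncHttpClientCatsBackend[IO]()))(backend => IO(backend.close()))
    .use { implicit backend =>
      sttp
        .get(uri"http://localhost:8000")
        .readTimeout(1.millisecond)
        .send()
        // the shift runs on success and on TimeoutException alike, so the
        // Resource's release (backend.close()) never runs on the timer thread
        .guarantee(cs.shift)
    }
    .attempt // surface the timeout as a value instead of throwing
    .unsafeRunSync()
}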

adamw commented 5 years ago

Should be fixed in 1.6.0: https://github.com/softwaremill/sttp/releases/tag/v1.6.0