akka / akka-http

The Streaming-first HTTP server/module of Akka
https://doc.akka.io/docs/akka-http

Akka HTTP configuration details exposed to client on EntityStreamSizeException #1015

Open makubi opened 7 years ago

makubi commented 7 years ago

Hi,

The current implementation of the EntityStreamSizeException handling exposes a detailed message to the client about which Akka settings to change (to allow larger request entities), even though verbose-error-messages is set to off.

I changed the class to extend ExceptionWithErrorInfo to distinguish between a summary and a detailed message. There is a check for the verbosity setting at HttpServerBluePrint:459, but it seems like it is never reached, and the ErrorInfo.formatPretty result is returned to the client (which may also just be the result of RuntimeException.getMessage, since ExceptionWithErrorInfo extends RuntimeException).

Do you have an idea why or what to change to only display the summary in that case? The mentioned classes are in the akka-http-core module.

You can trigger it by setting akka.http.server.parsing.max-content-length to 1 and sending some data with your request.
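For reference, the reproduction can be expressed as a config fragment; the second setting is the one that, per this report, should suppress the detailed message but does not:

```
# application.conf
akka.http.server.parsing.max-content-length = 1
# supposed to suppress detailed error messages to clients
akka.http.server.verbose-error-messages = off
```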

I pushed my current changes to https://github.com/makubi/akka-http/commit/546af4fa4ac4c5c7e78df2c8b3ba083aa64b3c2e.

Thanks in advance.
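The summary/detail split described above can be sketched without any akka-http dependency. ErrorInfo and the exception below are simplified stand-ins with hypothetical names and signatures, not the real akka-http-core classes:

```scala
// Simplified stand-ins for akka-http's ErrorInfo / ExceptionWithErrorInfo.
final case class ErrorInfo(summary: String, detail: String) {
  // Full message, including configuration hints -- what leaks today:
  def formatPretty: String = if (detail.isEmpty) summary else s"$summary: $detail"
  // What should reach the client when verbose-error-messages = off:
  def format(verbose: Boolean): String = if (verbose) formatPretty else summary
}

class SizeLimitException(val info: ErrorInfo) extends RuntimeException(info.formatPretty)

val e = new SizeLimitException(ErrorInfo(
  summary = "Request entity too large",
  detail  = "configure akka.http.server.parsing.max-content-length to allow larger requests"))

// RuntimeException.getMessage yields the full pretty format -- the leak:
val leaked = e.getMessage
// With the verbosity check applied, only the summary would be sent:
val safe = e.info.format(verbose = false)
```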

jonas commented 7 years ago

I am not sure we can change the parent class without breaking binary compatibility, so alternatively we might have to put something in place to avoid exposing the error message in the response.

jonas commented 7 years ago

With the following code I was only able to see the EntityStreamSizeException in the server log, but not on the client side, so I will need some help to reproduce it.

import akka.http.scaladsl.settings.ServerSettings
import akka.http.scaladsl.server._
import akka.event.Logging
import com.typesafe.config.ConfigFactory

object Main extends HttpApp {
  override protected def route =
    logRequestResult("!!!", Logging.InfoLevel) {
      extractMaterializer { mat =>
        extractRequest { r =>
          // Drain the request entity; this is where the size limit triggers
          onComplete(r.discardEntityBytes(mat).future) { done =>
            complete("ok")
          }
        }
      }
    }

  // Limit request entities to a single byte so any body trips the limit
  val maxContentLength = ConfigFactory.parseString("akka.http.server.parsing.max-content-length = 1")
  val settings = ServerSettings(maxContentLength.withFallback(ConfigFactory.load))
  def run() = startServer("localhost", 7357, settings)
}
Main.run()

This gives the following output:

[INFO] [04/07/2017 00:55:36.693] [readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain-akka.actor.default-dispatcher-4] [akka.actor.ActorSystemImpl(readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain)] Server online at http://localhost:7357/
[ERROR] [04/07/2017 00:55:41.776] [readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain-akka.actor.default-dispatcher-5] [akka://readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain/user/StreamSupervisor-27/flow-2-0-unknown-operation] Error during preStart in [akka.http.scaladsl.model.HttpEntity$Limitable@7d20d1d4]
EntityStreamSizeException: actual entity size (Some(456933529)) exceeded content length limit (1 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.
    at akka.http.scaladsl.model.HttpEntity$Limitable$$anon$1.preStart(HttpEntity.scala:607)
    at akka.stream.impl.fusing.GraphInterpreter.init(GraphInterpreter.scala:520)
    at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:380)
    at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:538)
    at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:586)
    at akka.actor.Actor$class.aroundPreStart(Actor.scala:505)
    at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:529)
    at akka.actor.ActorCell.create(ActorCell.scala:590)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
    at akka.dispatch.Mailbox.run(Mailbox.scala:223)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [04/07/2017 00:55:41.785] [readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain-akka.actor.default-dispatcher-2] [akka.actor.ActorSystemImpl(readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain)] !!!: Response for
  Request : HttpRequest(HttpMethod(POST),http://localhost:7357/,List(Host: localhost:7357, Accept-Encoding: gzip, deflate, User-Agent: HTTPie/0.9.6, Accept: */*, Connection: keep-alive, Timeout-Access: <function1>),HttpEntity.Default(multipart/form-data; boundary=9623404c2aff414aa5a88f6d3572f82c,456933529 bytes total),HttpProtocol(HTTP/1.1))
  Response: Complete(HttpResponse(200 OK,List(),HttpEntity.Strict(text/plain; charset=UTF-8,ok),HttpProtocol(HTTP/1.1)))
[WARN] [04/07/2017 00:55:41.786] [readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain-akka.actor.default-dispatcher-2] [akka.actor.ActorSystemImpl(readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain)] Sending an 2xx 'early' response before end of request was received... Note that the connection will be closed after this response. Also, many clients will not read early responses! Consider only issuing this response after the request data has been completely read!
[INFO] [04/07/2017 00:55:41.788] [readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain-akka.actor.default-dispatcher-9] [akka://readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain/system/IO-TCP/selectors/$a/1] Message [akka.io.Tcp$ResumeReading$] from Actor[akka://readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain/user/StreamSupervisor-27/$$b#629293499] to Actor[akka://readiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwiwMain/system/IO-TCP/selectors/$a/1#1495298281] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
makubi commented 7 years ago

Thank you for your detailed feedback, I will give it a try.

How do you think a binary compatibility issue might occur?

What do you think about changing the default behavior to still return HTTP 413 but with a short description?
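The proposed default can be sketched without akka-http: keep answering with HTTP 413, but put only a short summary in the body. SimpleResponse and SizeException below are hypothetical simplified stand-ins for akka-http's HttpResponse and EntityStreamSizeException, not the real API:

```scala
// Minimal model of "413 with a short description instead of the full message".
final case class SimpleResponse(status: Int, body: String)

final case class SizeException(limit: Long, actualSize: Option[Long])
    extends RuntimeException(
      s"actual entity size ($actualSize) exceeded content length limit ($limit bytes)! " +
      "You can configure this by setting akka.http.server.parsing.max-content-length ...")

def toResponse(t: Throwable): SimpleResponse = t match {
  case _: SizeException =>
    // 413 Payload Too Large, without the configuration hints from getMessage
    SimpleResponse(413, "Request entity too large")
  case _ =>
    SimpleResponse(500, "Internal server error")
}

val resp = toResponse(SizeException(1L, Some(456933529L)))
```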

ktoso commented 7 years ago

How do you think a binary compatibility issue might occur?

You changed the constructor of a case class there, so anyone who pattern matched on it is now broken. This is possible to work around, but it's a pain; we can make it less painful by introducing a second class that carries the info.

ktoso commented 7 years ago

Would you want to PR that? Thanks in advance

makubi commented 7 years ago

Thanks for your feedback!

You changed the constructor of a case class there, so anyone who pattern matched on it is now broken. This is possible to work around, but it's a pain; we can make it less painful by introducing a second class that carries the info.

I don't see how it breaks binary compatibility. Of course it breaks compilation, but if the old exception is never thrown anymore, that may be exactly what you want to accomplish. Deprecating the old one and throwing a new one changes the behavior, and the user may not even notice that change until runtime.

If I missed something, please let me know.

Would you want to PR that? Thanks in advance

Sure, I would like to, but as I originally wrote, my current approach still exposes the detailed message to the user, at least in my tests.

raboof commented 6 years ago

I don't get how it breaks binary compatibility.

@makubi The problem is that since this is a case class, people may have been using the generated unapply method. Changing the EntityStreamSizeException constructor would change the unapply method that gets generated, breaking binary compatibility with any code compiled against the old version.
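The unapply concern can be illustrated with stand-in types (SizeExcV1/SizeExcV2 are hypothetical, not the real exception). In Scala 2 the companion of a case class gets a synthetic unapply whose signature is derived from the constructor, so widening the constructor changes that signature:

```scala
// Original shape:
case class SizeExcV1(limit: Long, actualSize: Option[Long])
// v1 companion gets (roughly): def unapply(e: SizeExcV1): Option[(Long, Option[Long])]

// Widened shape, e.g. after adding a summary field:
case class SizeExcV2(limit: Long, actualSize: Option[Long], summary: String)
// v2 companion gets (roughly): def unapply(e: SizeExcV2): Option[(Long, Option[Long], String)]
// A pattern match compiled against v1's unapply no longer links against v2
// (it fails at runtime), even though recompiled source may still work.

val matched = SizeExcV2(1L, Some(456933529L), "too large") match {
  case SizeExcV2(limit, _, _) => limit
}
```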

The change itself still seems interesting, though. Would you be interested in picking it back up?