perwendel / spark

A simple expressive web framework for Java. Spark has a Kotlin DSL: https://github.com/perwendel/spark-kotlin
Apache License 2.0
9.64k stars · 1.56k forks

Allow Jetty to be pluggable #137

Open jacek99 opened 10 years ago

jacek99 commented 10 years ago

It would be great to break apart spark-core from Jetty (spark-jetty?).

For example, in the latest TechEmpower web benchmarks, raw HTTP performance from Undertow was nearly 3 times higher than Jetty's:

http://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=json

It would be great to be able to plug a different web server into Spark and see the performance difference.

Also separating core from Jetty would make it easier to embed in a more robust container like Dropwizard (which gives you YAML-based configuration, etc).
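As a rough illustration of the split being proposed, spark-core could depend only on a small server contract, with the Jetty-specific code behind it. The sketch below is purely illustrative; `EmbeddedServer` and `StubServer` are invented names, not Spark's actual API.

```java
// Hypothetical sketch of a pluggable server abstraction; all names here
// (EmbeddedServer, StubServer) are invented for illustration, not Spark's API.

// The contract spark-core would program against, with no Jetty types in it.
interface EmbeddedServer {
    void start(int port);
    void stop();
    boolean isRunning();
}

// A stand-in backend; a real spark-jetty module would wrap
// org.eclipse.jetty.server.Server, and an alternative module could wrap Undertow.
class StubServer implements EmbeddedServer {
    private boolean running;
    @Override public void start(int port) { running = true; }
    @Override public void stop() { running = false; }
    @Override public boolean isRunning() { return running; }
}

public class PluggableServerSketch {
    public static void main(String[] args) {
        EmbeddedServer server = new StubServer(); // core only ever sees the interface
        server.start(4567);
        System.out.println("running=" + server.isRunning());
        server.stop();
    }
}
```

With this shape, swapping Jetty for Undertow (or embedding in Dropwizard) is just a different `EmbeddedServer` implementation on the classpath.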

davsclaus commented 10 years ago

Yes I really think the transport layer should be pluggable too. Currently in spark-core there is JettyLogger in the root package which prevents using spark-core without Jetty at all. And the MatcherFilter is unfortunately in the same package as JettyHandler.

But having a spark-core and spark-jetty module to separate the codebase would be awesome.

It also makes using servlet engines from Tomcat, Wildfly, Netty etc much easier to implement and use.

Btw we are currently working on a camel-spark component http://camel.apache.org/spark

jacek99 commented 10 years ago

Yes, I would love to plug Spark instead of JAX-RS into Dropwizard and see how it fares in performance testing.

perwendel commented 10 years ago

I concur that it would be nice to make the web server pluggable.

However, breaking out Jetty would (I guess? I'm pretty tired and my brain isn't fully functional) introduce additional steps to get Spark working. How can this be avoided? I'm reluctant to accept any changes that add extra steps for the user, since I think a large majority of users choose Spark for its simplicity.

davsclaus commented 10 years ago

Maybe have a spark-no-jetty JAR (or maybe a better name would be spark-core, or spark-bare, or some other name) for a module that does not include Jetty.

Then it's up to end users to include these JARs on the classpath. Or, if you prefer, you could still ship one JAR for Spark with Jetty (as today).

Yeah, I guess it requires some work from you/the Spark team to support both worlds. But it would open doors for Spark to be used by many more.

andreipet commented 10 years ago

+1 Yes. The 'spark' module should provide the same interface as before, so it should embed Jetty. End users will use the 'spark' module, which in turn should depend on the 'spark-core' module. The 'spark-core' module should contain the soul of Spark. :) Obviously, the author should choose the names for the 2 modules :)

diegooliveira commented 10 years ago

@perwendel I think a "server discovery" feature is easy to implement, and using @andreipet's separation idea it won't break the current "modus operandi" for the simpler use cases, while allowing a more tunable configuration for advanced users.

kliakos commented 9 years ago

+1 for this.

When I deploy Spark to Tomcat, it requires the Jetty Utils jar. It would be great if none of the Jetty jars were required at all.

jacek99 commented 9 years ago

I would love to integrate Spark into Dropwizard; it already has a ton of stuff that is missing in Spark (e.g. integrated logging, YML-based config, admin APIs on a separate port, etc.).

It would be great to be able to integrate Spark into Dropwizard and replace its JAX-RS Jersey. Spark really shines with Java 8 and functions; it's like a breath of fresh air.

jgangemi commented 9 years ago

i started working on this in a fork (server branch) - very much a work in progress at the moment but feedback is welcome.

right now, pulling in external resources via the servlet filter is handled by some code dependent on jetty bits. i think the better thing to do in the case of using tomcat, etc. is to use the ServletContext to get at any external resources rather than relying on the custom code.

doing this would mean that any files you wanted to access in an external location would need to be defined according to what ServletContext#getResource(String) defines.

another idea would be to keep the current custom implementation and swap out the jetty bits w/ a delegate. doing that would require ppl deploying in an external container to provide an implementation.

jacek99 commented 9 years ago

I think that would be OK. Dropwizard is also Jetty based, so it should be trivial to do this.

jgangemi commented 9 years ago

yeah - pulling these apart is anything but trivial.

jacek99 commented 9 years ago

I really appreciate your efforts. I have been playing with the current version of Spark and it is a joy... we just could never deploy it in production without all the bells and whistles we have in our current Dropwizard apps...

jgangemi commented 9 years ago

welcome - for sure some additional thought needs to go into this. i understand @perwendel's desire to keep things simple so users can get up and running quickly, but i'm not entirely sure that's possible w/o requiring the user to make some change.

i think what may be a good idea is to have spark core be based on the work done in #167 (which is what i'm already basing these changes on) and pull the static interface out into a separate module. then all the end user has to do is change the dependency to 'spark-static' (??) and everything will continue to work as before.

this will probably work a lot better than the original idea i had, which was to reflectively look up a default server name contained in the jetty jar and throw an error if it's not found on the classpath (and you didn't tell spark to use some other server builder).

although, now that i think about it, this will still be a problem for ppl wanting to use the 'instance' api b/c something will need to tell spark-core what server is driving it, but perhaps that's not the worst thing in the world and if you're using the 'instance' api, you're already prepared for a couple extra steps.
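The reflective fallback described above could look something like this sketch. The class name `spark.jetty.JettyServerFactory` and the error message are invented for illustration; nothing here is the fork's actual code.

```java
// Hypothetical sketch of the reflective default-server lookup described above.
// DEFAULT_SERVER_CLASS and the error message are invented for illustration.
public class ServerLookupSketch {

    // Class the jetty module would ship; absent spark-jetty, this won't resolve.
    static final String DEFAULT_SERVER_CLASS = "spark.jetty.JettyServerFactory";

    // Use the explicit class name when the user configured one, else try the
    // default, failing with a clear message instead of a bare ClassNotFoundException.
    static Class<?> resolveServerFactory(String explicitClassName) {
        String name = (explicitClassName != null) ? explicitClassName : DEFAULT_SERVER_CLASS;
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(
                "no embedded server on the classpath - add the jetty module "
                + "or configure another server builder", e);
        }
    }

    public static void main(String[] args) {
        // java.util.ArrayList stands in for a factory class that *is* on the classpath.
        System.out.println("resolved: " + resolveServerFactory("java.util.ArrayList").getName());
    }
}
```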

kliakos commented 9 years ago

I don't understand the whole passion about the idea of "getting up and running quickly". If the target group of this framework is "HelloWorld" applications, then yes, simplicity is the key. But if we are talking about real-world production applications, then the developer will go so deep into configuration and parameterization with all the other java libraries and frameworks that the sparkjava configuration will only be the tip of the iceberg.

jgangemi commented 9 years ago

i think that's all dependent on what other libraries and frameworks are included w/ what they are building.

my plan is to use spark to build an application b/c it's relatively simple and many of the other libraries i am using don't require it (one of the reasons i am using them) so if the idea is to rapidly prototype something, spark is a big winner in that category.

at the same time, the whole use of functions to define routes (yeah, the same thing could be achieved thru anon interfaces) isn't really practical in a production application. i don't necessarily want to fire up an embedded server to test my app every time i make a change. all the routes i am writing are their own classes so they can be unit-tested via mocks, etc.

i'm fairly certain splitting spark into smaller modules, including separating the static interface into its own module, will allow the flexibility and simplicity that is desired. if you already have to update a version number to get a later version, it won't be asking too much more to change the artifact name if you want to continue using the 'easy' interface.

jkwatson commented 9 years ago

I'd just like to chime in here. We are using SparkJava in production in a micro-service architecture. Our ops team appreciates that we can spin up services on hosts in AWS with a shell script that kicks off a main method. We hook up New Relic to these services, and they're as "production worthy" as you could ever hope to be. We don't want big containers; we don't want to have to use tomcat. The simplicity and leanness of SparkJava with embedded Jetty is a big win for us.

That being said, I'd also love to see SparkJava integrated into DropWizard, because I don't want to use Jersey for serving up simple json microservices (or anything at all, if it comes right down to it).

jgangemi commented 9 years ago

ok - i split things up into the following:

- spark-core
- spark-jetty
- spark-servlet
- spark-static

you can follow progression here: https://github.com/jgangemi/spark/tree/server

jgangemi commented 9 years ago

update: i've managed to split jetty apart from the core. i still need to fix a couple things there and then deal w/ the servlet and static parts, but it's coming along.

jacek99 commented 9 years ago

thank you, much appreciated. This should go a long way to help spread Spark in different stacks.


jgangemi commented 9 years ago

i'm not sure where @perwendel is on all of this so i've deployed a snapshot build of this to my nexus repository that can be tried out. the servlet filter still doesn't work but if you just care about the embedded server, you're ok.

<dependency>
  <groupId>org.scriptkitty</groupId>
  <artifactId>spark-static</artifactId>
  <version>2.1.0-SNAPSHOT</version>
</dependency>

if you just want the core, use spark-core, and if you want the core and jetty, use spark-jetty.

in addition to splitting out jetty and the static interface, there has been a significant refactoring of the internals, and some of the other requests (such as allowing the RouteMatcher to be pluggable) have been implemented. with the exception of the static interface, i did not make an effort to keep api changes non-breaking. given the project requires java8, i doubt this is going to really affect anyone, but given the confusion i (and some others looking at the open issues) had, i felt it made sense to rename some things.

also - with the exception of the ip, port, and ssl configuration, if you want to specify a route mapper, a different embedded server, etc., you have to use the builder to get a spark instance, which i think is a fair tradeoff.
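The builder-based configuration described here might look roughly like the sketch below. All names (`Server`, `SparkInstance`, `SparkBuilder`) are invented for illustration; this is not the fork's actual API, just the general shape of the tradeoff: sensible defaults for ip/port, but an explicit choice of server implementation.

```java
// Hypothetical sketch of builder-based configuration; every name here is invented.

// single-method interface so a lambda can stand in for a real backend
interface Server {
    void listen(String ip, int port);
}

class SparkInstance {
    private final Server server;
    private final String ip;
    private final int port;

    SparkInstance(Server server, String ip, int port) {
        this.server = server;
        this.ip = ip;
        this.port = port;
    }

    void start() {
        server.listen(ip, port);
    }
}

class SparkBuilder {
    private Server server;
    private String ip = "0.0.0.0"; // defaults, as in today's Spark
    private int port = 4567;

    SparkBuilder server(Server server) { this.server = server; return this; }
    SparkBuilder ipAddress(String ip) { this.ip = ip; return this; }
    SparkBuilder port(int port) { this.port = port; return this; }

    // ip/port have defaults, but a server implementation must be chosen explicitly
    SparkInstance build() {
        if (server == null) {
            throw new IllegalStateException("a server implementation must be specified");
        }
        return new SparkInstance(server, ip, port);
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        SparkInstance app = new SparkBuilder()
                .server((ip, port) -> System.out.println("listening on " + ip + ":" + port))
                .port(8080)
                .build();
        app.start(); // prints "listening on 0.0.0.0:8080"
    }
}
```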

aplatypus commented 9 years ago

Hi ...

I definitely support this. I currently have a problem that would be resolved if I could plug in Jetty v9.2 (instead of Jetty v9.0).

Also ... it is important in this day and age to consider different transport types (e.g. something like ActiveMQ, or sockets).

I think it's worth keeping it simple / avoiding sophistication by using an interface to establish a service or bus structure.

w.

mucaho commented 9 years ago

@perwendel wrote:

However, breaking out Jetty would [...] introduce additional steps to get Spark working. How can this be avoided since I'm reluctant to any changes that add extra steps for the user?

By providing an additional default implementation where Spark is already connected and setup with Jetty as it is right now (maybe as a separate maven artifact, which depends on the "spark-core").
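Concretely, that could look like the dependency choices below. The coordinates are illustrative only, not published artifacts; the pattern mirrors the split discussed in this thread.

```xml
<!-- hypothetical coordinates, for illustration only -->

<!-- default choice: behaves like today's Spark, pulls in spark-core + Jetty -->
<dependency>
  <groupId>com.sparkjava</groupId>
  <artifactId>spark-jetty</artifactId>
  <version>2.x</version>
</dependency>

<!-- advanced choice: no Jetty; bring your own server implementation -->
<dependency>
  <groupId>com.sparkjava</groupId>
  <artifactId>spark-core</artifactId>
  <version>2.x</version>
</dependency>
```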

@jgangemi Good work, keep us updated! I would also like to see this happening.

jgangemi commented 9 years ago

i've already done that. you can use the static instance the same as always. using the builder requires you to specify a server implementation as it should.

i have not gone back and revisited the servlet side of things yet. i think if you want to go that route, you should treat things like a real webapp and put static content in WEB-INF, etc. but after looking into why a filter was used instead of a servlet: if you define everything under / instead of putting your rest calls under /api, there's going to be no way to serve your static content - well, that's not true, the servlet could do the same thing the filter does currently. the whole point is, i think that vastly over-complicates things when the webapp will serve static content for you, so should the two be portable? as i said, this is less of an issue if you have some way to differentiate rest routes vs static content, but still.

having said that, the snapshot i pushed to my nexus repo lets you swap out embedded containers. it also runs on the latest version of jetty, which may mean it's ready to go for you out of the box.

i need to push another build to nexus as i fixed a few things which i will do momentarily.

jgangemi commented 9 years ago

pushed

travisspencer commented 9 years ago

+1 on this. We really need this functionality as well.

007lva commented 9 years ago

+1

jgangemi commented 9 years ago

at one time, when i thought i was going to use spark for a project, i did this work (and also refactored a ton of the spark code base at the same time). i ended up using jersey and haven't had the time to do anything else w/ it.

if someone wants to pick it up and run w/ it, you can find it here: https://github.com/jgangemi/spark/tree/server

the one thing i never got working was letting spark run in tomcat - that code still needs to be refactored to break dependencies there.

pazhapn commented 9 years ago

+1 - plenty of opportunities if spark becomes server-independent

jaguililla commented 9 years ago

I forked this project some time ago to do that (among other things) and I have a working Undertow backend. You can check it here: https://github.com/jamming/sabina

gencube commented 9 years ago

+3 YES!! Please also use the pattern shown here: https://github.com/perwendel/spark/issues/288

beihaifeiwu commented 9 years ago

+1, it will be awesome

perwendel commented 8 years ago

This is in the backlog.

townsendmerino commented 8 years ago

+1

mpricope commented 8 years ago

+1 ... another very useful thing to come out of this would be the ability to configure the Jetty web server a little.

One exceptionally annoying problem (that happens when you run Jetty on Windows) is that by default Jetty locks your static files, so you can't work with them while the server is running.

The solution is to change some jetty defaults in the jetty ResourceHandler.

Currently, because Spark is so deeply coupled with Jetty you just can't do that.

I did pick Spark for simplicity. But if it doesn't also offer some flexibility for those who want to pay the price :) ... then you just can't get around problems like the one above.

cleankod commented 7 years ago

Almost 3 years later - any news on this one? I would like to run the application on the Undertow server.

jakaarl commented 7 years ago

I submitted PR #674 for separating Jetty from the "core" Spark quite a while ago. I'll take a look at how stale the changes have gone; hopefully I can resolve conflicts without a huge effort. EDIT: closed and made a new PR, #781.

ctran commented 7 years ago

Should old issues be closed if actions are not taken?

tipsy commented 7 years ago

That's a good question... If we close them, they're going to show up again at some point, but without all the existing comments. I've been keeping most issues/PRs that may still happen open for that reason.

jaguililla commented 7 years ago

I forked the project some time ago with this feature as one of its goals... You can check it here; it is now unmaintained, as I'm working on a pure Kotlin framework which includes this feature... You can check it here... Sorry for the advertisement 😁