Open perwendel opened 5 years ago
Please consider reopening the issues labeled "Fix in version 3": they were closed two years ago without actually being resolved.
@fwgreen I'll go through them and check if there are any that shouldn't have been closed.
Native support for uploaded files instead of getting the raw request.
Multiple static file locations
Would it be possible to add internal metrics (usage, performance, custom) to answer my personal need for control :) More seriously, in a world of containers, metrics are mandatory for monitoring services. Maybe a look at microprofile-metrics and their annotations could inspire developers? A /metrics output in the Prometheus format (or some other standard) would be a must ;) I clearly understand the need to KISS, and not using annotations makes sense, but having an easy way to declare metrics would be a killer feature (a config file, a fluent API extension of get(), post(), etc.?).
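For illustration, a fluent metrics API could be as small as a counter registry that renders the Prometheus text exposition format. This is a stdlib-only sketch with invented names (`Metrics`, `increment`, `render`), not anything Spark provides today:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: a tiny counter registry that renders
// Prometheus-style "name value" lines, suitable for serving
// from a GET /metrics route. Not a Spark API.
class Metrics {
    private static final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    static void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    // Render all counters in the Prometheus text format.
    static String render() {
        StringBuilder sb = new StringBuilder();
        counters.forEach((name, count) ->
            sb.append(name).append(' ').append(count.sum()).append('\n'));
        return sb.toString();
    }
}
```

A route would then just call `Metrics.increment(...)` in a before filter and return `Metrics.render()` from /metrics.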
@RyanSusana I was wondering the same for static files (https://github.com/perwendel/spark/issues/568)
Could you explain more about your use case?
@johnnybigoode Well, I would like one Spark instance to be able to hook into various static-file locations.
One for the JS/CSS and one for /uploads or something
This would allow me to split my application up better.
For my specific use case: I am developing a CMS framework, and the Admin UI has its own static resources. I would like my framework users to be able to hook in their own static files.
Right now, the way I solve it is to traverse the classpath/jar and add a route for every file I have.
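The manual workaround described above can be sketched roughly like this. Since spark-core isn't assumed on the classpath here, the sketch only computes the route paths that would be registered (the `StaticRouteScanner` name and `/static` prefix are invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Walk a directory and compute one route path per file. In the
// real workaround each resulting path would be registered with
// Spark.get(...); here we just collect the paths.
class StaticRouteScanner {
    static List<String> routesFor(Path root, String prefix) throws IOException {
        List<String> routes = new ArrayList<>();
        try (var files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(f -> {
                // e.g. root=/assets, f=/assets/js/app.js -> "/static/js/app.js"
                String rel = root.relativize(f).toString().replace('\\', '/');
                routes.add(prefix + "/" + rel);
            });
        }
        return routes;
    }
}
```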
I have two ideas and if there is interest, I could try to provide pull requests.
1) Enhance testability. In order to test routes and their output, it is currently required to change the way you declare routes; you cannot test routing in combination with testing the output. If we change the Service to implement an interface and allow swapping it via something like Spark.enableMock(), it becomes possible to write tests as demoed here: https://github.com/perwendel/spark/issues/1085
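To illustrate the idea (all names here are hypothetical, including the proposed `Spark.enableMock()`): if routes were registered against an interface, a test could invoke a handler directly instead of issuing a real HTTP request:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative-only sketch of "swap the Service behind an interface".
// None of these names exist in Spark today.
interface Router {
    void get(String path, Function<String, String> handler); // input -> response body
}

class MockRouter implements Router {
    final Map<String, Function<String, String>> routes = new HashMap<>();

    public void get(String path, Function<String, String> handler) {
        routes.put(path, handler);
    }

    // What a test would call instead of an HTTP client.
    String invoke(String path, String input) {
        return routes.get(path).apply(input);
    }
}
```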
2) Allow decorating the response and the answer. If I could decorate a response with a custom class extending it, I could add behaviour and implement routes more elegantly.
Once, somewhere:

```java
Spark.decorateResponse(response -> new MySuperDuperResponse(response));
```

In your routes:

```java
Spark.get("sample", (request, response) -> {
    return response.json(loadWhatever()).httpOk404IfNull();
});
```
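As a stdlib-only illustration of the decorator idea (Spark's real `Response` wraps the servlet response; `BaseResponse`, `JsonResponse`, and their methods are invented here):

```java
// Hypothetical stand-in for a response object.
class BaseResponse {
    int status = 200;
    String body = "";
}

// The decorator: adds a fluent json() that also folds the
// "404 if the result is null" rule into one call.
class JsonResponse extends BaseResponse {
    JsonResponse json(Object value) {
        if (value == null) {
            this.status = 404; // missing result becomes Not Found
            return this;
        }
        this.body = String.valueOf(value); // real code would serialize to JSON
        return this;
    }
}
```

This keeps the null-handling out of every individual route body, which is the elegance the comment is after.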
@RyanSusana @mcgivrer @laliluna Good suggestions. We'll evaluate! Some of them will likely be part of 3.0.
Two big things on my wish list: break apart core and jetty, to allow for other embeddable servers (#137), and leverage servlet vs filter (#193).
CSRF tokens would be a nice simple feature. I use them for single-page web apps, storing them in sessions. Normally, in other languages, there are standalone libraries or packages that provide this functionality for use with any framework. In the Java world, CSRF tokens are either already integrated into other frameworks (Spring Security, for example) or are part of old packages that are no longer maintained, or that have complex XML configurations that, frankly, I don't understand how to set up. Do you think this is something that could be added? Or do you happen to know of a standalone library with little to no configuration that I could pick up? I tried searching Maven Central but had no luck.
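For what it's worth, the session-token scheme described above needs very little code. A hedged, stdlib-only sketch (class and method names invented): generate a random token, store it in the session, and compare it in constant time against the token the single-page app echoes back:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Standalone CSRF token helper, stdlib only. Not a Spark API.
class CsrfTokens {
    private static final SecureRandom RANDOM = new SecureRandom();

    // 256 bits of randomness, URL-safe so it survives headers and forms.
    static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Constant-time comparison avoids leaking token prefixes via timing.
    static boolean matches(String expected, String submitted) {
        if (expected == null || submitted == null) return false;
        return MessageDigest.isEqual(
            expected.getBytes(StandardCharsets.UTF_8),
            submitted.getBytes(StandardCharsets.UTF_8));
    }
}
```

A before filter would call `matches` on mutating requests and halt with 403 on failure.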
Request: Method to respond with a File
One thing that might be useful is the option to use Jax-RS style annotations on routes. This way, instead of reaching into the request object and grabbing seemingly random fields, you can define the expected inputs via annotations.
If there's any interest in this, we've already developed something we use internally. I could spin it out into a PR easily!
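A rough, dependency-free sketch of the annotation-binding idea; `@QueryParam` here is a made-up annotation (not the JAX-RS one) so the example stays self-contained:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
import java.util.Map;

// Invented annotation: declares which query parameter feeds which argument.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
@interface QueryParam {
    String value();
}

class AnnotatedHandlers {
    // Inputs are declared on the signature instead of pulled out of the request.
    static String greet(@QueryParam("name") String name) {
        return "Hello " + name;
    }

    // A tiny binder: resolve each parameter's annotation against the
    // (pretend) query-string map, then invoke the handler.
    static Object invoke(Method m, Map<String, String> query) throws Exception {
        Parameter[] params = m.getParameters();
        Object[] args = new Object[params.length];
        for (int i = 0; i < params.length; i++) {
            QueryParam qp = params[i].getAnnotation(QueryParam.class);
            args[i] = qp == null ? null : query.get(qp.value());
        }
        return m.invoke(null, args);
    }
}
```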
A big thing that would be nice to have is OpenApi/Swagger support, or a plugin/Maven package to add it. Most frameworks out there have this to auto-generate OpenAPI specs and integrate Swagger UI; it makes testing, and generating interfaces from the spec for your APIs, really awesome!
> Jax-RS style annotations
Note that I have done a APT based code generation project for Javalin and would look to do the same for Spark. The Javalin one is documented at: https://dinject.io/docs/javalin/ ... I just need to adapt the code generation for Spark request/response.
> OpenApi/Swagger support
As part of the APT code generation for controllers it also generates OpenApi/Swagger docs. The nice thing here is that APT has access to javadoc/kotlindoc so actually we just javadoc our controller methods and that goes into the generated swagger.
This approach is more similar to the jax-rs style with dependency injection and controllers. Note that the DI also uses APT code generation so it is fast and light (but people could swap it out for slower heavier DI like Guice or Spring if they wanted to).
A way for the `get`, `post`, and other methods alike to listen for requests with a specific host parameter. Something like:

```java
Spark.get("/", "test.example.com", (request, response) -> {
    return "Hello!";
});
```
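Internally, host-aware routing might amount to keying routes on a (host, path) pair with a host-agnostic fallback. A purely illustrative stdlib sketch (Spark's route matching does not work this way today):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical host-aware dispatcher: routes keyed on "host|path",
// with "*" as the host-agnostic fallback.
class HostRouter {
    private final Map<String, Supplier<String>> routes = new HashMap<>();

    void get(String host, String path, Supplier<String> handler) {
        routes.put(host + "|" + path, handler);
    }

    // Dispatch using the request's Host header; fall back to "*".
    String handle(String hostHeader, String path) {
        Supplier<String> h = routes.get(hostHeader + "|" + path);
        if (h == null) h = routes.get("*|" + path);
        return h == null ? null : h.get();
    }
}
```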
Thanks everyone for your suggestions. It's been a long summer vacation with a resulting dip in project activity. Ramping up will begin within a month!
I am just now working on my first project with Spark and I like its minimalism. As time goes on I will probably find more things, but these are some features I found missing early in development:
`staticFiles.externalLocation("resources", "static");` would result in `/static/*` serving files from `resources`.
These are not deal-breakers, so I continue development, and it's really good so far. However, I would like to add my two cents on the matter of supporting multiple HTTP server solutions: in my (maybe not so popular) opinion, Spark should handle just one HTTP server very well, because, well, it is literally just an HTTP server; let's not make it more complicated than it is.
Please add an option to disable gzip in the staticFiles response.
Allowing other embeddable servers would be great!!
A little late to the game, but here are some improvements I'd like to suggest. I ran into these hurdles when I used sparkjava to implement a basic REST service that only had a few endpoints. The overall experience was great and I loved the simplicity of sparkjava.
`Spark.halt` or throwing a `HaltException`. Ideally I could just return a response in the before filter, but the handler returns `void`, so that's not possible. The other downside is that since we are using OAuth 2, according to RFC 6750 we must return a `WWW-Authenticate` response header, but the `halt` and `HaltException` solution doesn't allow me to set a response header. So I had to resort to throwing an exception and using an error handler to catch it. It all worked, but in a codebase where we are trying to avoid side effects and be more functional, it felt dirty. That's about it. Appreciate all the hard work, and if these suggestions sound interesting I think I'd be able to submit some patches if given some direction.
An option to disable automatic gzip compression based on the presence of a `Content-Encoding: gzip` response header would be extremely useful. To wax philosophical for a second, I'm generally opposed to magic in frameworks. This is one of the reasons I gravitated toward Spark in the first place: it's thin, transparent, and almost entirely free of magic, except for this feature, which has no opt-out or clean workaround of any kind. Example use case: I have an endpoint that serves as an authenticated gateway to resources in S3. These resources are gzipped for good reason (they consume less storage and less bandwidth over the wire). If I want to stream these resources, I'm forced to wrap the InputStream in a GZIPInputStream; otherwise Spark will forcibly zip my resource twice when I include the relevant HTTP header.
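The double-compression pitfall and the workaround can be shown with stdlib streams alone: decompressing the stored object on the fly yields plain bytes, so the framework's own gzip pass no longer applies twice (at the cost of decompressing just so it can be recompressed):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

class GzipDemo {
    // Stand-in for the already-gzipped resource stored in S3.
    static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(plain);
        }
        return out.toByteArray();
    }

    // The workaround: wrap the stored stream in a GZIPInputStream so the
    // framework sees plain bytes instead of gzipping gzipped content.
    static byte[] gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }
}
```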
@skedastik
I ran into that same issue TODAY. How did you solve it?
@RyanSusana I posted my (grotesque) workaround on Stack Overflow.
A plugin system like in Javalin would make Spark extensible. Creating plugins for common tasks like GraphQL would be very nice.
A response type transformer as I've written in detail in #1181
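Spark already exposes a `ResponseTransformer` interface for per-route transforms; the stdlib-only sketch below just illustrates the general shape (the naive serializer is invented for illustration, not what #1181 proposes):

```java
// Stdlib stand-in for the transformer idea: a functional interface that
// turns a route's return value into the response body. Spark's real
// ResponseTransformer has the same shape.
interface Transformer {
    String render(Object model);
}

class TransformerDemo {
    // A deliberately naive "serializer": quotes strings, stringifies the rest.
    static final Transformer NAIVE = model ->
        (model instanceof String) ? "\"" + model + "\"" : String.valueOf(model);

    // What the framework would do after a route returns.
    static String respond(Object routeResult, Transformer t) {
        return t.render(routeResult);
    }
}
```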
Will 3.0 be released?
What about HTTP/2 support, per PR #1183? Also, is there some release plan for 3.0?
Already implemented in the Unofficial Build, along with other features. As far as I know, @perwendel is planning to come back and keep going with this project, but meanwhile I'm merging and fixing what I can in that repository.
Hi, a 2.9.0 release will be done shortly, and after that my work will be fully focused on 3.0. Any input on what would be fitting for Spark 3.0 is much appreciated. Please post in this thread. Thanks!