dfleury2 / beauty

A simple C++ HTTP server/client on top of Boost.Beast
MIT License

Add support for big files: use file_body instead of string_body and a serializer for bigger files #17

Closed: namespacedigital closed this issue 1 year ago

dfleury2 commented 1 year ago

should be done on the client side only, right?

So, it can write the body response directly into the specified file without consuming too much memory?
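For illustration, a minimal sketch with plain Beast (not Beauty's API; host, target, and file names are placeholders) of what writing the response body directly into a file looks like on the client side:

```cpp
// Minimal sketch with plain Beast (not Beauty's API; host, port and paths
// are placeholders): stream a response body straight into a file so it is
// never held fully in memory on the client.
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <cstdint>
#include <iostream>
#include <limits>

namespace beast = boost::beast;
namespace http  = beast::http;
namespace asio  = boost::asio;

int main()
{
    asio::io_context ioc;
    asio::ip::tcp::resolver resolver{ioc};
    beast::tcp_stream stream{ioc};

    stream.connect(resolver.resolve("example.com", "80"));

    http::request<http::empty_body> req{http::verb::get, "/big-file.bin", 11};
    req.set(http::field::host, "example.com");
    http::write(stream, req);

    // A file_body parser writes body bytes to disk as they are parsed,
    // instead of accumulating them in a std::string.
    http::response_parser<http::file_body> parser;
    parser.body_limit((std::numeric_limits<std::uint64_t>::max)());

    beast::error_code ec;
    parser.get().body().open("big-file.bin", beast::file_mode::write, ec);
    if(ec) { std::cerr << "open: " << ec.message() << "\n"; return 1; }

    beast::flat_buffer buffer;
    http::read(stream, buffer, parser, ec);
    if(ec) { std::cerr << "read: " << ec.message() << "\n"; return 1; }

    stream.socket().shutdown(asio::ip::tcp::socket::shutdown_both, ec);
}
```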

namespacedigital commented 1 year ago

should be done on the client side only, right?

So, it can write the body response directly into the specified file without consuming too much memory?

It should be done on the server side; the client would then receive small chunks of data instead of the whole file. Adding also beast::http::response<beast::http::file_body>::response; ... for the server file response should fix the memory issue.
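For reference, a minimal sketch of that server-side pattern with plain Beast (path, content type, and stream are placeholders; this follows the Beast examples rather than Beauty's own response type):

```cpp
// Minimal sketch of a file_body response with plain Beast (the path,
// content type and stream are placeholders; this follows the pattern from
// the Beast examples rather than Beauty's own response type).
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <tuple>
#include <utility>

namespace beast = boost::beast;
namespace http  = beast::http;

template<class SyncStream>
void send_file(SyncStream& stream, unsigned version, bool keep_alive)
{
    beast::error_code ec;

    // Opening the file does not load it into memory.
    http::file_body::value_type body;
    body.open("index.html", beast::file_mode::scan, ec);
    if(ec)
        return; // a real handler would answer with 404/500 here

    auto const size = body.size();

    http::response<http::file_body> res{
        std::piecewise_construct,
        std::make_tuple(std::move(body)),
        std::make_tuple(http::status::ok, version)};
    res.set(http::field::content_type, "text/html");
    res.content_length(size);
    res.keep_alive(keep_alive);

    // The file content is pulled in small blocks by file_body's writer
    // while the response is being sent.
    http::write(stream, res, ec);
}
```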

dfleury2 commented 1 year ago

The "download by chunk" could be done without changing the server. It will be a pain a make it clean and well configurable.

Do you have a use case, because I am not sure why this is the responsability of the server (and not the data storage for example). You can easily implement the chunk download in the server (and make your own choices to configure it), but it depends where the "lot of data" came from.

namespacedigital commented 1 year ago

My use case: say I have an API server that serves JSON responses. There, string_body from Beast is fine because there is no file and the response is generated dynamically. On the other hand, if I also want to serve the content of a file to the UI, let's say index.html, then with the current implementation of the server you need to read the whole file into memory with fstream and send it all at once with async_write. You can imagine that with a 1 MB file, several JavaScript files loaded from index.html, and 1000 connected clients, the memory usage will be very high, and you also have to wait for the whole file to be read into memory instead of serving it right away.

I don't mean download by chunk, but rather using file_body on the server when serving files: it reads the content into 4096-byte buffers and calls async_write for each chunk until all the content has been served.
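To make the mechanism concrete, here is a minimal sketch with plain Beast (synchronous, stream and response as placeholders) of driving a file_body response through a serializer, which is what the issue title's "use serializer" refers to; the small fixed-size buffering happens inside file_body's writer:

```cpp
// Minimal sketch (plain Beast, synchronous; stream and response are
// placeholders): drive a file_body response through a serializer so the
// file is sent to the socket in bounded chunks.
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>

namespace beast = boost::beast;
namespace http  = beast::http;

template<class SyncStream>
void write_in_chunks(SyncStream& stream, http::response<http::file_body>& res)
{
    beast::error_code ec;
    http::response_serializer<http::file_body> sr{res};

    // Each write_some call emits at most one buffer of data; file_body's
    // writer refills that buffer from the file in small fixed-size reads,
    // so peak memory stays bounded regardless of the file size.
    while(!sr.is_done())
    {
        http::write_some(stream, sr, ec);
        if(ec)
            break;
    }
}
```

http::write and http::async_write run the same loop internally, so in practice a single write (or async_write) of a file_body response already sends the file chunk by chunk.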

I modified the code of response.hpp a little bit to use file_body instead of string_body, and commented out the logic for exception handling and Swagger because they expect a string (which is correct, since they need to be string_body). And, for example, when the client is using the beauty library, instead of reading the file with fstream and doing res.body() = content, you only call res.body().open(filepath,..).

Some examples: the chat server from the Beast examples, https://www.boost.org/doc/libs/1_81_0/libs/beast/example/websocket/server/chat-multi/http_session.cpp

And

https://www.boost.org/doc/libs/1_81_0/libs/beast/example/http/server/fast/http_server_fast.cpp

Another use case could be streaming content.

I can open a pull request with what I made locally; it will be clearer that way.

dfleury2 commented 1 year ago

OK, understood.

It seems that I do not use Beauty (server) this way; I only use the dynamic part. Most of the time, I am using a real web server (like nginx) for static pages (files), and I forward some of the HTTP requests (the dynamic ones) to Beauty. For handling many clients (on the static side), I am using a cache like Varnish. A somewhat complex stack, indeed.

Yes, you can create the PR; I will check it and see how I can use it. My concern is about the breaking changes it could create, and how to configure it easily in all cases (static or dynamic).

Thanks for the feedback.