Open YJMLWZ opened 1 year ago

Hi, we use Pistache 20200708 as an HTTP server. When it receives a file of, say, 100MB, we observe the peak memory usage of our app grow rapidly to around 1GB. After checking with valgrind --tool=massif, it seems Pistache copies the file content 8 or 9 times. I want to ask: is this a known issue? Is there any refinement regarding this in a newer version? I see string_view being used, but I'm not sure.
Hi @YJMLWZ, thanks for the report. Would you be able to identify where and how the large memory usage is happening?
8x copies? That is surprising. massif is cool; however, it messes up the threading model. Try heaptrack (I recently started using it, but not yet with a Pistache project): https://github.com/KDE/heaptrack
@YJMLWZ, can you try updating your Pistache to master and let us know if the problem still arises? A lot has changed in the code base since July 2020.
Hi, after upgrading to version 0.0.3.20220425 the situation is much better. I collected a new report and pasted it below. I'm using the Pistache server to receive a file from a client; the file size is around 20MB.

Two suggestions:

1. Release the buffer earlier, right after ParserBase::parse() (http.cc:544) has been invoked? Not sure whether this is possible.
2. Can we avoid the copy of the request that happens in route(Pistache::Http::Request const&, Pistache::Http::ResponseWriter) (router.cc:495)? See the sketch below.

auto req = request;
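To make suggestion 2 concrete, here is a minimal sketch of the copy and of one move-based alternative. Request, ResponseWriter, and the route_* functions are illustrative stand-ins, not Pistache's actual types, and whether the router can actually give up ownership of the request depends on its internals:

```cpp
#include <string>
#include <utility>

// Illustrative stand-ins for Pistache's types (hypothetical).
struct Request { std::string body; };
struct ResponseWriter {};

// Current pattern (router.cc:495): copying the request also
// deep-copies the entire body buffer.
void route_copy(const Request& request, ResponseWriter response) {
    auto req = request; // duplicates a ~20MB body
    (void)req; (void)response;
}

// If the router owns the request, taking it by value and moving it
// onward hands the buffer off instead of duplicating it.
void route_move(Request request, ResponseWriter response) {
    auto req = std::move(request); // steals the buffer, no copy
    (void)req; (void)response;
}

int main() {
    Request r{std::string(20 * 1024 * 1024, 'x')}; // ~20MB body
    route_copy(r, {});            // peak memory: two copies of the body
    route_move(std::move(r), {}); // peak memory: one copy
}
```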
valgrind massif report (ms_print output):

->26.97% (33,554,432B) 0xF4E637: allocate (new_allocator.h:137)
->26.97% (33,554,432B) 0xF4E637: allocate (alloc_traits.h:464)
->26.97% (33,554,432B) 0xF4E637: _M_allocate (stl_vector.h:378)
->26.97% (33,554,432B) 0xF4E637: void std::vector<char, std::allocator
->26.97% (33,554,432B) 0x166C87F: push_back (stl_vector.h:1287)
->26.97% (33,554,432B) 0x166C87F: operator= (stl_iterator.h:735)
->26.97% (33,554,432B) 0x166C87F: __copy_m<char const*, std::back_insert_iterator<std::vector
->26.97% (33,554,432B) 0x166C87F: __copy_move_a2<false, char const*, std::back_insert_iterator<std::vector
->26.97% (33,554,432B) 0x166C87F: __copy_move_a1<false, char const*, std::back_insert_iterator<std::vector
->26.97% (33,554,432B) 0x166C87F: __copy_move_a<false, char const*, std::back_insert_iterator<std::vector
->26.97% (33,554,432B) 0x166C87F: copy<char const*, std::back_insert_iterator<std::vector
->26.97% (33,554,432B) 0x166C87F: feed (stream.h:111)
->26.97% (33,554,432B) 0x166C87F: feed (stream.h:103)
->26.97% (33,554,432B) 0x166C87F: Pistache::Http::Private::ParserBase::feed(char const*, unsigned long) (http.cc:557)
->26.97% (33,554,432B) 0x166CFF3: Pistache::Http::Handler::onInput(char const*, unsigned long, std::shared_ptr
->26.97% (33,554,432B) 0x1691577: Pistache::Tcp::Transport::handleIncoming(std::shared_ptr
->26.97% (33,554,432B) 0x1694BE4: Pistache::Tcp::Transport::onReady(Pistache::Aio::FdSet const&) (transport.cc:103)
->26.97% (33,554,432B) 0x16C4378: Pistache::Aio::SyncImpl::handleFds(std::vector<Pistache::Polling::Event, std::allocator
->26.97% (33,554,432B) 0x16C465E: Pistache::Aio::SyncImpl::runOnce() (reactor.cc:165)
->26.97% (33,554,432B) 0x16C28B9: run (reactor.cc:177)
->26.97% (33,554,432B) 0x16C28B9: operator() (reactor.cc:515)
->26.97% (33,554,432B) 0x16C28B9: __invoke_impl<void, Pistache::Aio::AsyncImpl::Worker::run()::<lambda()> > (invoke.h:61)
->26.97% (33,554,432B) 0x16C28B9: __invoke<Pistache::Aio::AsyncImpl::Worker::run()::<lambda()> > (invoke.h:96)
->26.97% (33,554,432B) 0x16C28B9: _M_invoke<0> (std_thread.h:252)
->26.97% (33,554,432B) 0x16C28B9: operator() (std_thread.h:259)
->26.97% (33,554,432B) 0x16C28B9: std::thread::_State_impl<std::thread::_Invoker<std::tuple<Pistache::Aio::AsyncImpl::Worker::run()::{lambda()
->26.97% (33,554,432B) 0xD1E1B42: ??? (in /usr/lib64/libstdc++.so.6.0.30)
->26.97% (33,554,432B) 0xD4C509B: ??? (in /lib64/libc.so.6)
->26.97% (33,554,432B) 0xD545CA3: clone (in /lib64/libc.so.6)

->15.59% (19,392,168B) 0xD251184: std::__cxx11::basic_string<char, std::char_traits
->15.59% (19,391,142B) 0x1665227: Pistache::Http::Private::BodyStep::parseContentLength(Pistache::StreamCursor&, std::shared_ptr
->15.59% (19,391,142B) 0x1666F5C: Pistache::Http::Private::BodyStep::apply(Pistache::StreamCursor&) (http.cc:393)
->15.59% (19,391,142B) 0x1665641: Pistache::Http::Private::ParserBase::parse() (http.cc:544)
->15.59% (19,391,142B) 0x166D014: Pistache::Http::Handler::onInput(char const*, unsigned long, std::shared_ptr
->15.59% (19,391,142B) 0x1691577: Pistache::Tcp::Transport::handleIncoming(std::shared_ptr
->15.59% (19,391,142B) 0x1694BE4: Pistache::Tcp::Transport::onReady(Pistache::Aio::FdSet const&) (transport.cc:103)
->15.59% (19,391,142B) 0x16C4378: Pistache::Aio::SyncImpl::handleFds(std::vector<Pistache::Polling::Event, std::allocator
->15.59% (19,391,142B) 0x16C465E: Pistache::Aio::SyncImpl::runOnce() (reactor.cc:165)
->15.59% (19,391,142B) 0x16C28B9: run (reactor.cc:177)
->15.59% (19,391,142B) 0x16C28B9: operator() (reactor.cc:515)
->15.59% (19,391,142B) 0x16C28B9: __invoke_impl<void, Pistache::Aio::AsyncImpl::Worker::run()::<lambda()> > (invoke.h:61)
->15.59% (19,391,142B) 0x16C28B9: __invoke<Pistache::Aio::AsyncImpl::Worker::run()::<lambda()> > (invoke.h:96)
->15.59% (19,391,142B) 0x16C28B9: _M_invoke<0> (std_thread.h:252)
->15.59% (19,391,142B) 0x16C28B9: operator() (std_thread.h:259)
->15.59% (19,391,142B) 0x16C28B9: std::thread::_State_impl<std::thread::_Invoker<std::tuple<Pistache::Aio::AsyncImpl::Worker::run()::{lambda()
->15.59% (19,391,142B) 0xD1E1B42: ??? (in /usr/lib64/libstdc++.so.6.0.30)
->15.59% (19,391,142B) 0xD4C509B: ??? (in /lib64/libc.so.6)
->15.59% (19,391,142B) 0xD545CA3: clone (in /lib64/libc.so.6)

->00.00% (1,026B) in 1+ places, all below ms_print's threshold (01.00%)

->15.59% (19,391,296B) 0x16A84FB: void std::__cxx11::basic_string<char, std::char_traits
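Reading the two trees together: the first is the request bytes being appended into the parser's std::vector<char> buffer (feed at stream.h:111), the second is the analogous append into the body string in BodyStep::parseContentLength. Below is a minimal, self-contained model of that first pattern; ParserBuffer and release() are illustrative names, not Pistache's actual types, and release() is just one way suggestion 1 could look:

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Simplified model of the first massif tree: feed() appends each
// incoming chunk to a growing std::vector<char> via std::copy +
// back_inserter, so the whole request body is buffered in memory.
struct ParserBuffer {
    std::vector<char> data;

    void feed(const char* chunk, std::size_t len) {
        std::copy(chunk, chunk + len, std::back_inserter(data));
    }

    // Suggestion 1 amounts to doing this once parse() has consumed
    // the buffer: swap with an empty vector so the capacity is
    // actually returned to the allocator (clear() alone keeps it).
    void release() { std::vector<char>().swap(data); }
};

int main() {
    ParserBuffer buf;
    char chunk[4096] = {};
    for (int i = 0; i < 5120; ++i) // 5120 * 4096 bytes = 20MiB total
        buf.feed(chunk, sizeof(chunk));
    buf.release();
}
```

Note that 33,554,432B is exactly 32MiB, consistent with a doubling std::vector growing past a ~20MB body, so the parser buffer alone accounts for more than one "copy" of the file before the router's copy is even made.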
@kiplingw @Tachi107