scrapinghub / splash

Lightweight, scriptable browser as a service with an HTTP API
BSD 3-Clause "New" or "Revised" License

Can't release memory in splash 3.0 #676

Closed lt82654993 closed 4 years ago

lt82654993 commented 7 years ago

I also can't release memory in Splash 3.0. Running `curl -X POST http://localhost:8050/_gc` collects part of the garbage, but memory stays allocated and keeps growing with subsequent requests, until the process is eventually force-stopped. How can I solve this? Thanks very much.
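For reference, the workaround the OP describes can be automated. A minimal sketch in Python, assuming Splash listens on `localhost:8050`; the helper name and interval are illustrative, not part of Splash itself:

```python
import time

import requests

SPLASH_GC_URL = "http://localhost:8050/_gc"  # assumed local Splash instance


def run_gc_periodically(interval_seconds=300):
    """Ask Splash to run its garbage collector every few minutes.

    As noted above, this only frees part of the memory; RSS still
    grows slowly across requests, so it is a mitigation, not a fix.
    """
    while True:
        resp = requests.post(SPLASH_GC_URL)
        resp.raise_for_status()
        print(resp.json())  # Splash returns a small JSON report on the collection
        time.sleep(interval_seconds)


if __name__ == "__main__":
    run_gc_periodically()
```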

toshunster commented 6 years ago

@lt82654993 There are the Docker `--restart=always` and Splash `--maxrss` options:

`docker run -p 8050:8050 -p 5023:5023 --restart=always scrapinghub/splash --max-timeout 90 --slots 3 --maxrss 300`

lt82654993 commented 6 years ago

What does `slots` mean? So far I haven't been able to find it documented. And does the `maxrss` parameter mean that one process will consume 300 KB?

toshunster commented 6 years ago

@lt82654993 `slots` is explained in the Splash docs:

> Splash renders requests in parallel, but it doesn't render them all at the same time - concurrency is limited to a value set at startup using the --slots option. When all slots are used, a request is put into a queue. The thing is that a timeout starts to tick once Splash receives a request, not when Splash starts to render it. If a request stays in an internal queue for a long time, it can time out even if the website is fast and Splash is capable of rendering it.

`maxrss` means 300 MB for one process.
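To make the slots/timeout interaction concrete, here is a client-side sketch in Python. The `/render.html` endpoint and its `url`, `timeout`, and `wait` parameters are part of the Splash HTTP API; the helper itself and the base URL are assumptions for illustration:

```python
import requests

SPLASH_URL = "http://localhost:8050"  # assumed instance started with the command above


def render_html(url, timeout=90, wait=0.5):
    """Fetch a rendered page via Splash's /render.html endpoint.

    The `timeout` argument must not exceed the server's --max-timeout
    (90 in the docker run command above). Because the timeout clock
    starts when Splash *receives* the request, a 504 can mean the
    request spent too long waiting for a free slot, not that the
    target site itself was slow.
    """
    resp = requests.get(
        f"{SPLASH_URL}/render.html",
        params={"url": url, "timeout": timeout, "wait": wait},
    )
    resp.raise_for_status()
    return resp.text
```

With `--slots 3` and `--max-timeout 90` as above, more than three concurrent calls will queue, and time spent in the queue counts against the 90-second limit.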

lt82654993 commented 6 years ago

Thanks a lot !

syncml commented 6 years ago

@lt82654993

Hi, what was the result?

iamsoorena commented 6 years ago

this is serious.

Gallaecio commented 4 years ago

See #674