Closed: clement128 closed this issue 3 years ago.
@clement128 could you try using `ConfigureAwait(false)` across the board?
Hi @kblok, I am running on ASP.NET Core; based on my understanding I don't need to use `ConfigureAwait(false)` since there is no SynchronizationContext by default? Correct me if I am wrong.
That's true @clement128.
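(For context, "across the board" just means appending `ConfigureAwait(false)` to every awaited library call, roughly as in the sketch below; it only matters when a SynchronizationContext exists, which ASP.NET Core does not have. `options` and `url` are placeholders.)

```csharp
// Illustrative only: ConfigureAwait(false) on every await of the library's tasks.
using (var browser = await Puppeteer.LaunchAsync(options).ConfigureAwait(false))
using (var page = await browser.NewPageAsync().ConfigureAwait(false))
{
    await page.GoToAsync(url).ConfigureAwait(false);
}
```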
@kblok any updates on this issue?
Sorry @clement128 @pmdevers I will take a look tomorrow.
@clement128 @pmdevers could you help me break this project https://github.com/kblok/PuppeteerSharpOnAspNetCoreDemo?
That project is working for me.
The zombie-process issue was resolved after switching to .NET Core 3.1; the issue occurred on .NET Core 2.2.
See the project used: https://github.com/pmdevers/PuppteerSharp-Docker
Hi @kblok, are you running it on Docker?
@pmdevers hmm interesting, I was running on .NET Core 3.1 as well, with Orleans.
Upgraded the project with the issue to 3.1 and still having zombie processes. It looks like the project is locking the chrome processes.
https://github.com/puppeteer/puppeteer/issues/1825
I have found that the following arguments resolved the zombie processes:
`Args = new[] { "--headless", "--no-sandbox", "--disable-gpu", "--single-process", "--no-zygote" }`
I also added this: https://github.com/Yelp/dumb-init
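(A minimal sketch of how those args plug into LaunchOptions; the executable path is an assumption about where Chrome lives in the image.)

```csharp
// Sketch only; ExecutablePath is an assumption, adjust to your image.
var options = new LaunchOptions
{
    Headless = true,
    ExecutablePath = "/usr/bin/google-chrome",
    Args = new[] { "--headless", "--no-sandbox", "--disable-gpu", "--single-process", "--no-zygote" }
};

using (var browser = await Puppeteer.LaunchAsync(options))
using (var page = await browser.NewPageAsync())
{
    // ... generate the PDF / screenshot ...
}
```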
Just a note to say, before I attempt the workarounds people have listed, that this is definitely still an issue "out of the box" with PuppeteerSharp 2.0.4 running in a .NET Core Web API server on Docker / Alpine Linux.
I have a report-printing setup, and each time I run a report 3 new zombie chrome processes are added and never go away; they build up over and over even with proper disposal inside a using() block. It doesn't appear to be an issue on Windows at all.
Any news on this? I'm also seeing 3-4 processes hanging on Linux
i seemingly got this working by ditching alpine and using the args mentioned above. i'm not sure how to integrate the dumb-init bit so i'm not doing that currently.
i'm also doing a flavor of this: https://github.com/hardkoded/puppeteer-sharp/issues/996#issuecomment-471521462 except instead i'm grabbing the process early and holding on to it, then doing a refresh before checking if i need to kill it. doing it that way because the pid lookup throws an exception if it's already been terminated on linux.
still testing things, but fingers crossed that things keep behaving from here...
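(Roughly, that workaround looks like the sketch below. This is my paraphrase, not the exact code; it assumes browser.Process exposes the underlying System.Diagnostics.Process, and `options` is a placeholder.)

```csharp
// Sketch of the "hold the process, refresh, then kill if needed" workaround.
using System.Diagnostics;

Process chromeProcess = null;

using (var browser = await Puppeteer.LaunchAsync(options))
using (var page = await browser.NewPageAsync())
{
    chromeProcess = browser.Process; // grab it early and hold on to it
    // ... render the page / generate the PDF ...
}

// After disposal, verify the process really went away.
if (chromeProcess != null)
{
    chromeProcess.Refresh(); // refresh cached state before checking HasExited
    if (!chromeProcess.HasExited)
    {
        // Killing via the held Process object avoids Process.GetProcessById,
        // which throws on Linux once the pid is already gone.
        chromeProcess.Kill();
    }
}
```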
@pmdevers I thought I was in the clear but still having the issue. Did you solve for this? I looked at your pdf repo but don't see you using dumb init in there. Was trying to figure out how to integrate that and not coming up with something that felt like it made sense
@mrtristan no, after some investigation i did not need dumb-init. this is the snippet of code:
```csharp
_browserFetcher = new BrowserFetcher();
_options = new LaunchOptions
{
    Headless = true,
    ExecutablePath = "/usr/bin/google-chrome",
    Args = new[]
    {
        "--headless",
        "--no-sandbox",
        "--disable-gpu",
        "--single-process",
        "--no-zygote"
    }
};

await _browserFetcher.DownloadAsync(BrowserFetcher.DefaultRevision);

using (var browser = await Puppeteer.LaunchAsync(_options))
using (var page = await browser.NewPageAsync())
{
    await page.SetRequestInterceptionAsync(true);

    // Forward the user's token on every intercepted request.
    page.Request += async (sender, e) =>
    {
        var header = e.Request.Headers;
        header.Add("Authorization", _userService.GetToken());
        var payload = new Payload()
        {
            Url = e.Request.Url,
            Method = e.Request.Method,
            Headers = header,
        };
        await e.Request.ContinueAsync(payload);
    };

    await page.GoToAsync($"{url}/{query.DossierId}/{query.Version}",
        new NavigationOptions { WaitUntil = new[] { WaitUntilNavigation.Networkidle0 } });

    var data = await page.PdfDataAsync(new PdfOptions { Format = PaperFormat.A4 });
}
```
We use mcr.microsoft.com/dotnet/core/aspnet:3.1 as the base image.
This is what i used in the Dockerfile to install Google Chrome:
```dockerfile
RUN apt-get update && apt-get install -y \
        apt-transport-https \
        curl \
        gnupg \
        --no-install-recommends \
    && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list \
    && apt-get update && apt-get install -y \
        google-chrome-stable \
        --no-install-recommends \
    && apt-get purge --auto-remove -y curl \
    && rm -rf /var/lib/apt/lists/*
```
And because we use Kubernetes, we use the following resource limits:
```yaml
resources:
  limits:
    memory: "1.5Gi"
    cpu: "500m"
```
This has been running in production for over 6 months without any issues.
@pmdevers interesting. thanks for the quick response. i seem very closely aligned to your approach but still choking. will dig in further. i'm running things on AWS ECS fargate with mostly same resource limits, just an even 1G on memory. i wonder if the environmental differences are having some bearing on things.
one thing worth calling out is that it seems like you're preinstalling chrome but you're still calling `await _browserFetcher.DownloadAsync(BrowserFetcher.DefaultRevision);` which is likely duplicative. i wonder if that's relevant to things working for you in some way.
@mrtristan the browserfetcher is for local development, as we set the ExecutablePath to `_browserFetcher.GetExecutablePath(BrowserFetcher.DefaultRevision)` and remove the `--single-process` argument if in debug mode. i believe the browserfetcher only downloads if the executable path does not exist.
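(A sketch of that environment switch as I read it, not pmdevers' exact code; `isLocalDevelopment` is a placeholder for however the environment is detected.)

```csharp
// Sketch: downloaded Chromium for local development, system Chrome in the container.
using System.Collections.Generic;

var browserFetcher = new BrowserFetcher();
var args = new List<string> { "--headless", "--no-sandbox", "--disable-gpu", "--no-zygote" };
string executablePath;

if (isLocalDevelopment)
{
    // DownloadAsync is effectively a no-op when the revision is already on disk.
    await browserFetcher.DownloadAsync(BrowserFetcher.DefaultRevision);
    executablePath = browserFetcher.GetExecutablePath(BrowserFetcher.DefaultRevision);
}
else
{
    executablePath = "/usr/bin/google-chrome";
    args.Add("--single-process"); // only in the container, not while debugging
}

var options = new LaunchOptions
{
    Headless = true,
    ExecutablePath = executablePath,
    Args = args.ToArray()
};
```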
@pmdevers gotcha.
I think i'm down to my issue being environmental. I'm able to debug and run it locally on windows, it also functions fine in my local (linux) docker container. once i push to amazon and try and run it up there in Fargate is when i seemingly randomly get it to work or not, with the scales tipped towards usually not. extremely frustrating.
seems like it's largely due to running in a constrained environment. i have to figure out how to impose those limitations locally to see if i can break things i guess.
@pmdevers if you're able, can you see if that still works on a fresh docker build? i was fine until today when a new chrome version hit the stable channel (appreciate your help getting to this point). getting an immediate failure now. curious if it's me.
i pinned the installed version to 88.0.4324.182-1 (the last stable release from earlier last month) and now it's back to working.
i've also tracked the new version's issue down to the single-process flag.
```
Unhandled exception. PuppeteerSharp.ProcessException: Failed to create connection
---> System.Net.WebSockets.WebSocketException (0x80004005): Unable to connect to the remote server
---> System.Net.Http.HttpRequestException: Connection refused
---> System.Net.Sockets.SocketException (111): Connection refused
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean allowHttp2, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttpConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.WebSockets.WebSocketHandle.ConnectAsyncCore(Uri uri, CancellationToken cancellationToken, ClientWebSocketOptions options)
at System.Net.WebSockets.WebSocketHandle.ConnectAsyncCore(Uri uri, CancellationToken cancellationToken, ClientWebSocketOptions options)
at System.Net.WebSockets.ClientWebSocket.ConnectAsyncCore(Uri uri, CancellationToken cancellationToken)
at PuppeteerSharp.Transport.WebSocketTransport.CreateDefaultWebSocket(Uri url, IConnectionOptions options, CancellationToken cancellationToken)
at PuppeteerSharp.Transport.WebSocketTransport.CreateDefaultTransport(Uri url, IConnectionOptions connectionOptions, CancellationToken cancellationToken)
at PuppeteerSharp.Connection.Create(String url, IConnectionOptions connectionOptions, ILoggerFactory loggerFactory, CancellationToken cancellationToken)
at PuppeteerSharp.Launcher.LaunchAsync(LaunchOptions options, Product product)
--- End of inner exception stack trace ---
at PuppeteerSharp.Launcher.LaunchAsync(LaunchOptions options, Product product)
at PuppeteerSharp.Launcher.LaunchAsync(LaunchOptions options, Product product)
at xxx.PuppeteerServiceTemp.ExecuteWithPageAsync[T](Func`2 callback, Nullable`1 timeoutMilliseconds) in /app/PuppeteerServiceTemp.cs:line 26
at xxx.Program.Main(String[] args) in /app/Program.cs:line 128
```
Closed due to inactivity. Feel free to reopen it if needed.
This case still exists in the new version, puppeteer 8.
Before you file a bug, have you:
Tried upgrading to the newest version of PuppeteerSharp, to see if your issue has already been resolved and released? Yes, I am using the latest version.
Checked existing open and closed issues, to see if the issue has already been reported? Yes, I searched quite a lot and also tried to find a solution in the original puppeteer repo.
Tried reproducing your problem in a new isolated project? Not yet; I might try next time, but since it's a production issue I need to fix it quickly.
Considered if this is a general question and not a bug? For general questions please use StackOverflow. Not sure, but after I changed to use `Dispose()` these chrome processes seem to close correctly.

Description
I am running puppeteer-sharp with Docker, and I found there are quite a lot of zombie chrome processes that never get killed. I tried to use `tini` as the entry point (tips from here) with no luck. I checked the logs and found that `DisposeAsync` sometimes never completes. So I changed to use `Dispose` instead, and it looks good so far.

Complete minimal example reproducing the issue
There is no complete code right now; I might update it later. I simply call `DisposeAsync`. E.g.
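(Roughly the pattern described, as a sketch; this is not the reporter's actual code, and the URL and the page work are placeholders.)

```csharp
// The problematic path described in the report: DisposeAsync sometimes never completes.
var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
var page = await browser.NewPageAsync();
await page.GoToAsync("https://example.com");   // placeholder URL
var pdf = await page.PdfDataAsync();
await browser.DisposeAsync();                  // observed to hang occasionally

// The workaround the report settled on: a plain using block, so the synchronous
// Dispose() runs instead of DisposeAsync().
using (var browser2 = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true }))
using (var page2 = await browser2.NewPageAsync())
{
    await page2.GoToAsync("https://example.com");
    var pdf2 = await page2.PdfDataAsync();
}
```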
Expected behavior:
Browser disposes correctly, and there are no zombie processes.
Actual behavior:
I found some zombie processes that never get killed.
Versions
Which version of PuppeteerSharp are you using?
Version: 2.0.3
Which .NET runtime and version are you targeting? E.g. .NET framework 4.6.1 or .NET Core 2.0.
netcoreapp3.1, ASP.NET Core MVC
Additional Information
Running puppeteer-sharp in Docker; the Dockerfile is similar to the one you provided, I just added `tini` as the entry point. (BTW, I think the docker example should also add this and run with the arg `--disable-dev-shm-usage`.)