I'm using Puma with Nginx in front of it, and HAProxy as a load balancer. Pretty standard setup, and zipline works fine most of the time, but lately I've been seeing "Failed - Network error" in Chrome on some zip files. For some users, it's right around the 1GB mark, but I've been able to reproduce it earlier in the download as well. The only thing I'm seeing in Puma's logs is:
zlib(finalizer): the stream was freed prematurely.
But this doesn't seem to happen on every failed download. Any ideas what would cause this? It's not a timeout, because when this error doesn't happen, it can download huge files over the course of 5-10 minutes (even when testing throttled/packet-losing connections) with no problem.
Is the error possibly in reading the plain files from the cloud?
Unfortunately, there are no errors in the application itself and no logging from zipline, so I'm unable to tell. Is there a way to enable logging?
Turns out it's somewhere behind Nginx, because Nginx is throwing this error:
upstream prematurely closed connection while reading upstream
Possibly a proxy setting of some sort? I don't want to just increase the timeout, since that's a fragile approach.
same with Ruby 3.0.1
Actually, it happens when the archiving process crashes. In my case it was trying to archive two files with the same name.
okay interesting. was anybody else possibly trying to archive two files with the same name?
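For anyone hitting the duplicate-name case, here's a minimal sketch of guarding against it, assuming the [source, name] pair shape that gets passed to #zipline. The uniquify_names helper below is made up for illustration and is not part of the gem:

```ruby
# Sketch only: make entry names unique before streaming, since two entries
# with the same name inside one archive were reported above to crash the stream.
def uniquify_names(pairs)
  seen = Hash.new(0)
  pairs.map do |source, name|
    count = (seen[name] += 1)
    next [source, name] if count == 1
    # "report.pdf" becomes "report-2.pdf", "report-3.pdf", ...
    [source, name.sub(/(\.\w+)?\z/) { |ext| "-#{count}#{ext}" }]
  end
end

# e.g. zipline(uniquify_names(files), 'archive.zip')
```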
I'm suddenly getting this error after upgrading my application to Rails 6.1.7.6 and Ruby 3.0.6.
It was working fine on Rails 6.0.6.1 and Ruby 2.7.8. I rolled back to those versions and it started working again. The Puma version was the same in both (5.6.7), and Nginx is also the same: nginx/1.14.0 (Ubuntu).
I was using zipline 1.3.2; I tried upgrading to 1.5.0 but I'm still having the same problem.
From the user's perspective, the download terminates immediately and the browser's Download list shows "Failed - network error" or "Check internet connection." I've tested in multiple browsers, same result.
The application log does not show any errors (it actually shows Completed 200 OK), but the Nginx log shows:
upstream prematurely closed connection while reading upstream
This does not appear to be related to the size of the zip file; I see the same behavior even when the zip is < 500K. The application is able to download single large files (>10MB) without issue, so the size of the file doesn't appear to be the problem.
The files being zipped do not have the same names.
@fringd any ideas?
@TrevorHinesley did you ever get this solved?
Unfortunately not. Still having this issue.
Rails updates things on minor releases. It could be one of the many removed things from the 6.1 release notes... reading through them now: https://guides.rubyonrails.org/6_1_release_notes.html
Interestingly, I was able to resolve the error by changing the way I was constructing the array of file URLs and filenames that is passed as the first argument to the #zipline function.
Previously, I was calling open-uri's #open method on each URL first. Removing that call and just adding the URL as a string works.
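For concreteness, here's a rough before/after of that change; attachments, #url, and #filename are placeholder names for this sketch, not anything zipline provides:

```ruby
require 'open-uri'

# Inside a controller action that includes Zipline:

# Before (the construction that triggered the error): eagerly opening each URL
files = attachments.map { |a| [URI.open(a.url), a.filename] }

# After: hand zipline the URL as a plain string and let it do the fetching
files = attachments.map { |a| [a.url, a.filename] }

zipline(files, 'archive.zip')
```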
However, in the process of debugging the error, it does seem that ActionController::Live and Zipline don't play well together, at least in Rails 6.1.7.4. When my controller has both of these modules included, a zipped download request from the browser just hangs and never completes.
Unfortunately I wasn't able to debug what's causing the problem when both of the modules are included in the same controller.
I am also having this issue after upgrading Rails. No duplicate file names either. I'm using it with Paperclip.
@fringd what could be an option is adding a rescue to the ZipGenerator so that at the very least something gets printed to the Rails log?
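One way to get at least a log line without patching the gem - a sketch only, assuming the controller ends up with a Rack-style streaming body on response_body (LoggedStreamBody is made up for this example):

```ruby
# Sketch, not zipline's API: wrap the streaming response body so an exception
# raised while the zip is being generated gets logged before the connection drops.
class LoggedStreamBody
  def initialize(body)
    @body = body
  end

  # Rack iterates the body with #each after the action has returned, which is
  # why a rescue inside the action itself never sees these errors.
  def each(&block)
    @body.each(&block)
  rescue => e
    Rails.logger.error("zip streaming failed: #{e.class}: #{e.message}")
    raise
  end

  def close
    @body.close if @body.respond_to?(:close)
  end
end

# One possible wiring inside an action (untested):
#   zipline(files, 'archive.zip')
#   self.response_body = LoggedStreamBody.new(response_body)
```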
@ebenenglish This is tricky
Unfortunately I wasn't able to debug what's causing the problem when both of the modules are included in the same controller.
zipline uses a different way to stream - it streams from your "main" thread (which is a simpler - and easier - Rack-native way of streaming responses). You don't need Live for streaming to work - as a matter of fact, it will make your life more difficult, because it will try to offload the serving into a separate thread, with its own Current values, its own database connection, and so on. For what zipline does, using Live should not be necessary.
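For illustration, here's the general shape of that Rack-native style (nothing zipline-specific): the response body only has to respond to #each, and the server pulls the chunks on the request thread.

```ruby
# Minimal streaming sketch: no ActionController::Live involved.
class CountingBody
  def each
    5.times { |i| yield "chunk #{i}\n" }
  end
end

class DownloadsController < ApplicationController
  def show
    response.headers['Content-Type'] = 'text/plain'
    # Commonly set so middleware such as Rack::ETag doesn't try to buffer the
    # whole body to compute a digest, which would defeat the streaming.
    response.headers['Last-Modified'] = Time.now.httpdate
    self.response_body = CountingBody.new
  end
end
```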