Meteor-Community-Packages / Meteor-CollectionFS

Reactive file manager for Meteor
MIT License

ERR_CONTENT_LENGTH_MISMATCH on specific images #690

Open adamgins opened 9 years ago

adamgins commented 9 years ago

Hi, I am using CollectionFS on AWS with nginx as a reverse proxy (using Docker images).

For some images I am getting ERR_CONTENT_LENGTH_MISMATCH when they are referenced in an <img> tag. If I view the file directly via the CFS URL it shows up fine in the browser.

I even experienced the image showing up and then disappearing a split second later.

The browser (Chrome) is showing Failed to load resource: ... long CFS url ... net::ERR_CONTENT_LENGTH_MISMATCH, followed by Failed to load resource: ... long CFS url ... net::ERR_TIMED_OUT

On Safari I see Failed to load resource: The network connection was lost.

In the server log I see:

May 27 11:22:18 BuzzyDockerTestEnv1-buzzytest1 nginx-proxy:  nginx.1    | 2015/05/27 01:22:18 [error] 24#0: *60 upstream prematurely closed connection while reading upstream, client: <client ip>, server: test.buzzy.buzz, request: "GET <path to CFS image file> HTTP/1.1", upstream: "<long url>", host: "test.buzzy.buzz" 

I do see some similar issues: https://github.com/CollectionFS/Meteor-CollectionFS/issues/379 ,
https://github.com/CollectionFS/Meteor-CollectionFS/issues/312 and a few others too. A fix appears to have been posted for those, but I am still experiencing the problem.

Any help appreciated

UPDATE/Additional info: the above error is on Chrome. On Safari I see: 'Failed to load resource: The network connection was lost.'

This matches the 113 error I see in the nginx log, which seems to say the upstream cannot be reached over the network... which is odd, because the image still shows in the browser. As above, it generally shows the image for a split second and then it disappears, so it seems like the browser is making two requests... but I am not sure.

If I look at the headers in Chrome I see the following (screenshot attached).

adamgins commented 9 years ago

Hi @aldeed, wondering if any of the above issues (which you closed) could help me solve this specific issue... or perhaps some nginx setting.

I am trying settings like:

keepalive_timeout 75s;
proxy_connect_timeout 60s;
proxy_read_timeout 60s;

No luck yet.

adamgins commented 9 years ago

Some additional information:

When I save an image I save the original, a large version and a thumb (using GraphicsMagick (GM)). For one of the examples this problem only seemed to occur on the "thumb" image... i.e. I could open the large image in the browser with no errors. However, when I hit the thumb version it came up, and then a short while later (a second or so) the ERR_CONTENT_LENGTH_MISMATCH showed up in the console.

Could there be some sort of corruption of the images GM is creating?

adamgins commented 9 years ago

@raix @aldeed sorry for the direct call out, but just wondering if you have any words of wisdom on how I could debug this issue, please?

Is there any reason CollectionFS would shut down a connection to a file prematurely? It could be an nginx proxy config issue, but from what I have read it seems to be on the Meteor (CollectionFS) side, i.e. nginx is having the upstream connection cut off on it. I am just struggling to work out where things are going wrong.

For an example, please see this image: https://test.buzzy.buzz/files/files/images/rSpmikbERHQEam4s6/large_image.jpg?token=eyJhdXRoVG9rZW4iOiJwaGNhSzVUWnpwT3l1YlV3dTM1UERSeFQ2aU9EUm5DWTN4a1ppR0txZFN3In0%3D&store=largeImages

If you look at this link in the Chrome browser you'll see the following in the console (screenshot attached).

raix commented 9 years ago

might be related to https://github.com/CollectionFS/Meteor-CollectionFS/issues/495

adamgins commented 9 years ago

Thanks @raix. I don't think it's related to the Chrome bug mentioned at the end, as it happens consistently on both Safari and Chrome, and only for some images.

I see the workaround mentioned there is to set download=true. Would this apply to <img> tags, i.e. something like <img src="{{this.url store="largeImages" download=true}}" alt=""/>? Would that make a difference?

Update: adding download=true had no impact... the error still persists on certain images/files.

adamgins commented 9 years ago

BTW, here's the log with CFS debug on:

Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  token: <some token>= 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 nginx-proxy:  nginx.1    | 122.106.240.136 - - [08/Jun/2015:12:41:04 +0000] "GET /files/files/images/rSpmikbERHQEam4s6/large_image.jpg?token=<some token>n0%3D&store=largeImages HTTP/1.1" 416 48 "https://github.com/CollectionFS/Meteor-CollectionFS/issues/690" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36" "-" 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  GET FILERECORD: rSpmikbERHQEam4s6 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  token: <some token> 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  GET FILERECORD: rSpmikbERHQEam4s6 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  createReadStreamForFileKey largeImages 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  createReadStream largeImages 
Jun 08 22:41:04 BuzzyDockerTestEnv1-buzzytest1 buzzytest:  Read file "large_image.jpg" bytes 0-212885/212886 
Jun 08 22:41:09 BuzzyDockerTestEnv1-buzzytest1 nginx-proxy:  nginx.1    | 2015/06/08 12:41:09 [error] 27#0: *38 upstream prematurely closed connection while reading upstream, client: 122.106.240.136, server: test.buzzy.buzz, request: "GET /files/files/images/rSpmikbERHQEam4s6/large_image.jpg?token=eyJhdXRoVG9rZW4iOiJwaGNhSzVUWnpwT3l1YlV3dTM1UERSeFQ2aU9EUm5DWTN4a1ppR0txZFN3In0%3D&store=largeImages HTTP/1.1", upstream: "http://172.17.0.162:80/files/files/images/rSpmikbERHQEam4s6/large_image.jpg?token=<some token>n0%3D&store=largeImages", host: "test.buzzy.buzz", referrer: "https://github.com/CollectionFS/Meteor-CollectionFS/issues/690" 
Jun 08 22:41:09 BuzzyDockerTestEnv1-buzzytest1 nginx-proxy:  nginx.1    | 122.106.240.136 - - [08/Jun/2015:12:41:09 +0000] "GET /files/files/images/rSpmikbERHQEam4s6/large_image.jpg?token=eyJhdXRoVG9rZW4iOiJwaGNhSzVUWnpwT3l1YlV3dTM1UERSeFQ2aU9EUm5DWTN4a1ppR0txZFN3In0%3D&store=largeImages HTTP/1.1" 200 83767 "https://github.com/CollectionFS/Meteor-CollectionFS/issues/690" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36" "-" 
adamgins commented 9 years ago

@raix here are a couple of screenshots. The first is when things are OK... and the second is after it has failed a few seconds later. Note the initiator changes from "other" to "document" (I am not sure what this means):

Shot #1 (all OK, no error): (screenshot attached)

Shot #2 (with the failure, a few seconds later): (screenshot attached)

adamgins commented 9 years ago

@raix @aldeed any other ideas on this issue, please? Sorry, I'm starting to sound desperate ;-). Could something in the build from approximately 20 days ago have introduced this issue? Is there an easy way for me to install the previous version?

adamgins commented 9 years ago

Some additional info: if I hit the image directly without a token, e.g. https://test.buzzy.buzz/files/files/images/7ssGJT7XkWkpeYhFo/thumb_IMG_1676.jpg&store=thumbs I cannot see any errors in the browser console.

The image renders larger in the browser (screenshot attached).

But if I hit it with a token (i.e. the one generated by CollectionFS) the error shows up, see https://test.buzzy.buzz/files/files/images/7ssGJT7XkWkpeYhFo/thumb_IMG_1676.jpg?token=eyJhdXRoVG9rZW4iOiJwaGNhSzVUWnpwT3l1YlV3dTM1UERSeFQ2aU9EUm5DWTN4a1ppR0txZFN3In0%3D&store=thumbs (screenshots of the console error and response headers attached).

The image renders smaller in the browser (screenshot attached).

Perhaps these are not the same images and the larger one is the default image?

Could there be an issue with GraphicsMagick and the transform? Here's an example of the store definition for thumbs:

new FS.Store.S3("thumbs", {
    // Assumes gm is available, e.g. via the cfs:graphicsmagick package
    //region: "s3-us-west-2.amazonaws.com", // optional in most cases
    accessKeyId: "<access key>",      // required if environment variables are not set
    secretAccessKey: "<secret>",      // required if environment variables are not set
    bucket: "<bucketname>",           // required
    ACL: "public-read-write",         // optional, default is 'private'; allows public or secure access routed through your app URL
    beforeWrite: function (fileObj) {
        // Prefix the stored name so thumbnails are easy to identify
        return {
            name: 'thumb_' + fileObj.name()
        };
    },
    transformWrite: function (fileObj, readStream, writeStream) {
        try {
            console.log("creating thumbnail: " + fileObj.name());
            // Resize to fit within 400x400, fix EXIF orientation, and stream the result into the store
            gm(readStream, fileObj.name()).resize('400', '400').autoOrient().stream().pipe(writeStream);
        } catch (err) {
            // Note: errors emitted asynchronously by the streams are not caught by this try/catch
            throw err;
        }
    }
}),
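
For debugging, here is a hedged variant of the transformWrite above (a sketch, not a fix): it counts the bytes GM actually emits so the transformed size can be compared with whatever size ends up in the file record and the Content-Length header. fileObj.size() is assumed here to return the recorded size.

    transformWrite: function (fileObj, readStream, writeStream) {
        var transformedBytes = 0;
        var gmStream = gm(readStream, fileObj.name())
            .resize('400', '400')
            .autoOrient()
            .stream();
        // Count what GM actually produces before it reaches the store
        gmStream.on('data', function (chunk) { transformedBytes += chunk.length; });
        gmStream.on('end', function () {
            console.log(fileObj.name() + ': transformed bytes = ' + transformedBytes +
                ', recorded size = ' + fileObj.size());
        });
        gmStream.pipe(writeStream);
    }

If the two numbers differ, that would line up with the Content-Length mismatch the browser reports.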

Just one additional point: this only happens on my AWS server and not when running Meteor locally with the same S3 credentials.

ghost commented 8 years ago

Hi @adamgins, did you manage to solve these problems? I'm having very similar issues and am still looking... :-( All advice greatly appreciated! Thanks, Mark

adamgins commented 8 years ago

@mnfilius sorry, no. I am not using CollectionFS anymore.

ghost commented 8 years ago

@adamgins

Hi Adam, thanks for that. I was coming to the same conclusion (i.e. I'll need an alternative). May I ask if, and which, library you're using instead?

Thanks, Mark

mitar commented 8 years ago

One crazy question: does your backing service for files support HTTP range requests? I have another situation where the problem seems to be that when Chrome makes an HTTP range request to resume a download and it is answered with 200 instead of 206, it drops the connection and retries with a full request. I am not sure if this is connected, but I am observing this issue with Chrome, nginx, and a lack of HTTP range support on my server side behind nginx.
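
For anyone wanting to check this, here is a minimal sketch in plain Node (not CollectionFS code; the file path is made up, and multi-part or end-bounded ranges are ignored) of answering a Range request with 206 and a Content-Range header instead of a plain 200:

    var http = require('http');
    var fs = require('fs');

    var FILE = '/tmp/large_image.jpg'; // hypothetical file served for the test

    http.createServer(function (req, res) {
        var size = fs.statSync(FILE).size;
        var range = req.headers.range; // e.g. "bytes=1000-"
        if (range) {
            var start = Number(range.replace(/bytes=/, '').split('-')[0]);
            res.writeHead(206, {
                'Content-Range': 'bytes ' + start + '-' + (size - 1) + '/' + size,
                'Content-Length': size - start,
                'Accept-Ranges': 'bytes'
            });
            fs.createReadStream(FILE, { start: start }).pipe(res);
        } else {
            res.writeHead(200, { 'Content-Length': size, 'Accept-Ranges': 'bytes' });
            fs.createReadStream(FILE).pipe(res);
        }
    }).listen(3000);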

mitar commented 8 years ago

In my case I solved the issue by increasing the send_timeout in Nginx.
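
For reference, that kind of change would sit in the nginx server or location block; the values below are only illustrative, not recommendations:

    send_timeout 300s;        # how long nginx waits between two writes to the client
    proxy_read_timeout 300s;  # how long nginx waits between two reads from the upstream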

Sojourneer commented 8 years ago

The failure to load files is not limited to images; it also happens for text and PDF files with cfs:dropbox. It seems all small files are affected. The problem is observed on at least Chrome, Firefox, and Safari. Something seems wrong with the chunking...

scsirdx commented 8 years ago

I'm getting this error even on localhost:3000, so no nginx is involved. Using GridFS for storage.

Innarticles commented 8 years ago

@scsirdx I'm facing this issue too. I've decided to start storing my images in Mongo. Are you using S3 for production? If you are, how did you solve the problem?

scsirdx commented 8 years ago

@Innarticles No, I'm not using S3 at all. I got this error while using GridFS. Still having these errors; as a temporary workaround I'm now serving some images from a separate folder via nginx.

ghost commented 8 years ago

Just my 2 cents: I too have stopped using CollectionFS. I now handle the upload myself, catch the files manually and store them on disk. In the database I store all the meta info.
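
A rough sketch of that pattern (not the commenter's actual code; the route, directory, header and collection names are made up, and it assumes a Meteor server with the webapp package, where Npm.require is available):

    // Server side: accept raw file bytes on a custom route, write them to disk,
    // and keep only the metadata in Mongo.
    var fs = Npm.require('fs');       // or require('fs') on newer Meteor
    var path = Npm.require('path');

    var FileMeta = new Mongo.Collection('fileMeta');
    var UPLOAD_DIR = '/data/uploads'; // hypothetical storage directory

    WebApp.connectHandlers.use('/upload', function (req, res) {
        var chunks = [];
        req.on('data', function (chunk) { chunks.push(chunk); });
        req.on('end', Meteor.bindEnvironment(function () {
            var buffer = Buffer.concat(chunks);
            // Taken from a header without sanitization; a real implementation would validate it
            var name = req.headers['x-file-name'] || 'upload.bin';
            var dest = path.join(UPLOAD_DIR, name);
            fs.writeFileSync(dest, buffer);
            FileMeta.insert({ name: name, size: buffer.length, path: dest, uploadedAt: new Date() });
            res.writeHead(200);
            res.end();
        }));
    });

Serving the files back could then be a plain nginx location pointing at the upload directory, which sidesteps the CollectionFS HTTP layer entirely.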

nickgermaine commented 8 years ago

I was getting this issue, and the fix was so simple; I hope it might help others with the same problem.

So, everything was working last time I was in my project (in development on my Windows installation). I booted over to Linux to pull the source of CyanogenMod (unrelated), and when I booted into Windows this morning, I started getting this error in my app.

Looking at the cmd console, when I tried to upload an image, I also got this:

at [object Object].<anonymous> (packages/cfs_collection/packages/cfs_collection.js:96:1)
W20160418-12:08:21.687(-3)? (STDERR) Error: Error storing file to the images store: The difference between the request time and the current time is too large.

For some reason, whenever I boot into Windows after booting into Linux, it screws up my time in Windows. I'm not sure why, but I went into the date/time settings, unchecked "set time automatically", then rechecked it so the correct date/time displays in Windows, and the problem is gone.

It's unlikely this is everyone else's problem, but I figured it might help someone.

ramzauchenna commented 7 years ago

I had the exact same issue, and it turned out that I had two apps talking to the same database, both using CollectionFS with the same AWS buckets and media collections. The problem was that they both had the exact same CollectionFS code but different image dimensions. Fixing this solved the issue.

mrroot5 commented 7 years ago

In my situation, the problem was nginx's disk space. I had 10 GB of logs, and when I reduced that amount, voilà, it worked!

Step by step: I have a docker container with nginx.

  1. Enter your container: docker exec -it container_id bash.

  2. Go to your logs, for example: /var/nginx/log/.

  3. Show file sizes: ls -lh for individual files or du -h for folder sizes.

  4. Empty the file(s) with echo "" > file_name.

  5. It works!

No more ERR_CONTENT_LENGTH_MISMATCH problem. :-)

jean343 commented 6 years ago

I have another explanation for the same problem. I was experimenting with returning a much smaller buffer from createReadStream; for fun, I returned new Buffer(4).

In Chrome, it would show the size as 4 bytes (screenshot attached).

However, the headers would report 30273 bytes (screenshot attached).

I figured that number must come from the DB, and it did (screenshot attached).

For some reason, the file size emitted in createWriteStream was wrong. In my case I am using a custom FS adapter, but I can believe this could be a similar bug with S3.
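
One hedged way to check whether the same thing happens with S3 (names and values are illustrative; uses the aws-sdk v2 headObject call) is to compare the size S3 reports for the stored copy with the size in the file document:

    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    // Compare what S3 actually holds with what the file record says it should be.
    function compareSizes(bucket, key, recordedSize) {
        s3.headObject({ Bucket: bucket, Key: key }, function (err, data) {
            if (err) return console.error('headObject failed:', err);
            console.log('S3 ContentLength:', data.ContentLength, '| recorded size:', recordedSize);
        });
    }

    // Example (hypothetical values):
    // compareSizes('<bucketname>', 'thumbs/thumb_IMG_1676.jpg', 30273);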