FineUploader / server-examples

Server-side examples for the Fine Uploader library
https://fineuploader.com
MIT License

Node S3 upload example fails on policy check #28

Closed strada closed 10 years ago

strada commented 10 years ago

https://github.com/Widen/fine-uploader-server/blob/master/nodejs/s3/s3handler.js#L145

This condition, where it checks parsedMaxSize, is always false in my case.

When I remove it and only check the bucket name, the upload starts to show progress, but the request then fails with a 403 response: "The request signature we calculated does not match the signature you provided. Check your key and signing method." This happens even though I followed the instructions here:

http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/#configure-s3

rnicholus commented 10 years ago

Please include your modified version of the example here so we can look into this further.

strada commented 10 years ago

Client-side JS using Fine Uploader 4.4.0:

$(document).ready(function () {
    $("#fine-uploader").fineUploaderS3({
        debug: true,
        request: {
            endpoint: '{BUCKETNAME}.s3.amazonaws.com',
            accessKey: '{{AWS Access Key Id}}'
        },
        signature: {
            endpoint: '/s3handler'
        },
        uploadSuccess: {
            endpoint: '/s3handler?success=true'
        },
        iframeSupport: {
            localBlankPagePath: '/success.html'
        },
        retry: {
            enableAuto: true // defaults to false
        },
        deleteFile: {
            enabled: true,
            endpoint: '/s3handler'
        }
    });
});

The server side is pretty much the same, except for the isPolicyValid function:

var express = require('express'),
    crypto = require('crypto'),
    aws = require('aws-sdk'),
    ejs = require('ejs'),
    app = express(),
    path = require('path'),
    clientSecretKey = '{{AWS Access Key Id}}',
    // These two keys are only needed if you plan on using the AWS SDK
    serverPublicKey = '{{AWS Access Key Id}}',
    serverSecretKey = '{{AWS Secret Access Key}}',
    // Set these two values to match your environment
    expectedBucket = '{BUCKETNAME}',
    expectedMaxSize = 15000000,
    s3;

// Init S3, given your server-side keys.  Only needed if using the AWS SDK.
aws.config.update({
    accessKeyId: serverPublicKey,
    secretAccessKey: serverSecretKey
});
s3 = new aws.S3();

app.set('port', process.env.PORT || 4040);
app.set('views', __dirname + '/views');
app.set('view engine', 'ejs');
app.use(express.favicon());
app.use(express.logger('dev'));
app.use(express.bodyParser());
app.use(express.static(path.join(__dirname, 'public')));
app.listen(app.get('port'));

// Handles all signature requests and the success request FU S3 sends after the file is in S3
// You will need to adjust these paths/conditions based on your setup.
app.post("/s3handler", function(req, res) {
    if (req.query.success !== undefined) {
        verifyFileInS3(req, res);
    }
    else {
        signRequest(req, res);
    }
});

app.get('/form',  function(req, res){
    res.render('index', { title: 'Express' });
});

// Handles the standard DELETE (file) request sent by Fine Uploader S3.
// Omit if you don't want to support this feature.
app.delete("/s3handler/*", function(req, res) {
    deleteFile(req.query.bucket, req.query.key, function(err) {
        if (err) {
            console.log("Problem deleting file: " + err);
            res.status(500);
        }
        res.end();
    });
});

// Signs any requests.  Delegate to a more specific signer based on type of request.
function signRequest(req, res) {
    if (req.body.headers) {
        signRestRequest(req, res);
    }
    else {
        signPolicy(req, res);
    }
}

// Signs multipart (chunked) requests.  Omit if you don't want to support chunking.
function signRestRequest(req, res) {
    var stringToSign = req.body.headers,
        signature = crypto.createHmac("sha1", clientSecretKey)
        .update(stringToSign)
        .digest("base64");

    var jsonResponse = {
        signature: signature
    };

    res.setHeader("Content-Type", "application/json");

    if (isValidRestRequest(stringToSign)) {
        res.end(JSON.stringify(jsonResponse));
    }
    else {
        res.status(400);
        res.end(JSON.stringify({invalid: true}));
    }
}

// Signs "simple" (non-chunked) upload requests.
function signPolicy(req, res) {
    var base64Policy = new Buffer(JSON.stringify(req.body)).toString("base64"),
        signature = crypto.createHmac("sha1", clientSecretKey)
        .update(base64Policy)
        .digest("base64");

    var jsonResponse = {
        policy: base64Policy,
        signature: signature
    };

    console.log(jsonResponse);

    res.setHeader("Content-Type", "application/json");
    if (isPolicyValid(req.body)) {
        res.end(JSON.stringify(jsonResponse));
    }
    else {
        res.status(400);
        res.end(JSON.stringify({invalid: true}));
    }
}

// Ensures the REST request is targeting the correct bucket.
// Omit if you don't want to support chunking.
function isValidRestRequest(headerStr) {
    return new RegExp("\/" + expectedBucket + "\/.+$").exec(headerStr) != null;
}

// Ensures the policy document associated with a "simple" (non-chunked) request is
// targeting the correct bucket and the max-size is as expected.
// Omit the parsedMaxSize-related code if you don't have a max file size.
function isPolicyValid(policy) {
    var bucket, parsedMaxSize;
    policy.conditions.forEach(function(condition) {
        if (condition.bucket) {
            bucket = condition.bucket;
        }
        else if (condition instanceof Array && condition[0] === "content-length-range") {
            // condition is ["content-length-range", min, max], so the max is at index 2
            parsedMaxSize = condition[2];
        }
    });
    return bucket === expectedBucket;
}

// After the file is in S3, make sure it isn't too big.
// Omit if you don't have a max file size, or add more logic as required.
function verifyFileInS3(req, res) {
    function headReceived(err, data) {
        if (err) {
            res.status(500);
            console.log(err);
            res.end(JSON.stringify({error: "Problem querying S3!"}));
        }
        else if (data.ContentLength > expectedMaxSize) {
            res.status(400);
            res.write(JSON.stringify({error: "Too big!"}));
            deleteFile(req.body.bucket, req.body.key, function(err) {
                if (err) {
                    console.log("Couldn't delete invalid file!");
                }

                res.end();
            });
        }
        else {
            res.end();
        }
    }

    callS3("head", {
        bucket: req.body.bucket,
        key: req.body.key
    }, headReceived);
}

function deleteFile(bucket, key, callback) {
    callS3("delete", {
        bucket: bucket,
        key: key
    }, callback);
}

function callS3(type, spec, callback) {
    s3[type + "Object"]({
        Bucket: spec.bucket,
        Key: spec.key
    }, callback);
}

rnicholus commented 10 years ago

I'll be able to look into this more on Monday.

feltnerm commented 10 years ago

It looks like the content-length-range condition in the policy document is only sent when the size validators are enabled client-side, but the server is always expecting it.

If no size validators are enabled client-side, then parsedMaxSize in isPolicyValid() is undefined, because the uploader never sends a content-length-range condition. That makes the boolean expression at the return statement evaluate to false.

It's a bit confusing because the comment says

"Omit the parsedMaxSize-related code if you don't have a max file size."

but the return statement in the function still makes the check.

@strada's fix is simply to not check parsedMaxSize at all. I'm not convinced this is the right way to have our examples work.

My fix is to let the user decide whether to set expectedMaxSize on the server, and to validate against it only if it is set; otherwise, just validate the policy's bucket name. I also added expectedMinSize, because whenever a maximum size is sent a minimum is sent as well, so it makes sense to validate both.
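A minimal sketch of that approach (this is not the actual patch; the bucket name and size values below are hypothetical, and the variable names just mirror the example handler):

```javascript
// Sketch of the fix described above -- NOT the actual merged patch.
// expectedBucket/expectedMinSize/expectedMaxSize mirror the example
// handler's variables; the concrete values here are hypothetical.
var expectedBucket = "my-bucket",
    expectedMinSize = 0,
    expectedMaxSize = 15000000;

function isPolicyValid(policy) {
    var bucket, parsedMinSize, parsedMaxSize;

    policy.conditions.forEach(function (condition) {
        if (condition.bucket) {
            bucket = condition.bucket;
        }
        else if (condition instanceof Array && condition[0] === "content-length-range") {
            // condition looks like ["content-length-range", min, max]
            parsedMinSize = parseInt(condition[1], 10);
            parsedMaxSize = parseInt(condition[2], 10);
        }
    });

    var isValid = bucket === expectedBucket;

    // Only enforce the size range when the server has opted in to one.
    // The typeof checks keep this safe even if the vars are commented out.
    if (typeof expectedMinSize !== "undefined" && typeof expectedMaxSize !== "undefined") {
        isValid = isValid &&
            parsedMinSize === expectedMinSize &&
            parsedMaxSize === expectedMaxSize;
    }

    return isValid;
}
```

With both size vars set, a policy that omits content-length-range is rejected; with the two vars commented out, only the bucket name is checked.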

rnicholus commented 10 years ago

@feltnerm Your fix looks ok to me. Note that this line:

typeof expectedMinSize !== "undefined" && typeof expectedMaxSize !== 'undefined'

Can be shortened to:

expectedMinSize != null && expectedMaxSize != null

If testing doesn't show any issues with your changes, you can merge this into master in fine-uploader-server.

feltnerm commented 10 years ago

@rnicholus, the shortened line would throw a ReferenceError if expectedMinSize or expectedMaxSize were commented out. By checking whether their types are "undefined", I can find out whether they have been declared at all.

Thoughts on whether it would be a better option to have expectedMinSize and expectedMaxSize set to null instead of commented out?
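To illustrate the point (a standalone sketch, not code from the example handler): reading an undeclared variable throws a ReferenceError, while `typeof` on an undeclared identifier safely yields "undefined".

```javascript
// expectedMaxSize is deliberately never declared here, simulating the
// commented-out variable in the example handler.

function hasMaxSizeConfigured() {
    // typeof is safe on undeclared identifiers: it yields "undefined"
    return typeof expectedMaxSize !== "undefined";
}

function hasMaxSizeConfiguredUnsafe() {
    // a plain read of an undeclared identifier throws
    return expectedMaxSize != null;
}

console.log(hasMaxSizeConfigured()); // false

try {
    hasMaxSizeConfiguredUnsafe();
} catch (e) {
    console.log(e.name); // "ReferenceError"
}
```

If the vars are instead initialized to null rather than commented out, the shorter `!= null` check becomes safe.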

rnicholus commented 10 years ago

I didn't notice the commented-out vars. Yes, I would expect them not to be commented out. Initializing them to null or -1 seems reasonable.
