You can test the uploader here!

> We use Mule Uploader to archive audio in our Rails/AngularJS application www.popuparchive.org. I tried many projects that integrate with S3 in various ways before settling on this one. By using the multipart upload API, multiple threads, and resumable uploads, it met our essential needs for handling large media files, without requiring a specific UI or DOM elements. It also has no dependencies on jQuery or other libraries, making it easy to add to our AngularJS front-end.
>
> -- Andrew Kuklewicz, Tech Director prx.org, Lead Developer www.popuparchive.org
In order to use this library, you need the following:
You need to create an Amazon S3 bucket for uploads
You need to edit your Amazon S3 CORS configuration to allow communication from your domain. Here is what I use:
```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>[your domain]</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```
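If you prefer to set the CORS rules from code rather than the S3 console, the same configuration can be applied with boto3's `put_bucket_cors`. This is a sketch: the origin and bucket name are placeholders, and it assumes boto3 is installed and configured with credentials.

```python
# The same CORS rules as above, expressed as the dict boto3 expects.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://example.com"],  # placeholder: your domain
            "AllowedMethods": ["PUT", "POST", "GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# Uncomment to apply (requires boto3 and AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_cors(Bucket="your-bucket", CORSConfiguration=cors_configuration)
```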
You need to create a separate user in IAM
You need to set the bucket's policy to something like this (replace the [placeholders] first):

```json
{
  "Id": "Policy1417805650894",
  "Statement": [
    {
      "Sid": "Stmt1417805616520",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::[your_bucket]/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::[your_user_id]:user/[your_user_name]"
        ]
      }
    },
    {
      "Sid": "Stmt1417805647297",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteBucketPolicy",
        "s3:DeleteBucketWebsite",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::[your_bucket]/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::[your_user_id]:user/[your_user_name]"
        ]
      }
    }
  ]
}
```
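A small helper (hypothetical, not part of the project) can fill in the [placeholders] and emit the policy JSON, which is handy if you provision buckets from a script:

```python
import json

def render_policy(bucket, user_id, user_name):
    """Fill in the bucket-policy placeholders and return the policy as JSON."""
    principal = {"AWS": ["arn:aws:iam::%s:user/%s" % (user_id, user_name)]}
    resource = "arn:aws:s3:::%s/*" % bucket
    return json.dumps({
        "Id": "Policy1417805650894",
        "Statement": [
            # Allow the upload user everything on objects in the bucket...
            {"Sid": "Stmt1417805616520", "Action": "s3:*",
             "Effect": "Allow", "Resource": resource, "Principal": principal},
            # ...except any kind of deletion, which is explicitly denied.
            {"Sid": "Stmt1417805647297",
             "Action": ["s3:DeleteBucket", "s3:DeleteBucketPolicy",
                        "s3:DeleteBucketWebsite", "s3:DeleteObject",
                        "s3:DeleteObjectVersion"],
             "Effect": "Deny", "Resource": resource, "Principal": principal},
        ],
    })
```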
You need a backend to sign your REST requests (a Flask + SQLAlchemy one is available in example_backend.py). Amazon's code samples for deriving the signing key are at http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html
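The key-derivation chain from those samples can be sketched in Python. This follows the AWS Signature Version 4 documentation (date, then region, then service, then the literal "aws4_request"); the function names here are my own:

```python
import hashlib
import hmac

def _sign(key, msg):
    """One HMAC-SHA256 step of the SigV4 key derivation."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def get_signature_key(secret_key, date_stamp, region, service="s3"):
    """Derive the SigV4 signing key: secret -> date -> region -> service -> aws4_request."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")
```

The resulting 32-byte key is then used to HMAC the string-to-sign, producing the signature that your backend hands back to the browser.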
For detailed instructions on how each of the AJAX actions should respond, read the source code. There are two actions:

- `signing_key`: returns a signature for authentication (see http://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html). It also returns key/upload_id/chunks if the file upload can be resumed, and should return a backup_key to be used in case the first one is not usable.
- `chunk_loaded` (optional): notifies the server that a chunk has been uploaded. This is needed for browser-refresh resume: the backend stores the uploaded chunks in a database and gives the user the file key + upload id + uploaded chunks for the file.

If you'd like example backends in other languages or frameworks, let me know.
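The chunk bookkeeping described above can be sketched with an in-memory store. This is purely illustrative: the real example_backend.py uses SQLAlchemy, and the function names and upload identification by (filename, size) are my assumptions.

```python
# Hypothetical in-memory version of the chunk bookkeeping; example_backend.py
# persists this in a database instead.
uploads = {}  # (filename, size) -> {"key": ..., "upload_id": ..., "chunks": set()}

def chunk_loaded(filename, size, chunk_number):
    """Record that one chunk of an in-progress upload has finished."""
    uploads[(filename, size)]["chunks"].add(chunk_number)

def resume_info(filename, size):
    """Return key/upload_id/chunks for an interrupted upload, or None."""
    entry = uploads.get((filename, size))
    if entry is None:
        return None
    return {"key": entry["key"], "upload_id": entry["upload_id"],
            "chunks": sorted(entry["chunks"])}
```

On a browser refresh, the `signing_key` response would include this resume info so the uploader can skip the chunks already on S3.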
Navigate to the project's root, e.g. `cd mule-uploader`
Install the requirements: `pip install -r requirements.txt`
Set up environment variables:

```shell
export AWS_ACCESS_KEY=[your access key]
export AWS_SECRET=[your access key's secret]
export AWS_REGION=[your bucket's region]
export BUCKET=[your AWS bucket]
export MIME_TYPE=[your desired mime-type]   # defaults to application/octet-stream
export DATABASE_URL=[your db url]           # defaults to sqlite:///database.db
export PORT=[your desired port]             # defaults to 5000
export CHUNK_SIZE=[chunk size in bytes]     # defaults to 6MB, i.e. 6291456
```

Note that the db url looks like `postgres://user:password@location:port/db_name` or `sqlite:///file`. You can see and modify these options in settings.py.
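Using the defaults listed above, the way settings.py reads these variables can be sketched as follows (hypothetical: the actual file may name or structure things differently):

```python
import os

# Read each setting from the environment, falling back to the documented default.
MIME_TYPE = os.environ.get("MIME_TYPE", "application/octet-stream")
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///database.db")
PORT = int(os.environ.get("PORT", 5000))
CHUNK_SIZE = int(os.environ.get("CHUNK_SIZE", 6 * 1024 * 1024))  # 6MB = 6291456 bytes
```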
Run `python example_backend.py`
Navigate to `http://localhost:[PORT]/`, where `[PORT]` is the value of the PORT environment variable set above (5000 by default).
Due to the new technology this library relies on (the File, FileList, and Blob APIs), it's only compatible with browsers that support them.
License: MIT