ga-wdi-boston / team-project


AWS - FileBucket - Download link #359

Closed artpylon closed 7 years ago

artpylon commented 7 years ago

My reading of the requirements is that we need to provide the user with both a download link and an open link for the files that have been uploaded. Is this correct?

Currently, I am getting back a link that downloads. I am unsure how to also get a link that opens.

I believe I know how to get either ONE of them, but not both.

My understanding is that it is based on the content type. Currently, the contentType I am passing to AWS is octet-stream. Octet-stream seems to be the content type mime falls back to when it cannot discern the file's type. This is as expected, because I am deriving the content type from the obscured file name, which has no extension. If I were to pass in the file's original name, such as myimage.png, mime would return the content type image/png, and the link I get back would open instead of download.

If both open and download are required, I need help getting back both URLs.

My current aws-upload file is below. It seems like I would need two s3Upload functions: one to upload the file and get the download link, and another to get the open link (by passing in the other content type). When I get the second response from S3, I would add the open link to the existing file record created by the first s3Upload call instead of creating a new one. However, it seems like there should be a better solution.

'use strict'

// require dotenv and run load method
// this loads my env variables to a process.env object
require('dotenv').load()

// require file system module
const fs = require('fs')
const AWS = require('aws-sdk')
const mime = require('mime')
const path = require('path')
const crypto = require('crypto')

// create an instance of AWS.S3 object
const s3 = new AWS.S3()

const randomBytesPromise = function () {
  return new Promise((resolve, reject) => {
    // Generates cryptographically strong pseudo-random data.
    // The size argument is a number indicating the number of bytes to generate.
    // If a callback function is provided, the bytes are generated asynchronously
    // and the callback function is invoked with two arguments: err and buf
    // If an error occurs, err will be an Error object;
    // otherwise it is null.
    // The buf argument is a Buffer containing the generated bytes.
    // https://nodejs.org/api/crypto.html#crypto_crypto_randombytes_size_callback
    crypto.randomBytes(16, function (error, buffer) {
      // If an error occurs, err will be an Error object;
      if (error) {
        reject(error)
      } else {
        // More on buffer
        // https://nodejs.org/api/buffer.html#buffer_buffer
        console.log('inside crypto success', buffer.toString('hex'))
        resolve(buffer.toString('hex'))
      }
    })
  })
}

// s3Upload accepts file options as a param and returns a promise
// that resolves or rejects based on the s3.upload response
const s3Upload = function (options) {
  console.log('s3Upload options are ', options)
  // use node fs module to create a read stream
  // for our image file
  // https://www.sitepoint.com/basics-node-js-streams/
  // const stream = fs.createReadStream(options.originalname)
  const stream = fs.createReadStream(options.path)
  // use node mime module to get image mime type
  // https://www.npmjs.com/package/mime
  // const contentType = mime.lookup(options.originalname)
  const contentType = mime.lookup(options.path)
  // use node path module to get image extension (.jpg, .gif)
  // https://nodejs.org/docs/latest/api/path.html#path_path
  const ext = path.extname(options.path) // .png
  // const ext = path.extname(options.name)
  // const ext = mime.extension(contentType)

  // get current date, turn into ISO string, and split to access correctly formatted date
  const folder = new Date().toISOString().split('T')[0]

  // params required for `.upload` to work
  // more at documentation
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property
  const params = {
    ACL: 'public-read',
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Body: stream,
    Key: `${folder}/${options.name}${ext}`,
    ContentType: contentType
  }

  // return a promise object that is resolved or rejected,
  // based on the response from s3.upload
  return new Promise((resolve, reject) => {
    // pass correct params to `.upload`
    // and an anonymous callback for handling the response
    s3.upload(params, function (error, data) {
      if (error) {
        reject(error)
      } else {
        resolve(data)
      }
    })
  })
}

const awsUpload = function (options) {
  return randomBytesPromise()
    .then((buffer) => {
      // set file name to buffer that is returned from randomBytesPromise
      options.name = buffer
      // return file so it is passed as argument to s3Upload
      return options
    })
    .then(s3Upload)
    .catch(console.error)
}

module.exports = awsUpload
jordanallain commented 7 years ago

there should be a url where the file is stored on aws after it has been uploaded no? can't you just create a link to that url?

artpylon commented 7 years ago

I am getting back one URL from S3. Because the content type I send when storing that file is octet-stream, it downloads when clicked.

If the content type sent to S3 matched the file type, it would open. That's my understanding, at least.

Within my S3 account I can change the content type, and doing so changes the link behavior within the UI. When I match the content type to the file type, the link opens instead of downloads. That's what I'm basing this on.
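One possible alternative worth considering (a sketch with my own helper names, untested against this project): upload the object once with its real content type, so the plain S3 URL opens in the browser; then, for the download link, use a pre-signed GET URL where S3 overrides the response headers. Setting ResponseContentDisposition to 'attachment' makes browsers download the same object regardless of its content type, so no second upload is needed.

```javascript
// Builds the upload params: real content type means the plain URL opens
const buildUploadParams = function (options, folder) {
  return {
    ACL: 'public-read',
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Key: `${folder}/${options.name}`,
    ContentType: options.contentType // e.g. 'image/png' -> link opens
  }
}

// Builds params for a pre-signed download link to the same object
const buildDownloadLinkParams = function (uploadParams) {
  return {
    Bucket: uploadParams.Bucket,
    Key: uploadParams.Key,
    // Asks S3 to send `Content-Disposition: attachment` on this GET only,
    // so the browser downloads instead of rendering
    ResponseContentDisposition: 'attachment'
  }
}

// With the aws-sdk this would be wired up roughly as:
//   s3.upload(buildUploadParams(options, folder), cb)        // open link: data.Location
//   s3.getSignedUrl('getObject', buildDownloadLinkParams(p)) // download link
```

The helpers above only construct the parameter objects; the actual s3.upload and s3.getSignedUrl calls are shown in comments since they require credentials.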

jordanallain commented 7 years ago

what is the url you get back?

artpylon commented 7 years ago

https://mrg-wdi.s3.amazonaws.com/2017-06-29/a6966d9f685d8244ac0591c4e4cf9f23

artpylon commented 7 years ago

Is it a project requirement that both open and download be offered, or just one of the two?

jordanallain commented 7 years ago

i think one or the other is fine