froala / wysiwyg-editor

The next generation JavaScript WYSIWYG HTML Editor.
https://www.froala.com/wysiwyg-editor

imageUploadToS3 not working with React (NodeJS backend) #4530

Open dbruner23 opened 1 year ago

dbruner23 commented 1 year ago
Expected behavior.

As I understand it, this config option should upload photos inserted into the editor to an AWS S3 bucket. I have set everything up to compute the hash in the Node.js backend and deliver it successfully on component load with a useEffect. I'm quite sure the S3 bucket is configured correctly as well, since I am able to upload images to it fine outside the editor.

Actual behavior.

The upload to S3 just does not seem to be triggered: the editor keeps storing inserted images as blobs. I have tried setting imageUpload: true and imageUploadURL: false. The former changes nothing, as it is the default, and the latter just makes the editor call the host URL with the endpoint localhost:3000/false. Any help would be much appreciated.

Steps to reproduce the problem.

Basically just configure the imageUploadToS3 option on the editor in React, with the S3 hash computed in Node.js and delivered via useEffect.

Editor version.

Version 4.0.14

OS.

macOS Monterey (12.4)

Browser.

Chrome

Recording.

Here's my list of imports

import FroalaEditor from "react-froala-wysiwyg";
import FroalaEditorView from "react-froala-wysiwyg/FroalaEditorView";
import "froala-editor/js/froala_editor.pkgd.min.js";
import "froala-editor/js/plugins.pkgd.min.js";
import "froala-editor/js/third_party/image_tui.min.js";
import "froala-editor/js/plugins/image.min.js";
import "froala-editor/css/froala_style.min.css";
import "froala-editor/css/froala_editor.pkgd.min.css";
import "froala-editor/css/third_party/image_tui.min.css";

useState and useEffect to get s3Hash

const [s3Hash, setS3Hash] = useState("");

useEffect(() => {
  axios.get('http://localhost:4000/get_signature')
    .then((response) => {
      setS3Hash(response.data);
    });
}, []);

Hash format

{
  bucket: '',
  region: '',
  keyStart: 'uploads',
  params: {
    acl: 'public-read',
    policy: '',
    'x-amz-algorithm': 'AWS4-HMAC-SHA256',
    'x-amz-credential': '',
    'x-amz-date': '20221007T000000Z',
    'x-amz-signature': '****'
  }
}
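(Aside: the policy value in this hash is just a base64-encoded JSON policy document. A minimal sketch, with made-up values, of how it can be decoded to inspect the conditions S3 will enforce on the upload:)

```javascript
// The `policy` value in the hash is base64-encoded JSON. Decoding it shows
// the conditions S3 will enforce on the POST upload (values here are made up).
const policy = {
  expiration: "2022-10-07T00:05:00.000Z",
  conditions: [{ bucket: "my-bucket" }, { acl: "public-read" }],
};
const policyBase64 = Buffer.from(JSON.stringify(policy)).toString("base64");
const decoded = JSON.parse(Buffer.from(policyBase64, "base64").toString());
console.log(decoded.conditions[0].bucket); // "my-bucket"
```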

Code to implement editor and view

<FroalaEditor
  tag='textarea'
  model={model}
  onModelChange={handleModelChange}
  config={{ imageUploadToS3: s3Hash }}
/>
<FroalaEditorView model={model} />

rocketedaway commented 1 year ago

Any luck? I am running into a similar issue.

cunneen commented 1 year ago

I had this issue too, my workaround/solution is below. It's been a pain to solve/workaround, given the lack of examples and docs. It's hard to believe we bought a license for this; we should have checked the 945 open issues (and this one for which there's been absolutely no response for over 9 months) before committing to the purchase.

NOTE: DO NOT INITIALISE/RENDER THE FROALA EDITOR UNTIL YOUR S3 CONFIG IS COMPLETELY READY

^^ This was the hardest part to track down. The problem was that our (parent) React component was rendering the Froala component on its own first render, before the imageUploadToS3 prop was configured. Our useEffect hook then ran, configured the imageUploadToS3 prop, and passed it into Froala on a re-render. But by then it was too late: Froala was already initialised and would never upload to S3.

So in @dbruner23 's (OP's) case (and ours), the solution is to hold off the initial render of <FroalaEditor> until we have all of the input props ready (specifically the imageUploadToS3 prop). i.e.,

OLD

const [s3Hash, setS3Hash] = useState();
useEffect(() => {
  // .... Do AWS Stuff ...
  // .... Then ... :
  setS3Hash(/* .... { bucket: ...., region: ..., ... etc } */); // <== Of course, this causes a re-render
}, []);
// ...
return (
  <FroalaEditor
    tag="textarea"
    model={model}
    onModelChange={handleModelChange}
    config={{
      imageUploadToS3: s3Hash, // THIS IS THE PROBLEM HERE ... on the first render, s3Hash is undefined, and now it's too late
    }}
  />
);

NEW

  const [s3Hash, setS3Hash] = useState();
  useEffect(() => {
    // .... Do AWS Stuff ...
    // .... Then ... :
    setS3Hash(/* .... { bucket: ...., region: ..., ... etc } */); // <== Of course, this causes a re-render
  }, []);
  // ...
  return s3Hash ? ( // <== This is the fix: only render the FroalaEditor once s3Hash is defined
    <FroalaEditor
      tag="textarea"
      model={model}
      onModelChange={handleModelChange}
      config={{
        imageUploadToS3: s3Hash, 
      }}
    />
  ) : (
    <>Loading...</>
  );
cunneen commented 1 year ago

For anyone who comes across this challenge in future, here's a distilled version of our working solution that uses a Node.js AWS Lambda function to obtain the S3 hash:

Node.js Lambda Function -- app.js

/*
  This API is used to generate the S3 signature for uploading images to S3, specifically for the Froala WYSIWYG editor.
*/
const express = require("express");
const bodyParser = require("body-parser");
const awsServerlessExpressMiddleware = require("aws-serverless-express/middleware");
const getHash = require("./getHash");

const { fromNodeProviderChain } = require("@aws-sdk/credential-providers");

// load and cache AWS credentials
let credentials;
const getCredentials = async () => {
  if (!credentials) {
    credentials = await fromNodeProviderChain()();
  }
  return credentials;
};
(async () => { await getCredentials(); })(); // load credentials on startup

// declare a new express app
const app = express();
app.use(bodyParser.json());
app.use(awsServerlessExpressMiddleware.eventContext());

// Enable CORS for all methods
app.use(function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "*");
  next();
});

/**********************
 * GET a signature which allows the caller to S3 upload to our bucket and prefix *
 **********************/

app.get("/signature", async function (req, res) {
  const credentials = await getCredentials();

  const configs = {
    // The name of your bucket.
    bucket: process.env.STORAGE_BUCKETNAME,

    // S3 region. If you are using the default us-east-1, this can be ignored.
    region: process.env.REGION,

    // The folder where to upload the images.
    keyStart: "public/",

    // File access.
    acl: "public-read",

    // AWS keys.
    accessKey: credentials.accessKeyId,
    secretKey: credentials.secretAccessKey,
  };

  if (credentials.sessionToken) {
    configs.sessionToken = credentials.sessionToken;
  }
  const s3Hash = getHash(configs);
  res.send(s3Hash);
  return true; // truthy value to indicate success
});

app.listen(3000, function () {
  console.log("App started");
});

// Export the app object. When running the application locally this does nothing. However,
// to port it to AWS Lambda we will create a wrapper that loads the app from this file.
module.exports = app;

Node.js Lambda Function -- getHash.js

const crypto = require("crypto");

function getHash(config) {
  // Check default region.
  config.region = config.region || "us-east-1";
  config.region = config.region == "s3" ? "us-east-1" : config.region;

  const bucket = config.bucket;
  const region = config.region;
  const keyStart = config.keyStart;
  const acl = config.acl;

  // These can be found on your Account page, under Security Credentials > Access Keys.
  const accessKeyId = config.accessKey;
  const secret = config.secretKey;
  const sessionToken = config.sessionToken;

  const date = new Date().toISOString();
  const dateString =
    date.substring(0, 4) + date.substring(5, 7) + date.substring(8, 10); // Ymd format.

  const credential = [accessKeyId, dateString, region, "s3/aws4_request"].join(
    "/"
  );
  const xAmzDate = dateString + "T000000Z";

  const policyConditions = [
    { bucket: bucket },
    { acl: acl },
    { success_action_status: "201" },
    { "x-requested-with": "xhr" },
    { "x-amz-algorithm": "AWS4-HMAC-SHA256" },
    { "x-amz-credential": credential },
    { "x-amz-date": xAmzDate },
  ];

  if (sessionToken) {
    policyConditions.push({ "x-amz-security-token": sessionToken });
  }

  policyConditions.push(["starts-with", "$key", keyStart]);
  policyConditions.push(
    ["starts-with", "$Content-Type", ""] // accept all files
  );

  const policy = {
    // 5 minutes into the future
    expiration: new Date(new Date().getTime() + 5 * 60 * 1000).toISOString(),
    conditions: policyConditions,
  };
  const policyBase64 = Buffer.from(JSON.stringify(policy)).toString("base64");

  function hmac(key, string) {
    const hmac = crypto.createHmac("sha256", key);
    hmac.end(string);
    return hmac.read();
  }

  const dateKey = hmac("AWS4" + secret, dateString);
  const dateRegionKey = hmac(dateKey, region);
  const dateRegionServiceKey = hmac(dateRegionKey, "s3");
  const signingKey = hmac(dateRegionServiceKey, "aws4_request");
  const signature = hmac(signingKey, policyBase64).toString("hex");

  return {
    bucket: bucket,
    region: region != "us-east-1" ? "s3-" + region : "s3",
    keyStart: keyStart,
    params: {
      acl: acl,
      policy: policyBase64,
      "x-amz-algorithm": "AWS4-HMAC-SHA256",
      "x-amz-credential": credential,
      "x-amz-date": xAmzDate,
      "x-amz-signature": signature,
      "x-amz-security-token": sessionToken,
    },
  };
}

module.exports = getHash;

client React component that uses Froala -- MyComponent.jsx

Note: we use AWS Amplify's Auth client-side library here to obtain a JWT token, which in turn lets the user invoke the lambda (i.e. only users already authenticated via AWS Cognito can invoke it). I've included that code here since, as an S3 user, you're likely using other AWS services too. In our case, the lambda sits behind API Gateway, and API Gateway checks the client credentials (i.e. the JWT).

// ==== NB: we use something like this and it works, but I've cut out a lot of stuff specific to our use case. 
// ==== So this may be broken, but hopefully not too badly.
import { API, Auth, Storage, graphqlOperation } from "aws-amplify";
import React, { useState, useEffect } from "react";
import "froala-editor/css/froala_style.min.css";
import "froala-editor/css/froala_editor.pkgd.min.css";
import FroalaEditor from "react-froala-wysiwyg";
import "froala-editor/js/plugins.pkgd.min.js";
import $ from "jquery";

window.$ = $;
window.FroalaEditor = require("froala-editor");

const MyComponent = () => {
  const [s3Hash, setS3Hash] = useState(); // <== THIS IS THE FROALA S3 CONFIG
  const [model, setModel] = useState(""); // <== THIS IS THE FROALA EDITOR CONTENT

  // call our lambda to get a froala S3 config, and load that S3 config into our state variables for the Froala component
  //  to consume.
  useEffect(() => {
    let mounted = true;
    if (mounted) {
      (async () => {
        // client is already authenticated via Cognito; this gets a JWT token so we can call the froala signature API i.e. our lambda
        const resp = await Auth.currentSession();
        let idToken = resp.getIdToken();
        let jwt = idToken.getJwtToken();
        // This performs an HTTP GET to our lambda (it's just AWS' wrapper around axios)
        const s3config = await API.get("FroalaS3SignatureAPI", "/signature", {
          headers: {
            Authorization: jwt,
          },
        });
        if (mounted) {
          // set the s3Hash state variable to the config returned by our lambda
          setS3Hash(s3config);
        }
      })();
    }
    return () => {
      // cleanup
      mounted = false;
    };
  }, []); // end of useEffect that gets the froala signature API config

  const handleModelChange = (model) => {
    setModel(model);
  };

  // ...
  return s3Hash ? (
    <FroalaEditor
      tag="textarea"
      model={model}
      onModelChange={handleModelChange}
      config={{
        imageUploadToS3: s3Hash
      }}
    />
  ) : (
    <></>
  );
};