daisycrego / jbg-admin

Admin app for Jill Biggs Group Follow Up Boss event management

`eventsCreated` WebHook --> `events/` POST endpoint #6

Closed - daisycrego closed this issue 3 years ago

daisycrego commented 3 years ago

Goals

Set up a POST endpoint for new events and register it as the target of a webhook for eventCreated/personEvent/whatever. Whenever an event is created, check whether the person already existed in the database before, say, the last 10 minutes, to decide whether this is really a new lead.
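
A rough sketch of that check (the helper name and the 10-minute window are placeholders; it assumes the FUB person and event records both carry a created ISO timestamp, as the API payloads later in this thread do):

const TEN_MINUTES_MS = 10 * 60 * 1000;

// Hypothetical helper: treat the lead as new unless the person record already
// existed well before (more than ~10 minutes before) the event came in.
const isNewLead = (person, event) => {
  if (!person || !person.created) {
    // No existing person record at all: definitely a new lead.
    return true;
  }
  const personCreated = new Date(person.created).getTime();
  const eventCreated = new Date(event.created).getTime();
  return eventCreated - personCreated < TEN_MINUTES_MS;
};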

daisycrego commented 3 years ago

WebHooks references:

How to Create a Webhook

Webhook Best Practices - from FUB API

Setup your system to de-couple receiving webhook events from fetching the resource specified in the event. For example--the web service that receives events could record the event in a local database table. A separate backend process could be created to monitor this table for new rows and fetch the resource specified in the event. By separating these, the web service that handles receiving webhooks is doing less and is, therefore less likely to fail. The process that processes the events can contain all the complex logic to fetch the resource from Follow Up Boss, reconcile with your local system and mark the event as processed in the table. If this process fails or breaks it can be debugged independently of the web service. The table that stores events is the source of truth as to what webhook events have been received and processed.

Debugging webhooks

Since webhooks require a publicly accessible URL to function they can be hard to test from your local machine. It is recommended that you use a service like Request Bin when you are developing your webhook endpoints.

Common webhook mistakes

daisycrego commented 3 years ago

Based on the advice from the FUB API developers, the next step is to set up a background process that watches for new events arriving as webhooks and processes them, so that receiving a webhook is decoupled from processing its data. The eventsCreated webhook should send POST requests to our /events endpoint once we register the webhook, and that POST endpoint is only responsible for creating a new WebHook doc in the db. The background process then fires every time a WebHook doc is created.
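
A minimal sketch of that receiving endpoint, assuming an Express router and a WebHook mongoose model with a free-form payload field (both names are placeholders, not the actual implementation):

import express from "express";
import WebHook from "../models/webhook.model"; // assumed model/path

const router = express.Router();

router.post("/api/events/fub/callback", async (req, res) => {
  try {
    // Do as little as possible here: persist the raw payload and return 200.
    await new WebHook({ payload: req.body }).save();
    res.status(200).json({ message: "webhook received" });
  } catch (err) {
    console.log(err);
    res.status(500).json({ error: "could not record webhook" });
  }
});

export default router;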

daisycrego commented 3 years ago

Instead of having a background process "watch" for a new row in the database, we can make an additional POST to another route which does the complex work, without waiting for that route to return anything to us. We use the new route, events-process/, to handle all of the processing, including potential email follow-ups. This is how we decouple, but we also need to avoid redundant processing by keeping track of whether an email has already been sent out for an event, and whether an event already has isNewLead set.

Yes - to avoid redundant emails, etc., the isNewLead attribute of the Event model will not be required in the schema. If it is null, we know we need to continue processing this event; otherwise we stop processing.
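
A sketch of what that looks like on the schema and in the processing guard (only isNewLead comes from the discussion above; the rest of the schema here is illustrative):

import mongoose from "mongoose";

const EventSchema = new mongoose.Schema({
  id: { type: String, required: true },
  created: Date,
  // Intentionally not required: null/undefined means "not processed yet".
  isNewLead: { type: Boolean, default: null },
});

// Guard used before doing any processing/emailing, so it only happens once.
const needsProcessing = (eventDoc) =>
  eventDoc.isNewLead === null || eventDoc.isNewLead === undefined;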

daisycrego commented 3 years ago

Plan

  1. Set up the eventsCreated webhook. This should be a process that runs whenever the app loads - something like: make a request to check whether the webhook is already set up, and if it isn't, set it up.
    • There is a webhooks/ GET route to see which webhooks exist. If no webhook exists, we will create one. It might be a little wasteful to check for this every launch, but (1) it only runs when the app starts, somewhere in the Express logic, so it really isn't run often, and (2) it's a fail-safe against a catastrophic issue where the webhook somehow gets messed up. However, we NEED to make sure we're not erroneously setting up duplicate webhooks or something like that. I don't know what the best practice is for this, but at least if the system is capable of knowing what's wrong (there is no webhook set up), we have some kind of record of that, rather than creating the webhook as a one-off and having no mechanism in the system to re-establish it. A sketch of this startup check follows.
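
Something like this, as a sketch (it reuses config.basicAuth and the X-System name from later in this thread; the shape of the GET response, with a webhooks array, is an assumption about the FUB API):

import fetch from "node-fetch";
import config from "../config/config";

const WEBHOOKS_URL = "https://api.followupboss.com/v1/webhooks";
const CALLBACK_URL = "https://jbg-admin.herokuapp.com/api/events/fub/callback";

const ensureEventsCreatedWebhook = async () => {
  const headers = {
    Accept: "application/json",
    "Content-Type": "application/json; charset=utf-8",
    "X-System": "jbg-admin",
    Authorization: `Basic ${config.basicAuth}`,
  };

  // 1. See which webhooks already exist.
  const listRes = await fetch(WEBHOOKS_URL, { method: "GET", headers });
  const existing = await listRes.json();
  const alreadyRegistered = (existing.webhooks || []).some(
    (hook) => hook.event === "eventsCreated" && hook.url === CALLBACK_URL
  );
  if (alreadyRegistered) return;

  // 2. Only create the webhook when it's missing, so we never register duplicates.
  await fetch(WEBHOOKS_URL, {
    method: "POST",
    headers,
    body: JSON.stringify({ event: "eventsCreated", url: CALLBACK_URL }),
  });
};
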
daisycrego commented 3 years ago

Reminder: here is how I handled the other API requests:

// Imports assumed for this snippet (the Event model path is a guess; fetch and
// config match the rest of the app):
import fetch from "node-fetch";
import config from "../config/config";
import Event from "../models/event.model";

const addNewEvents = async (events) => {
  console.log(`addNewEvents(): Processing events ${events.map((x) => x.id)}`);
  if (!events || !events.length) {
    console.log(
      `addNewEvents(): !events or !events.length, events: ${events}, returning FALSE`
    );
    return false;
  }
  let continueReading = true;

  for (const eventObj of events) {
    if (!eventObj) {
      continue;
    }

    console.log(
      `Processing event: ${eventObj.id}, created: ${eventObj.created}, updated: ${eventObj.updated}`
    );

    const today = new Date();
    const two_days_ago = today.getTime() - 1000 * 2 * (60 * 60 * 24);
    const created = new Date(eventObj.created);
    const oldData = created.getTime() < two_days_ago;

    if (oldData) {
      console.log(`The event is > 2 days old, skipping`);
      return false;
    } else {
      console.log(`Less than 2 days old, processing...`);
    }

    const existingEvent = await Event.find({ id: eventObj.id });
    if (!existingEvent.length) {
      const newEvent = new Event(eventObj);
      try {
        await newEvent.save();
      } catch (err) {
        console.log(err);
      }
    }
  }

  return continueReading;
};

const syncEventsHelper = async (url) => {
  const BASIC_AUTHORIZATION = config.basicAuth;
  const options = {
    method: "GET",
    headers: {
      Accept: "application/json",
      Authorization: `Basic ${BASIC_AUTHORIZATION}`,
    },
  };
  try {
    const result = await fetch(url, options);
    const eventsData = await result.json();
    if (eventsData.errorMessage) {
      console.log(
        `Got an errorMessage rather than events data from FUB, here's the rest of the result:`
      );
      console.log(result);
      console.log(result.status);
      if (result.status === 429) {
        console.log(
          "Too Many Requests! Checking the retry-after from the headers:"
        );
      }
      // wait for retry-after seconds, then retry the same URL and return its
      // result so the caller's while loop keeps paginating
      const retryAfter = Number(result.headers.get("retry-after")) || 1;
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
      return syncEventsHelper(url);
    if (eventsData.events && eventsData.events.length) {
      console.log(
        `Fetched the following events: ${eventsData.events.map((x) => x.id)}`
      );
      const continueFetching = await addNewEvents(eventsData.events);
      console.log(
        `Processed those events using addNewEvents(), which returned ${continueFetching}`
      );
      if (continueFetching) {
        url = eventsData._metadata.nextLink;
        return url;
      } else {
        return false;
      }
    } else {
      return false;
    }
  } catch (err) {
    console.log(`error: ${err}`);
    return false;
  }
};

const syncEvents = async (req, res) => {
  // Prepare an authenticated request for the FUB API with a starting URL
  // Rather than use a while loop, use some kind of helper function or recursion!
  // Try to make a fetch from the FUB API
  // If the response contains more events, the helper returns the next URL to fetch; otherwise it returns false and the loop ends
  let currentUrl = "https://api.followupboss.com/v1/events?limit=100&offset=0";
  currentUrl = await syncEventsHelper(currentUrl);
  while (currentUrl) {
    // update the currentUrl or exit the while loop
    currentUrl = await syncEventsHelper(currentUrl);
  }

  res.status(200).json({
    message: "Sync done",
  });
};
daisycrego commented 3 years ago

FUB API devs recommend using Request Bin to handle testing the webhooks. Ok. Went there and found this: https://pipedream.com/apps/http?utm_source=requestbin.com&utm_medium=referral&utm_campaign=home&utm_content=http&_ga=2.17321786.268013912.1622126127-1900480021.1622126127

Given the small scale of this project, separating webhook testing between production and development is looking more and more unnecessary. I should just get a bare version of the app deployed somewhere so that I have an HTTPS endpoint and can avoid all of this Pipedream work, as cool as it is...

daisycrego commented 3 years ago

Deployed the app to Heroku (https://jbg-admin.herokuapp.com/). Worked through a lot of the initial build errors; there seems to be an issue if we use webpack 5. Using webpack 4 for now seems fine, but this could be an issue down the line that needs to be investigated further. For now, I was able to fix the issue by making sure to use the same version of webpack used by the original mern-skeleton from SH.

Related issues/links:

daisycrego commented 3 years ago

The Heroku builds stopped working inexplicably. Submitted a ticket here: https://help.heroku.com/tickets/995054?n=1

I checked Heroku Status, but there isn't an active issue open for this. However, there were recently errors with dynos timing out during the build: https://status.heroku.com/incidents/2284

daisycrego commented 3 years ago

Heroku Status now shows that others are in fact having the same timeout issues, so it's not my code. Reverting back so that all of my create-webhooks code is in place and running once the boot problem is resolved by their engineers.

daisycrego commented 3 years ago

Update: dynos are booting again. NTS: if there are future timeout issues, make sure to file a ticket with Heroku and check Heroku Status for updates to get things moving along.

daisycrego commented 3 years ago

New issue: Making a POST request to the /v1/webhooks FUB endpoint returned the response: { errorMessage: 'Invalid fields in the request body: 0.' }. Here is the code I'm using to make the request:

import fetch from "node-fetch";
import config from "../config/config";

const url = "https://api.followupboss.com/v1/webhooks";
const BASIC_AUTHORIZATION = config.basicAuth;
const data = {
  event: "eventsCreated",
  url: "https://jbg-admin.herokuapp.com/api/events/fub/callback",
};
const body = JSON.stringify(data);
const options = {
  method: "POST",
  headers: {
    Accept: "application/json",
    Authorization: `Basic ${BASIC_AUTHORIZATION}`,
    "X-System": "jbg-admin",
  },
  body: body,
};
try {
  const result = await fetch(url, options);
  console.log(await result.json());
} catch (err) {
  console.log(err);
}

I wrote an email to FUB API support to understand what that error message means, because as far as I can tell I am sending all the right things.

I only technically need to make this POST request once in the lifetime of the app, as long as the webhooks are never deleted on the FUB API side and need to be re-established (which is why I went to the trouble of adding this as code instead of just making a one-off POST request in Paw; I wanted this to be a repeatable process). When the app is built, if there are no webhooks, make them.

I will wait to hear back from FUB and also try to make my requests exactly the same as the Paw requests (update some headers, maybe)...

daisycrego commented 3 years ago

Whether we're creating a webhook through Paw or we're able to actually create one in Node.js, this is what an event coming into our new webhook endpoint looks like:

req.body:
{
    eventId: '231f384e-d64a-4b10-8540-5d4d8ee289f8', 
    eventCreated: '2021-05-27T18:01:49+00:00',
    event: 'eventsCreated',
    resourceIds: [ 89799 ],
    uri: 'https://api.followupboss.com/v1/events/89799'
}
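
Since the payload only carries pointers, the callback (or the background processor) can resolve it into the full event by following uri, e.g. (a sketch, reusing the Basic auth config from the rest of this app):

import fetch from "node-fetch";
import config from "../config/config";

const fetchEventFromWebhook = async (payload) => {
  const result = await fetch(payload.uri, {
    method: "GET",
    headers: {
      Accept: "application/json",
      Authorization: `Basic ${config.basicAuth}`,
    },
  });
  return result.json();
};
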
daisycrego commented 3 years ago

I tried adding an extra "Content-Type" header to copy the Paw request, and that actually seemed to resolve that error, but now I'm getting: { errorMessage: 'Missing required field in the request: system.' }. This is odd because I'm sending the X-System header, but maybe I'm typing that header incorrectly...

daisycrego commented 3 years ago

Tried changing the system name to remove the hyphen (jbg-admin -> jbgadmin). Still got the same error, though.

daisycrego commented 3 years ago

Alright, so I fixed the original issue, which I had emailed FUB API support about, by specifying utf-8 encoding in the HTTP headers: Content-Type: application/json; charset=utf-8. That got me out of that very weird 0. error. But I was still missing the correct system/X-System identifier. Apparently you register a system name the first time you do this, so you can't really change it afterwards. Anyway, the name of the system is jbg-admin, and ALL requests, including GET requests to the v1/webhooks route, need to include that X-System identifier.

The GET request to the v1/webhooks endpoint was actually what was failing; I had been checking and double-checking the POST request and knew that it contained the X-System header. The GET request apparently also requires an X-System header once a system name has been registered, even though initially it will allow some requests without one. Not really that important, but I needed to know it to fix the problem. Now we have a webhook to receive new events. Next is the business logic to decide whether a new event is a possible Zillow exemption or not, and then, either way, store it in the database.
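
For reference, the header set that works for both the GET and the POST to /v1/webhooks (config.basicAuth as elsewhere in this app):

const fubWebhookHeaders = {
  Accept: "application/json",
  "Content-Type": "application/json; charset=utf-8",
  "X-System": "jbg-admin",
  Authorization: `Basic ${config.basicAuth}`,
};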

daisycrego commented 3 years ago

Within the /api/events/fub/callback route, which the FUB webhook sends POST requests to with new event details, we need to do the majority of the work outside of the route itself, in a background process. To do this we could use redis, or maybe we can add a mongoose middleware that performs a task on the Node.js server every time a new document is created.

Ideas:

daisycrego commented 3 years ago

https://stackoverflow.com/questions/13582862/mongoose-pre-save-async-middleware-not-working-as-expected

Pre-hooks and Post-hooks - Mongoose API

Parallel middleware offer more fine-grained flow control.

var schema = new Schema(..);

// `true` means this is a parallel middleware. You **must** specify `true`
// as the second parameter if you want to use parallel middleware.
schema.pre('save', true, function(next, done) {
  // calling next kicks off the next middleware in parallel
  next();
  setTimeout(done, 100);
});

The hooked method, in this case save, will not be executed until done is called by each middleware.

Post middleware

Asynchronous Post Hooks

// Will not execute until the first middleware calls next()
schema.post('save', function(doc, next) {
  console.log('post2');
  next();
});


- Rather than scheduling a job to handle processing the event, we could perform that work in a middleware every time a new `Event` document is saved. This way, if the event document is missing required details when we save it, we run a process to try to recover those details, and if we find out that the event is a new Zillow event which might be a Zillow exemption, we can send an email.
- Will we run into any issues if an update is made to an Event at some point in its lifecycle, causing the middleware to be run inadvertently? We need an extra check in the post-save hook so we only perform this processing once. We could have a boolean, `alreadyProcessed`, which is initially `false`; that way we avoid sending out any emails twice (see the sketch after this list).
- With something like redis (scheduled/background jobs), we would also be putting infrastructure in place to schedule follow-ups. We could make sure that now, 4 hours from now, and 6 hours from now, the scheduled follow-up emails go out if that's what you wanted. But focusing on that could be YAGNI.
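
Here is a sketch of the post-save hook option with the alreadyProcessed guard (processEvent is a placeholder for the fetch/reconcile/email logic, and the schema shown is only illustrative):

import mongoose from "mongoose";

const EventSchema = new mongoose.Schema({
  id: String,
  // Hypothetical flag from the discussion above.
  alreadyProcessed: { type: Boolean, default: false },
});

// Placeholder for the real work: recover missing details, decide whether the
// event is a possible Zillow exemption, send the follow-up email.
const processEvent = async (doc) => { /* ... */ };

EventSchema.post("save", async function (doc, next) {
  if (doc.alreadyProcessed) {
    // The hook also fires when an Event is updated via save(); bail out so
    // follow-up emails are never sent twice.
    return next();
  }
  try {
    await processEvent(doc);
    doc.alreadyProcessed = true;
    await doc.save(); // re-triggers the hook, but the flag short-circuits it
  } catch (err) {
    console.log(err);
  }
  next();
});

const Event = mongoose.model("Event", EventSchema);
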
daisycrego commented 3 years ago

Todo

Use a background process (redis) or a post-save mongoose hook on the Event schema to handle:

daisycrego commented 3 years ago

Let's go with the redis background process approach. Setting up a system to schedule and run these jobs will be extensible to future features of this app, and will be a good lesson overall. Adding the mongoose hook to the Event save is still the fallback (use a post-save hook, examples here and here), but let's go with redis first. Opened another issue: https://github.com/daisycrego/jbg-admin/issues/7
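
One possible shape for that, using the Bull queue library as an example (the issue doesn't commit to a specific redis library, so treat the library choice and names here as assumptions):

import Queue from "bull";

// Backed by redis; REDIS_URL is whatever the deployment provides.
const eventQueue = new Queue("fub-events", process.env.REDIS_URL);

// Producer side: the /api/events/fub/callback route just enqueues the payload.
export const enqueueWebhook = (payload) => eventQueue.add(payload);

// Consumer side: a worker does the slow work outside the request/response cycle.
eventQueue.process(async (job) => {
  const payload = job.data;
  // ...fetch payload.uri from FUB, reconcile with the local Event doc,
  // and send a follow-up email if the event looks like a Zillow exemption.
});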