ImpactMarkets / impact-markets-app

Email digest of new donation registrations, new comments, etc. #63

Closed by Telofy 1 year ago

Telofy commented 1 year ago

It might be best to send a summary at most once per day if there have been new donations, comments, etc. Especially in the case of donations, I expect that many users will want to register several of them at once. That would get annoying for the organizer if they were to get one email per donation.

Let’s have a quick call some time to brainstorm how to do this most simply.

This could help in any case: https://blog.devgenius.io/how-to-create-npx-script-cli-with-nodejs-3927f8bb91eb

  1. Events such as the creation of a donation enqueue an event into a database table. A nightly job processes the queue.
  2. All donations, comments, etc. have a creation time. The nightly job creates a digest of all of them from the past 24 h. If it fails to run, some events are lost.
  3. The notification email is sent immediately (“New comment/s on your project …”) but then there’s a 24 h cool-down period.

I like #1 most for now, but maybe there are more variations.
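To make option 1 concrete, the enqueue step could look roughly like this (just a sketch; the `emailEvent` model and all names are placeholders, not a decided schema):

```typescript
// Sketch only: enqueue an email event when a donation is registered.
// The emailEvent model and its fields are placeholders, not an agreed-on schema.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

export async function enqueueDonationEvent(donationId: number, recipientId: string) {
  await prisma.emailEvent.create({
    data: {
      type: 'DONATION_REGISTERED', // consumed later by the nightly digest job
      recipient: recipientId, // the user who should see this in their digest
      parameters: { donationId }, // free-form JSON payload for the email template
      status: 'PENDING', // the nightly job sets this to COMPLETED or DROPPED
    },
  })
}
```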

SendGrid is already integrated into the platform in the context of NextAuth. We can use the same credentials for all transactional emails. I can give you access to SendGrid.
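For reference, sending a transactional email through SendGrid from Node would look something like this (a sketch with the `@sendgrid/mail` package; the sender address and environment variable name are assumptions, not our actual config):

```typescript
// Sketch: send one transactional email via SendGrid's Node SDK.
import sgMail from '@sendgrid/mail'

// Assumed environment variable name; use whatever the NextAuth integration already uses.
sgMail.setApiKey(process.env.SENDGRID_API_KEY ?? '')

export async function sendDigestEmail(to: string, subject: string, html: string) {
  await sgMail.send({
    to,
    from: 'notifications@impactmarkets.io', // assumed sender; must be verified in SendGrid
    subject,
    html,
  })
}
```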

DavidRV00 commented 1 year ago

Notes from meeting:

We'll go with option 1 from the list above, and I've received the SendGrid API key from Dawn to get started on this.

DavidRV00 commented 1 year ago

Here's the plan:

I expect this will take about 7 commits (which may or may not each get their own PR) in about this order:

[Image: plan gv]

And the design of the entire system looks like this:

[Image: design gv]

Telofy commented 1 year ago

Oh wow, that’s so comprehensively defined! :heart_eyes:

> We'll use node-cron to make scheduled jobs

I don’t think that would be convenient or necessary. I already use the normal cron on my server for all sorts of tasks (mostly nightly backups), so I can just use it as well to call some endpoint every night, like `0 6 * * * cd ... && npx notifications send | /usr/bin/logger -t notifications`. All I’ll need for that is a command like `npx notifications send` or `npx send-notification` (not sure about the naming conventions) that I can call from cron.
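For illustration, that command could be a small bin script along these lines (a sketch only; the command name, the argument parsing, and the `sendDigests` helper are placeholders):

```typescript
#!/usr/bin/env node
// Hypothetical bin script, exposed via package.json, e.g. "bin": { "notifications": "dist/cli.js" },
// so that cron can invoke it as: npx notifications send
async function main() {
  const [command] = process.argv.slice(2)
  if (command !== 'send') {
    console.error('Usage: notifications send')
    process.exit(1)
  }
  // Placeholder: process the pending events and send the digest emails.
  const { sendDigests } = await import('./sendDigests')
  await sendDigests()
}

main().catch((error) => {
  console.error(error)
  process.exit(1)
})
```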

> We'll implement a sort of basic events queue (which will actually just be a basic prisma+postgresql table) using a strategy similar to this one.

That’s a bit fancier than will be necessary. That queue supports concurrent access. I expect the email sending to take a minute and to happen every 24 h, so it’s very unlikely that two jobs will overlap, and if we later add other jobs, they’ll be for different types of events. So I don’t think you need to worry about concurrency.

Something like `(id, time, type, parameters, recipient, status)` maybe? `status` could be `pending` (initially), `dropped` (recipient has opted out of notifications), `completed` (sent). `parameters` could be a JSON field with arbitrary stuff that can go in the emails, like donation amount, donation date, comment ID, project URL, etc.

Maybe you can also easily introduce a `processing` status, but only if it’s not a lot of extra work. ^.^
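For concreteness, a rough Prisma sketch of what those columns could map to (model, field, and enum names are just placeholders):

```prisma
// Rough sketch only; names and enum values are placeholders, not a decided schema.
model EmailEvent {
  id         Int              @id @default(autoincrement())
  time       DateTime         @default(now())
  type       EmailEventType
  parameters Json             // arbitrary payload: donation amount, comment ID, project URL, ...
  recipient  String           // ID (or address) of the user to notify
  status     EmailEventStatus @default(PENDING)
}

enum EmailEventType {
  DONATION_REGISTERED
  COMMENT_CREATED
}

enum EmailEventStatus {
  PENDING
  PROCESSING
  COMPLETED
  DROPPED
}
```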

> During development we'll keep event-sending turned off, and only turn it on by default after adding the notification settings to the settings page, as the last step in the project.

At a previous company we handled this by having all environments other than production drop any email that wasn’t addressed to `*@companyname.com`. Maybe you can do the same with `*@impactmarkets.io` outside of production? I can set up an @impactmarkets.io forward for you. :-)
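Concretely, that guard could be as simple as something like this (a sketch; the environment check and the domain constant are assumptions):

```typescript
// Sketch of the non-production safety net: outside production, silently drop any
// email whose recipient is not an @impactmarkets.io address.
const ALLOWED_DEV_DOMAIN = '@impactmarkets.io'

export function shouldSendEmail(recipient: string): boolean {
  if (process.env.NODE_ENV === 'production') {
    return true // production sends to everyone
  }
  return recipient.toLowerCase().endsWith(ALLOWED_DEV_DOMAIN)
}
```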

DavidRV00 commented 1 year ago

> Oh wow, that’s so comprehensively defined! :heart_eyes:

Thanks! I was looking for an excuse to practice some system design and learn some graphviz features haha :+1:

> I already use the normal cron on my server for all sorts of tasks (mostly nightly backups)

Oh great, even better. I'll just make the scripts and you can add the crontab lines.

> That’s a bit fancier than will be necessary. ...

Sounds good, and your idea for the table sounds fine to me.

> At a previous company we handled this by having all environments other than production drop any email that wasn’t addressed to `*@companyname.com`. Maybe you can do the same with `*@impactmarkets.io` outside of production? I can set up an @impactmarkets.io forward for you. :-)

Sounds good to me. That should be pretty convenient for testing and production. Let me know if I need to do anything for the email forward.

Thanks!

Updated plan and design after your comments:

[Images: plan gv, design gv]

DavidRV00 commented 1 year ago

FYI, I've sent the first PR just for adding the basic stuff to the Prisma models. I'll continue working on this on Monday.

DavidRV00 commented 1 year ago

Bit of progress on the emails themselves (early preview below). Obviously the look and info still need work, but we've got this being put out by an npx CLI script using MJML for the HTML rendering :+1:
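In case it helps to see it, the rendering step looks roughly like this (a sketch of the `mjml2html` call; the template content here is made up, not the actual email):

```typescript
// Sketch: render the digest email's HTML with MJML before handing it to SendGrid.
import mjml2html from 'mjml'

export function renderDigestHtml(projectTitle: string, commentCount: number): string {
  const { html, errors } = mjml2html(`
    <mjml>
      <mj-body>
        <mj-section>
          <mj-column>
            <mj-text font-size="16px">
              ${commentCount} new comment(s) on your project "${projectTitle}".
            </mj-text>
          </mj-column>
        </mj-section>
      </mj-body>
    </mjml>
  `)
  if (errors.length > 0) {
    throw new Error(`MJML rendering failed: ${JSON.stringify(errors)}`)
  }
  return html
}
```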

I've had some trouble today getting the new npx CLI script to import modules from the im-app codebase (it's kind of weird having it sit inside the same package as a sort of separate entry point, and I'm not sure if we want it to be a totally separate package or what), which we really want because the script will need to query the Prisma tables using the code we already have. Hopefully I can figure out the best way to do that tomorrow.

[Image: notifemails]

Also @Telofy friendly ping on the PR I sent out a couple days ago. No huge rush; it's not blocking anything for me yet. Just making sure it's on your radar :+1:

Telofy commented 1 year ago

The design looks plenty spiffy! :smiley:

Did you find a solution to the importing problem? I would’ve thought that there are standard ways to do that, like with Django and management commands? :thinking:

I’ll look into the PR! You said you’d still add to it, so I hadn’t prioritized checking it yet.

Telofy commented 1 year ago

I’ve had a look at the PR. Great start!

Another idea on the entry point: maybe a standard tRPC backend like `/api/trpc/user.sendEmails`, but it checks that the request comes from localhost. If the request doesn’t come from localhost, it’s a 403. Then I can additionally block the URL in Nginx (the reverse proxy in front of our production instances). To send the emails, I can use `curl` to call the URL from the cron job on the server. That should solve all headaches with module imports. I’m a bit worried about request timeouts, but I control those, so that should be solvable too (unlike on Heroku!).
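To illustrate the localhost check, here’s a sketch written as a plain Next.js API route (just for illustration, not necessarily how it would look in our tRPC setup; the path and the `sendDigests` helper are placeholders):

```typescript
// pages/api/job/send-emails.ts (hypothetical path)
// Sketch: only accept requests that originate from the machine itself; everything
// else gets a 403. Nginx can additionally block the URL from the outside.
import type { NextApiRequest, NextApiResponse } from 'next'

const LOCALHOST_ADDRESSES = new Set(['127.0.0.1', '::1', '::ffff:127.0.0.1'])

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const remoteAddress = req.socket.remoteAddress ?? ''
  if (!LOCALHOST_ADDRESSES.has(remoteAddress)) {
    res.status(403).json({ error: 'Forbidden' })
    return
  }
  // Placeholder for the actual work: process pending events and send the digests.
  const { sendDigests } = await import('../../../lib/sendDigests')
  await sendDigests()
  res.status(200).json({ ok: true })
}
```

The cron job would then be something like `curl -fsS http://localhost:3000/api/job/send-emails | /usr/bin/logger -t notifications` (port and path assumed).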

DavidRV00 commented 1 year ago

Thanks!

I haven't quite got it working yet (but I've been sick the past couple days, so a little slow :cold_face: )

Yeah, there are standard ways of doing it; the most common way seems to be using `npm link`, which I think can work, but now I'm running into some mysterious errors with TypeScript file types and such. I'll ask about it on the Discord soon and see if anybody is familiar with it before I spend much more time diving down that rabbit hole.

I'm sure a new tRPC backend would work, but I worry about adding complexity and bloat to the API, maybe leading down a path where lots of things get shoehorned into an API where they don't really belong :grimacing:. But on the other hand, it's a pretty small service, and I doubt it would really be problematic. Can we call that the Plan B if I don't sort out the local imports pretty quickly?

Telofy commented 1 year ago

Oh, get well! :-o I’m sick too…

> I'm sure a new tRPC backend would work, but I worry about adding complexity and bloat to the API, maybe leading down a path where lots of things get shoehorned into an API where they don't really belong :grimacing:. But on the other hand, it's a pretty small service, and I doubt it would really be problematic. Can we call that the Plan B if I don't sort out the local imports pretty quickly?

Sure! But it would be very well encapsulated there in one function. You could even create a separate file `job.ts` (`/api/trpc/job.sendEmails`) to encapsulate it even better. That shouldn’t introduce any complexity outside the file. Or are we misunderstanding each other?