option 1:
we have a simple "main service" that is accessed whenever a new user signs up for notifications. the main service will log the user's details (in tbl_Users) and the class details (in tbl_Classes), and create a new order entry (in tbl_Orders).
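rough sketch of that signup write, assuming sqlite; the column names other than is_active are just made up for illustration:

```python
import sqlite3

def handle_signup(db: sqlite3.Connection, phone: str, class_id: str, class_name: str) -> int:
    """log the user and class, then create an active order (is_active = 1)."""
    with db:  # one transaction: commits on success, rolls back on error
        cur = db.execute("INSERT INTO tbl_Users (phone) VALUES (?)", (phone,))
        user_id = cur.lastrowid
        # INSERT OR IGNORE so two signups for the same class don't duplicate it
        db.execute(
            "INSERT OR IGNORE INTO tbl_Classes (class_id, name) VALUES (?, ?)",
            (class_id, class_name),
        )
        cur = db.execute(
            "INSERT INTO tbl_Orders (user_id, class_id, is_active) VALUES (?, ?, 1)",
            (user_id, class_id),
        )
        return cur.lastrowid
```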
if the user wants to change their order (i.e. disable notifications for a class or re-enable them), they can do so via the Twilio API, which then calls the "main service" API to set the "is_active" column on the appropriate tbl_Orders row to 0 (or 1 if the user wants to re-enable).
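the toggle itself is just a one-column update on the main service's side, something like this (same assumed schema as above):

```python
import sqlite3

def set_order_active(db: sqlite3.Connection, user_id: int, class_id: str, active: bool) -> bool:
    """flip is_active for one user/class order; returns False if no such order exists."""
    with db:
        cur = db.execute(
            "UPDATE tbl_Orders SET is_active = ? WHERE user_id = ? AND class_id = ?",
            (1 if active else 0, user_id, class_id),
        )
        return cur.rowcount > 0
```

the Twilio webhook would parse the inbound message, look up the user, and hit whatever main-service endpoint wraps this function.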
the scraper will run on a set interval (e.g. every minute), calling the main service to get the list of classes from tbl_Classes and checking each of those classes for open seats. it'll then send a notification to every user registered for a class that was found to be open, and call the main service again to set the "is_active" column on the matching tbl_Orders rows to 0.
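rough sketch of that loop for option 1; the main-service base URL and routes here are hypothetical, and check_class_open / send_notification are stand-ins for the actual scraping and Twilio calls:

```python
import time
import requests

MAIN_SERVICE = "http://main-service:8000"  # hypothetical base URL and routes

def check_class_open(class_id: str) -> bool:
    """stand-in for the actual page scrape."""
    raise NotImplementedError

def send_notification(phone: str, class_id: str) -> None:
    """stand-in for the actual Twilio send."""
    raise NotImplementedError

def run_scraper(interval_s: int = 60) -> None:
    while True:
        # every cycle: fetch the watched classes from the main service
        classes = requests.get(f"{MAIN_SERVICE}/classes").json()
        for cls in classes:
            if not check_class_open(cls["class_id"]):
                continue
            # notify everyone with an active order for this class...
            users = requests.get(
                f"{MAIN_SERVICE}/classes/{cls['class_id']}/subscribers"
            ).json()
            for user in users:
                send_notification(user["phone"], cls["class_id"])
            # ...then tell the main service to set is_active = 0 for those orders
            requests.post(
                f"{MAIN_SERVICE}/orders/deactivate",
                json={"class_id": cls["class_id"]},
            )
        time.sleep(interval_s)
```

note that this is multiple HTTP round trips per cycle (and more per open class), which is exactly the chattiness option 2 tries to avoid.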
option 2:
same idea, but this time all the tables live in the scraping service. this avoids the option 1 scenario where the scraper has to call a completely separate service (with its own database) very frequently.
so in essence, whenever a user signs up or performs some other action, we call an API on the scraper job directly to update the tables. that way the scraper is more centralized and never has to go out over the network to fetch data; a rough sketch is below.
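minimal sketch of option 2, assuming Flask (the framework choice and route names are my assumptions): the scraper process owns the sqlite file and exposes the write endpoints itself, with the scrape loop running in a background thread against the same local db:

```python
import sqlite3
import threading
from flask import Flask, request

app = Flask(__name__)
DB_PATH = "scraper.db"  # local to the scraper process: no cross-service calls

@app.post("/signup")
def signup():
    body = request.get_json()
    db = sqlite3.connect(DB_PATH)
    with db:
        cur = db.execute("INSERT INTO tbl_Users (phone) VALUES (?)", (body["phone"],))
        db.execute(
            "INSERT INTO tbl_Orders (user_id, class_id, is_active) VALUES (?, ?, 1)",
            (cur.lastrowid, body["class_id"]),
        )
    db.close()
    return {"ok": True}

def scrape_loop():
    # same polling logic as the option 1 sketch, but reading tbl_Classes /
    # tbl_Orders straight out of the local database instead of over HTTP
    ...

if __name__ == "__main__":
    threading.Thread(target=scrape_loop, daemon=True).start()
    app.run(port=8000)
```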
option ??:
other approaches can also be discussed/chosen (similar to the above two, or completely different).
I think a single server is better for avoiding performance issues (?), but it introduces complexity problems that we need to deal with.
I would go with Option 2!