Kiro47 / MTU-Transfer-Course-Gatherer

I got tired of looking for classes to transfer in by hand, so here we are.
Mozilla Public License 2.0

General Discussion / Future Enhancements #37

Open codetheweb opened 4 years ago

codetheweb commented 4 years ago

Wanted to get some thoughts down before I forgot.

Assuming we generalize to start scraping all offered courses and make them available in a similar view to what we have currently, here are a few features I think would be nice:

* Dropdown for each row with offered sections

* [Chips](https://material-ui.com/components/chips/) for lab fee (if applicable), whether it's online, whether it's a lab

* Have a virtual 'basket' of courses

  * Specific sections can be added to the basket, and marked as 'maybe' or 'planning to take'
  * Basket would have a method to show all CRNs in an easy-to-copy list
  * Basket would have a method to filter all courses in table by schedule-compatible

* Scrape RateMyProfessors and include in results

Blu3spirits commented 4 years ago

I like the general idea of having a basket. But I do not like the idea of managing said basket across updates/iterations. What happens when that course information is no longer valid? What happens when the user tries to load an invalid state or get an invalid item from the api? Thus, I propose that upon page reload, their state is removed.

Additional scraping could be done (or even just redirecting to RMP's query URL). Their query URL is as follows: https://www.ratemyprofessors.com/search.jsp?query=FIRSTNAME+LASTNAME

I'm not entirely sure I see the value of having Chips for lab fee/Online/Lab over just having some form of text or icons.

A dropdown in the form of a Collapse would be nice for the rich list information.

Blu3spirits commented 4 years ago

I'm currently working on expanding the models to increase our data with real MTU classes.

Current issues that I would like your guys' ideas on are:

codetheweb commented 4 years ago
  1. Personally, I don't see the value in keeping courses around for longer than a year (so have 2 semesters worth of history). I assume the CRN + term is unique? So we could probably use that for a primary key.
  2. I would say that some information duplication is fine at this point since we're not modifying data after scraping it. For professors specifically, it probably makes sense to have a separate table.
Blu3spirits commented 4 years ago

I don't see the value of historical data from a front-end perspective, but I can definitely see value in keeping it around in the back-end for other people to use, especially to see past years' professors in case the current posting is TBA.

Kiro47 commented 4 years ago

I like the general idea of having a basket. But I do not like the idea of managing said basket across updates/iterations. What happens when that course information is no longer valid? What happens when the user tries to load an invalid state or get an invalid item from the api? Thus, I propose that upon page reload, their state is removed.

I disagree with this rather heavily actually. I know way too many people who would close the tab and go back to it later (perhaps across a span of a couple of days selecting courses, as most people don't decide in a night). However, I do see the appeal of not persisting it, as it's much easier.

What happens when that course information is no longer valid?

From the notes you've made about the back-end basically keeping the data forever, we should really just be able to have a "last updated" field, per se. Since the state also contains more or less most of the object, we can just keep a reference and maybe show some outdated-warning notification on the GUI object for it?

What happens when the user tries to load an invalid state or get an invalid item from the api?

Not sure what exactly counts as an "invalid state" at this point. As for requesting an invalid item from the API (without prior knowledge that the state is invalid), it seems pretty obvious to me that you'd just send back some 4xx HTTP code, and of course we can make a corresponding GUI element for it, probably with an option to delete.

How we handle saving said data depends on how we want to approach this overall. If we want to handle login information (or MTU SSO), obviously we should be storing it on the back-end in a table. However, if we want to go with a no-auth setup and have it be device persistent we could go with something like a cookie storing the information or something like localforage for device level saving.

Kiro47 commented 4 years ago

Assuming we generalize to start scraping all offered courses and make them available in a similar view to what we have currently, here are a few features I think would be nice:

For this in particular, before we go about doing this I kind of would like to look more into the moral area of this. I know it's technically possible, but I also know the way to do it is a very long URL with a lot of blank and weird parameters so I'm not sure it should be a possible thing. I'd rather look at this first to make sure we won't have any form of legal issues with this later.

Kiro47 commented 4 years ago
* Dropdown for each row with offered sections

* [Chips](https://material-ui.com/components/chips/) for lab fee (if applicable), whether it's online, whether it's a lab

* Have a virtual 'basket' of courses

  * Specific sections can be added to the basket, and marked as 'maybe' or 'planning to take'
  * Basket would have a method to show all CRNs in an easy-to-copy list

I'm a fan of most of this tbh, I've used a couple other class signup/registration systems that have all of these features and it's pretty convenient.

  * Basket would have a method to filter all courses in table by schedule-compatible

Schedule compatibility can be weird sometimes, if we want to do it only by classes that's fine.

* Scrape RateMyProfessors and include in results

Big fan of this if we are able to go down the actual class route, would make life a lot easier. Would we want to do some kind of modal pop-up with this on clicking the url or just open a new tab to the page?

Kiro47 commented 4 years ago

I'm currently working on expanding the models to increase our data with real MTU classes.

Current issues that I would like your guys' ideas on are:

* How to deal with class history? CRNs aren't unique across years, lots of duplicated information.

* There's the possibility of lots of new tables to make foreign key relations into, however most of these new tables would be 1-2 fields. With professor info there would be more attributes we can collect (such as url). Any more information ideas?

1. Personally, I don't see the value in keeping courses around for longer than a year (so have 2 semesters worth of history). I assume the CRN + term is unique? So we could probably use that for a primary key.

2. I would say that some information duplication is fine at this point since we're not modifying data after scraping it. For professors specifically, it probably makes sense to have a separate table.

I will happily die on the hill that the data's primary key should be a UUID. It's stable, effective, guarantees no conflicts, and is easy to use.

We can also totally just have all of that data (crn, course_id, semester, year, etc) in separate columns, then do search queries pretty easily off of that. Trying to create a master formula for UUID creation can and will be a mess.
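
To make that concrete, here's a minimal sketch of what a surrogate UUID key plus separate searchable columns could look like (the model and field names are illustrative, not the project's actual schema):

```python
import uuid

from django.db import models


class CourseSection(models.Model):
    # Surrogate primary key: stable, collision-free, and never derived from course data.
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)

    # The identifying data stays in plain, indexed columns so it's still easy to query.
    crn = models.CharField(max_length=5)
    course_id = models.CharField(max_length=16)
    semester = models.CharField(max_length=8)
    year = models.PositiveSmallIntegerField()

    class Meta:
        indexes = [models.Index(fields=["crn", "year", "semester"])]
```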

codetheweb commented 4 years ago

Trying to create a master formula for UUID creation can and will be a mess.

While that's true, having a primary key based off the data could make our lives a bit easier (depending on how we set up scraping). For example, if we already have a database of scraped courses and do an 'update' scrape, how do we tell if newly scraped courses are new or already existing in our database?

How we handle saving said data depends on how we want to approach this overall. If we want to handle login information (or MTU SSO), obviously we should be storing it on the back-end in a table. However, if we want to go with a no-auth setup and have it be device persistent we could go with something like a cookie storing the information or something like localforage for device level saving.

If we go the route of persistent baskets, I would much rather do it all locally (local storage or whatever) instead of storing them on the backend and requiring users to log in.

Would we want to do some kind of modal pop-up with this on clicking the url or just open a new tab to the page?

It depends on how we present the rest of the course information, but I think some kind of popup / hoverable modal with a scannable overview of the professor would be nice, and then the user could click something to open the full RateMyProfessors page in a new tab.

Kiro47 commented 4 years ago

Trying to create a master formula for UUID creation can and will be a mess.

While that's true, having a primary key based off the data could make our lives a bit easier (depending on how we set up scraping). For example, if we already have a database of scraped courses and do an 'update' scrape, how do we tell if newly scraped courses are new or already existing in our database?

It would be as simple as the current proposition of a data-created key, except split out into separate fields instead of string building.

SELECT EXISTS(SELECT id FROM some_table WHERE crn = 'some_crn' AND year = 'some_year' AND semester = 'summer/fall/winter');
-- Perhaps also a "last modified" date lookup to compare against as well, as mentioned above.

Just would need to combine that into part of a database function where we essentially have

if not exists(result of the query above):
    INSERT
else:
    do nothing, the entry is already there
Blu3spirits commented 4 years ago

It should be noted that with Django we should almost never be touching the DB directly.

Just wanted to post that rq before I respond to the rest of this thread.

Kiro47 commented 4 years ago

It should be noted that with Django we should almost never be touching the DB directly.

Just wanted to post that rq before I respond to the rest of this thread.

The SQL statement performed was more of just "this is how it'd work"; if there's a Django component for similar results, that's great. I'm several years more familiar with SQL than Django.

Unless I'm misunderstanding, it'd be the scraper performing this anyways, which works independently. There is the argument of how do we define it when calling the scraper from the Django app portion for updates, however.
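
For reference, the ORM version of that EXISTS check would look roughly like this, reusing the illustrative CourseSection model from the earlier sketch (not a real model name from the project, and the values are placeholders):

```python
# Rough ORM equivalent of the SQL above.
already_scraped = CourseSection.objects.filter(crn="12345", year=2020, semester="fall").exists()

if not already_scraped:
    CourseSection.objects.create(crn="12345", year=2020, semester="fall")
# else: do nothing, the entry is already there
```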

Blu3spirits commented 4 years ago

From the notes you've made about the back-end basically keeping the data forever, we should really just be able to have a "last updated" field, per se. Since the state also contains more or less most of the object, we can just keep a reference and maybe show some outdated-warning notification on the GUI object for it?

Creating this field is trivial; however, the implementation on the UI side is going to be the ugly bit. Now the workflow would be:

* Load from cookie

* Load from API

* If cookie information isn't up to date based on the API information, notify

As for requesting an invalid item from the API (without prior knowledge that the state is invalid), it seems pretty obvious to me that you'd just send back some 4xx HTTP code, and of course we can make a corresponding GUI element for it, probably with an option to delete.

To give an example of an invalid state: an object loaded from last year because it was saved, where this year's data HAS a matching CRN but it's not the same course. This is why I think creating a good composite key is a good idea.

This of course is also dependent on us actually exposing Tech's real class schedule and not just "valid transfer courses"

For this in particular, before we go about doing this I kind of would like to look more into the moral area of this. I know it's technically possible, but I also know the way to do it is a very long URL with a lot of blank and weird parameters so I'm not sure it should be a possible thing. I'd rather look at this first to make sure we won't have any form of legal issues with this later.

It's technically possible, and I have looked into actually formatting the data and have an example.

I will happily die on the hill that the data's primary key should be a UUID. It's stable, effective, guarantees no conflicts, and is easy to use.

This sounds great and I agree with you. But for the current dataset a UUID isn't going to work; we need something 100% unique but also easily searchable, IE:

CRN: 12345
Year: 2020
if 123452020 entry doesn't exist:
    create it
if it does exist:
    if all fields match:
        ignore
    if all fields don't match:
        update

Unless I'm misunderstanding, it'd be the scraper performing this anyways, which works independently. There is the argument of how do we define it when calling the scraper from the Django app portion for updates, however.

Nope, your understanding is correct.

One thing I've been tossing around in my head is do we really want to continue just straight up using the Django models? We lose some functionality in doing so. IE: Serialization and Validation. However, this means making the API NOT read only which I'm not a total fan of. I think it should stay read only.

Kiro47 commented 4 years ago

From the notes you've made about the back-end basically keeping the data forever, we should really just be able to have a "last updated" field, per se. Since the state also contains more or less most of the object, we can just keep a reference and maybe show some outdated-warning notification on the GUI object for it?

Creating this field is trivial; however, the implementation on the UI side is going to be the ugly bit. Now the workflow would be:

* Load from cookie

* Load from API

* If cookie information isn't up to date based on the API information, notify

I've basically done this before, except it was loading from local data instead of cookies; I never found it to be that bad personally. @codetheweb Do you have a comment on this?

As for requesting an invalid item from the API (without prior knowledge that the state is invalid), it seems pretty obvious to me that you'd just send back some 4xx HTTP code, and of course we can make a corresponding GUI element for it, probably with an option to delete.

To give an example of an invalid state: an object loaded from last year because it was saved, where this year's data HAS a matching CRN but it's not the same course. This is why I think creating a good composite key is a good idea.

Okay so, a composite key and your 123452020 example are different things. Composite keys I'm fine with, the concatenation of fields I'm not.

CRN: 12345
Year: 2020
if 123452020 entry doesn't exist:
   create it
if it does exist:
   if all fields match:
       ignore
   if all fields don't match:
       update

What I don't understand is what's so hard about:

CRN: 12345
Year: 2020
obj = SomeModel.objects.filter(crn=CRN, year=Year)
if obj.exists():
  if all fields match:
    pass
  else:
    update()
else:
  create()

One thing I've been tossing around in my head is do we really want to continue just straight up using the Django models? We lose some functionality in doing so. IE: Serialization and Validation. However, this means making the API NOT read only which I'm not a total fan of. I think it should stay read only.

On the python level perhaps, but staying read only should be pretty trivial at the API level. I know there's some Django back-end magic that generates most of the REST for us, but disabling everything that's not read only shouldn't be difficult, or would it?
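
Assuming the "back-end magic" here is Django REST Framework (an assumption on my part), keeping it read only is mostly a matter of picking the right viewset base class. A minimal sketch with hypothetical names:

```python
# Sketch assuming Django REST Framework; all names/import paths are illustrative.
from rest_framework import serializers, viewsets

from courses.models import CourseSection  # placeholder import


class CourseSectionSerializer(serializers.ModelSerializer):
    class Meta:
        model = CourseSection
        fields = "__all__"


class CourseSectionViewSet(viewsets.ReadOnlyModelViewSet):
    # ReadOnlyModelViewSet only wires up list/retrieve, so POST/PUT/PATCH/DELETE are never exposed.
    queryset = CourseSection.objects.all()
    serializer_class = CourseSectionSerializer
```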

Blu3spirits commented 4 years ago

From the notes you've made about the back-end basically keeping the data forever, we should really just be able to have a "last updated" field, per se. Since the state also contains more or less most of the object, we can just keep a reference and maybe show some outdated-warning notification on the GUI object for it?

Creating this field is trivial; however, the implementation on the UI side is going to be the ugly bit. Now the workflow would be:

* Load from cookie

* Load from API

* If cookie information isn't up to date based on the API information, notify

I've basically done this before, except it was loading from local data instead of cookies; I never found it to be that bad personally. @codetheweb Do you have a comment on this?

As for requesting an invalid item from the API (without prior knowledge that the state is invalid), it seems pretty obvious to me that you'd just send back some 4xx HTTP code, and of course we can make a corresponding GUI element for it, probably with an option to delete.

To give an example of an invalid state: an object loaded from last year because it was saved, where this year's data HAS a matching CRN but it's not the same course. This is why I think creating a good composite key is a good idea.

Okay so, a composite key and your 123452020 example are different things. Composite keys I'm fine with, the concatenation of fields I'm not.

Specifically what I meant is Django's unique_together: https://docs.djangoproject.com/en/dev/ref/models/options/#unique-together

I'll be the first to admit specific verbiage isn't my strong suit. What I meant is a relation between two columns that forms something unique, which I think could be CRN/YEAR.
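
As a sketch on the illustrative model from earlier, that's just a Meta constraint (unique_together per the linked docs, or the newer UniqueConstraint, which enforces the same thing):

```python
from django.db import models


class CourseSection(models.Model):
    # Trimmed down to the two columns that matter for uniqueness; names are illustrative only.
    crn = models.CharField(max_length=5)
    year = models.PositiveSmallIntegerField()

    class Meta:
        # Older style, as in the linked docs:
        unique_together = [("crn", "year")]
        # Equivalent newer style (Django 2.2+):
        # constraints = [
        #     models.UniqueConstraint(fields=["crn", "year"], name="unique_crn_per_year"),
        # ]
```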

CRN: 12345
Year: 2020
if 123452020 entry doesn't exist:
   create it
if it does exist:
   if all fields match:
       ignore
   if all fields don't match:
       update

What I don't understand is what's so hard about:

CRN: 12345
Year: 2020
obj = SomeModel.objects.filter(crn=CRN, year=Year)
if obj.exists():
  if all fields match:
    pass
  else:
    update()
else:
  create()

Actually, now that I'm thinking more about this: Django has update_or_create(), and if we are keeping data around for archival purposes we don't need to worry about queryset keys (which is where I was going originally). So this would actually look something like:


CRN = 12345
year = 2020
some_attribute = "foo"

# Match on the unique (crn, year) pair; everything else goes in defaults so it gets updated in place.
obj, created = Model.objects.update_or_create(
    crn=CRN,
    year=year,
    defaults={"some_attribute": some_attribute},
)


With this plus the unique_together Meta option mentioned earlier, we shouldn't have any issues if we ever expose real Tech classes using the maybe™ endpoint. Even if we don't necessarily use unique_together, I don't think we'll have issues.
One thing I've been tossing around in my head is do we really want to continue just straight up using the Django models? We lose some functionality in doing so. IE: Serialization and Validation. However, this means making the API NOT read only which I'm not a total fan of. I think it should stay read only.

On the python level perhaps, but staying read only should be pretty trivial at the API level. I know there's some Django back-end magic that generates most of the REST for us, but disabling everything that's not read only shouldn't be difficult, or would it?

Specifically, what I meant here is: do you guys think we should be using, in code, someModel.objects.create(some_data...) rather than just using requests.post(endpoint, data)?

codetheweb commented 4 years ago

@Kiro47 all I was trying to say was that if you're gonna filter by 4 different fields anyways to see if the instance is unique, you may as well just create a key based off those fields. Then you only have to compute the field once, instead of potentially having a bunch of copy/pasted filter statements. And yeah, I was thinking a composite key, not just concatenating fields together. Doing a composite key + Django's update_or_create would be a pretty clean solution IMO. Edit: sniped by @Blu3spirits

  • Load from cookie

  • Load from API

  • If cookie information isn't up to date based on the API information, notify

I think local storage would probably be more appropriate than cookies, but the logic for the UI really shouldn't be that bad. Just needs to compare basket courses against the courses returned by the API, update them, and if something doesn't exist show a warning (could also show a warning if a saved section is full or something).

codetheweb commented 4 years ago

Specifically, what I meant here is: do you guys think we should be using, in code, someModel.objects.create(some_data...) rather than just using requests.post(endpoint, data)?

Are you asking whether we should use REST instead of connecting to the database directly for loading the scraping data?

Blu3spirits commented 4 years ago

Specifically, what I meant here is: do you guys think we should be using, in code, someModel.objects.create(some_data...) rather than just using requests.post(endpoint, data)?

Are you asking whether we should use REST instead of connecting to the database directly for loading the scraping data?

Yes, basically this.

Blu3spirits commented 4 years ago

Specifically, what I meant here is: do you guys think we should be using, in code, someModel.objects.create(some_data...) rather than just using requests.post(endpoint, data)?

Are you asking whether we should use REST instead of connecting to the database directly for loading the scraping data?

Yes, basically this.

Well, not really. Models don't connect to the DB directly. There's still another layer of protection, just not as many layers.

codetheweb commented 4 years ago

🤷 I don't see many advantages to REST over using models besides maybe being able to run the scraper on a different machine (although maybe I'm missing something).

Blu3spirits commented 4 years ago

A few just off the top of my head:

I'm not totally for this option. I just wanted to get everyone's thoughts about it. I much prefer just using models as it's easier to code (and doesn't require another script to be checked into git)

codetheweb commented 4 years ago

I would rather use models for now and move to REST later if it becomes an issue / we need those benefits.

As far as hosting goes, I was thinking that it'd be nice to have 1 'officially' hosted version that we can publicize, ideally not hosted from someone's house. Something like Google's Kubernetes Engine or Dokku could work.

Blu3spirits commented 4 years ago

I think that's a really far out issue to tackle. I would rather not pay for GKE, nor would it be a good idea to have just one person pay for it. For now, I and everyone else that contributes have a fully secure way to host, so I'm fine with things as is. Especially now that there are two 'services' to run.

Blu3spirits commented 4 years ago

If we all decide we want to host and want some way for all of us to get hits on the sites (because that'd be kinda cool), then we can figure out a load-balancing solution that points to everyone's servers in a round-robin fashion.

codetheweb commented 4 years ago

I think that's a really far out issue to tackle.

Yep, just wanted to start thinking about it now.

If we all decide we want to host and want some way for all of us to get hits on the sites (because that'd be kinda cool), then we can figure out a load-balancing solution that points to everyone's servers in a round-robin fashion.

That sounds really cool, I like it.

Kiro47 commented 4 years ago

Specifically, what I meant here is: do you guys think we should be using, in code, someModel.objects.create(some_data...) rather than just using requests.post(endpoint, data)?

Definitely pushing to use Django's model create/update functions directly. Opening up a REST create endpoint has the potential for a load of things we really don't need. The syncing of datasets you mentioned previously can (and probably should) happen at the DBMS level (pgpool and Slony are two I've heard a lot of good things about).

@Kiro47 all I was trying to say was that if you're gonna filter by 4 different fields anyways to see if the instance is unique, you may as well just create a key based off those fields. Then you only have to compute the field once, instead of potentially having a bunch of copy/pasted filter statements. And yeah, I was thinking a composite key, not just concatenating fields together. ~Doing a composite key + Django's update_or_create would be a pretty clean solution IMO.~ Edit: sniped by @Blu3spirits

Continuation of previous statement, I'm fine with composites. Just the way it was being formatted made it seem like y'all wanted a form of string concatenation as the key.

I think local storage would probably be more appropriate than cookies, but the logic for the UI really shouldn't be that bad. Just needs to compare basket courses against the courses returned by the API, update them, and if something doesn't exist show a warning (could also show a warning if a saved section is full or something).

I agree that local storage is the way to go really; specifically, there's a library called localForage that's been super useful for working with local storage (adding in async, db types, etc.).

If we all decide we want to host and want some way for all of us to get hits on the sites (because that'd be kinda cool), then we can figure out a load-balancing solution that points to everyone's servers in a round-robin fashion.

I was thinking about this a couple weeks ago. I think arguably the easiest and most "available" way to go about it would be to run the front-end site on GitHub pages, with the round-robin back-end selection implementation thrown in. So it would just end up selecting off of one of the official back-ends in the code.

codetheweb commented 4 years ago

I was thinking about this a couple weeks ago. I think arguably the easiest and most "available" way to go about it would be to run the front-end site on GitHub pages, with the round-robin back-end selection implementation thrown in. So it would just end up selecting off of one of the official back-ends in the code.

That's a nice solution; the only issue would be if we wanted to make the API available for other people to use. I guess we could do DNS failover for api.domain.com for third-party consumers and then mirror that list in JS for browsers, with the benefit of not having to wait for DNS propagation in the event of a node going down.

Blu3spirits commented 4 years ago

I was thinking about this a couple weeks ago. I think arguably the easiest and most "available" way to go about it would be to run the front-end site on GitHub pages, with the round-robin back-end selection implementation thrown in. So it would just end up selecting off of one of the official back-ends in the code.

That's a nice solution; the only issue would be if we wanted to make the API available for other people to use. I guess we could do DNS failover for api.domain.com for third-party consumers and then mirror that list in JS for browsers, with the benefit of not having to wait for DNS propagation in the event of a node going down.

The API should just be thepage.whateverdomain.com/api/ (since that's what Django answers to), unless you do some wack things with your ingress to map api.domain.com -> api.domain.com/api/.

Kiro47 commented 4 years ago

I was thinking about this a couple weeks ago. I think arguably the easiest and most "available" way to go about it would be to run the front-end site on GitHub pages, with the round-robin back-end selection implementation thrown in. So it would just end up selecting off of one of the official back-ends in the code.

That's a nice solution; the only issue would be if we wanted to make the API available for other people to use. I guess we could do DNS failover for api.domain.com for third-party consumers and then mirror that list in JS for browsers, with the benefit of not having to wait for DNS propagation in the event of a node going down.

The API should just be thepage.whateverdomain.com/api/ (since that's what Django answers to), unless you do some wack things with your ingress to map api.domain.com -> api.domain.com/api/.

Different issue: what was being talked about was more along the lines of balancing between domain1.com/api and domain2.com/api, i.e. where to send the REST requests from the JS frontend.

codetheweb commented 4 years ago

Right.

The other solution would be Cloudflare; they supposedly have 0-downtime failover (for free) if you proxy your traffic through them. A benefit of that would be no exposed IP addresses. I'd rather not make my IPv4 public unless it's necessary.

Blu3spirits commented 4 years ago

What are your guys' thoughts on lock files for the Python side of our project? I ask this because currently we're using npm, which has its own lock files, but on the Python side we're not even version-pinning our packages. It's currently not an issue, but there's not a single doubt in my mind that it will fail eventually.

codetheweb commented 4 years ago

I like Pipenv.

But I think for now simply adding version numbers to requirements.txt is fine.
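
For example, pinning in requirements.txt is just (the package names and versions below are placeholders, not the project's actual pins; `pip freeze` will give the real ones):

```
Django==3.1.2
djangorestframework==3.12.1
requests==2.24.0
```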

Blu3spirits commented 4 years ago

Personally I've been using poetry in my projects.

I'll make an issue to add versions to requirements.txt

Blu3spirits commented 4 years ago

@Kiro47 @codetheweb

Do you guys think we should start doing github releases at some point? I don't use them very often but it could help us in terms of automation pipelines.

codetheweb commented 4 years ago

🤷 I don't see them as being very useful currently, but if we want to mark commits as known stable / tested thoroughly, releases + a master/develop branch strategy might make sense.

Kiro47 commented 4 years ago

:man_shrugging: I've considered it, and we're probably at a point where we could start doing tagged releases. So nothing against it; we just need to more or less decide on what constitutes further tagged build releases.

Blu3spirits commented 4 years ago

Maybe we should start doing epics? Or define out the "next step". IMO the next step would be to get the cart-like system in place, as well as most current open tasks/bugs

Blu3spirits commented 4 years ago

Quick update for you @codetheweb. Kiro contacted IT and thus far they haven't responded yet :disappointed: no big shock there.

codetheweb commented 4 years ago

😞 thanks for the update. Would love to get this working in time for spring registration if possible.

Blu3spirits commented 4 years ago

😞 thanks for the update. Would love to get this working in time for spring registration if possible.

I'm game! I won't have much of a use for this project sadly as I'm graduating in a few weeks. But I'm still interested in this and it's fun to do in off-time.

What all do you feel we should get done before registration? (Minus the real class scraper, ofc) Also @Kiro47 same question to you

codetheweb commented 4 years ago

By 'get this working', I was referring to the class scraper 😛 But at a minimum it'd be nice to set up better hosting, potentially round-robin hosted on GitHub pages like @Kiro47 suggested.

codetheweb commented 4 years ago

Heard from @Blu3spirits today that IT gave us the go-ahead to scrape class data, as long as we're not obnoxious about it. 👍

Had a few more ideas that I wanted to jot down:

* Probably doesn't make sense to include in v1, but another cool feature would be a way to generate prerequisite graphs for either a single course or all courses in your current basket. Maybe we extend this to persist data on what courses you've already taken and cross those off the graph? Lots of different ways we could go with this.

* Another feature that's not necessary for v1 (although much lower effort than the one above): a way to auto-generate hotkey scripts that can be used during registration based on the courses in your basket.

* Finally, GitHub workflows can be run on a cron schedule. So it'd be possible to have a job execute every 10 minutes that runs the scraper and outputs the result to a JSON file that's then saved in the repo somewhere. Dealing with dynamic data this way would let us neatly sidestep the whole issue of keeping different database nodes and backends in sync. With this solution, the static frontend would be hosted by GitHub pages and it would load the data from that saved JSON file rather than an API.

Blu3spirits commented 4 years ago

Probably doesn't make sense to include in v1, but another cool feature would be a way to generate prerequisite graphs for either a single course or all courses in your current basket. Maybe we extend this to persist data on what courses you've already taken and cross those off the graph? Lots of different ways we could go with this.

I actually like this idea a lot. Create sort of a tree graph based on what's needed before you can take the class. It's possible as well that if you follow the href from the CRN, it lists pre-reqs. However, in order to do this we will have to recursively search based on each class; this would best be handled by the database itself, imo.

Another feature that's not necessary for v1 (although much lower effort than the one above): a way to auto-generate hotkey scripts that can be used during registration based on the courses in your basket.

I disagree that this isn't necessary for v1. I think this is actually critical to have for v1 if this project has any real use outside of something students look at and say "neat, it has a better UI/UX than banweb"

Finally, GitHub workflows can be run on a cron schedule. So it'd be possible to have a job execute every 10 minutes that runs the scraper and outputs the result to a JSON file that's then saved in the repo somewhere. Dealing with dynamic data this way would let us neatly sidestep the whole issue of keeping different database nodes and backends in sync. With this solution, the static frontend would be hosted by GitHub pages and it would load the data from that saved JSON file rather than an API.

I don't quite like this solution as I'm really not a fan of the current workflow we have of:

In my opinion, this would be faster to consolidate into one script that gets the data, breaks it into Object format and places it in the DB. The script could also be run in file drop mode for those that just want a CSV/TSV/JSON file.
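
A rough sketch of what that single consolidated script could look like, assuming it bootstraps Django itself so it can use the ORM directly (every module, model, and settings path below is a placeholder, not the project's real layout):

```python
# Sketch only: scrape, normalize, then either upsert via the ORM or dump a flat JSON file.
import argparse
import json
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")  # placeholder settings path
django.setup()

# Model/scraper imports have to come after django.setup(); both names are placeholders.
from courses.models import CourseSection
from scraper import fetch_courses  # placeholder: returns a list of course dicts


def main() -> None:
    parser = argparse.ArgumentParser(description="Scrape courses and load them into the DB")
    parser.add_argument("--dump", metavar="FILE", help="write scraped data to FILE as JSON instead of the DB")
    args = parser.parse_args()

    courses = fetch_courses()

    if args.dump:  # "file drop mode"
        with open(args.dump, "w") as fh:
            json.dump(courses, fh, indent=2)
        return

    for course in courses:
        # Upsert keyed on the unique (crn, year) pair discussed above.
        CourseSection.objects.update_or_create(
            crn=course["crn"],
            year=course["year"],
            defaults={k: v for k, v in course.items() if k not in ("crn", "year")},
        )


if __name__ == "__main__":
    main()
```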

codetheweb commented 4 years ago

Made two new issues: https://github.com/Kiro47/MTU-Transfer-Course-Gatherer/issues/58, https://github.com/Kiro47/MTU-Transfer-Course-Gatherer/issues/59.

In my opinion, this would be faster to consolidate into one script that gets the data, breaks it into Object format and places it in the DB. The script could also be run in file drop mode for those that just want a CSV/TSV/JSON file.

I completely agree with you, but I don't see what that has to do with the possibility of generating static JSON files to pull from.

codetheweb commented 4 years ago

Can move the conversation about hosting here: https://github.com/Kiro47/MTU-Transfer-Course-Gatherer/issues/60

codetheweb commented 4 years ago

Also, do either of you have a strong opinion on TypeScript? Was thinking about rewriting the frontend since we're gonna be making major changes anyways.

Blu3spirits commented 4 years ago

Also, do either of you have a strong opinion on TypeScript? Was thinking about rewriting the frontend since we're gonna be making major changes anyways.

I would be open to trying it; however, I know the pain point is going to be defining all the interfaces we're going to need to use. Not that that's really a bad thing, since it should help us write good front-end tests. I've used it a bit on other projects and it's quite helpful for some things.

I can also see us now needing Redux, or some other store if others are super opposed; passing context around for cart management, API data loads, user store data, and anything else we care to collect is going to be a pain point in the future.

codetheweb commented 4 years ago

👍 I'm a big fan of TypeScript so I'll go ahead and make a card.

I'm not that opposed to Redux (especially since we're already using it 😛). Can be a lot of boilerplate but it works well enough. Was already planning to store basket state in it.