Open Moosems opened 9 months ago
This would clearly have to use the GitHub API, but is the request quota (10k requests or so) counted against our side (owner/repo/website) or the client's? By client I mean a browser, of course.
Who said you need GitHub APIs? Whatever happened to web scraping?
I'm seriously not understanding how web scraping helps here. How would you implement this so that it isn't limited by number of queries, or something else?
A request is made every time you load a webpage, and that request needs no API key. Using the API directly, on the other hand, requires a key and is subject to a query limit. So requesting a page and then parsing the file returned allows far more calls to GitHub than a free API key would. This does, however, have a negative side effect: a massive slowdown. If that's something we can't tolerate, we could perform all actions through a server that keeps track of this data and updates every week or so. That would include the members of the group, the publicly available projects, and a set of files for each project (the README parsed to HTML, the image, and the base HTML document for the JS to push everything into).
The downside is that it can only update periodically, so API calls don't get out of hand.
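To make the scraping idea concrete, here's a minimal sketch using only the Python standard library: fetch a profile's repositories page with a plain HTTP GET (no key needed), then parse the returned HTML for repo links. The HTML snippet below is a stand-in for a fetched page, and `ExampleOrg` is a hypothetical account name; a real run would fetch something like `https://github.com/<org>?tab=repositories` and the markup details would need checking against the live page.

```python
from html.parser import HTMLParser

class RepoLinkParser(HTMLParser):
    """Collects hrefs that look like links to an account's repositories."""
    def __init__(self, org):
        super().__init__()
        self.org = org
        self.repos = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        # Repo links look like "/<org>/<repo>": exactly two path segments.
        parts = href.strip("/").split("/")
        if len(parts) == 2 and parts[0] == self.org and parts[1] not in self.repos:
            self.repos.append(parts[1])

# Stand-in for the HTML a real GET request would return.
sample_html = """
<ul>
  <li><a href="/ExampleOrg/website">website</a></li>
  <li><a href="/ExampleOrg/cool-app">cool-app</a></li>
  <li><a href="/ExampleOrg/website/stargazers">42</a></li>
</ul>
"""

parser = RepoLinkParser("ExampleOrg")
parser.feed(sample_html)
print(parser.repos)  # ['website', 'cool-app']
```

Note the three-segment link (`/ExampleOrg/website/stargazers`) is skipped, which is why this beats naive link collection; a real scraper would also have to handle pagination.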
The site finds the names of all public repos and dynamically creates the page from each app's README. For the app images, we should make sure that every repo has an
images/demo_img.png
for the website to pull.
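If every repo follows that convention, the website can build the raw-content URLs for the image and the README from the repo name alone. A small sketch of that, with `ExampleOrg`/`cool-app` as placeholder names and `main` as an assumed default branch:

```python
def repo_assets(org: str, repo: str, branch: str = "main") -> dict:
    """Build raw-content URLs for a repo's README and demo image,
    assuming the images/demo_img.png convention described above."""
    base = f"https://raw.githubusercontent.com/{org}/{repo}/{branch}"
    return {
        "readme": f"{base}/README.md",
        "demo_image": f"{base}/images/demo_img.png",
    }

assets = repo_assets("ExampleOrg", "cool-app")
print(assets["demo_image"])
# https://raw.githubusercontent.com/ExampleOrg/cool-app/main/images/demo_img.png
```

Fetching from `raw.githubusercontent.com` is a plain file download, not an API call, so it doesn't count against the API quota.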