ManimCommunity / manim

A community-maintained Python framework for creating mathematical animations.
https://www.manim.community
MIT License

Adding a Manim Showcase for Community Work #934

Open WampyCakes opened 3 years ago

WampyCakes commented 3 years ago

Over the last few weeks, I have been working on a showcase for Manim.

Use Cases

Current State of the Showcase

The Manim Showcase is fully functional as of now. An unofficial version can be viewed here. There is currently one sample video, supplied by a community member, to show how it looks. To be clear, this is not the finished end goal, but it is just about ready to go live with the minimum features required to do so. A few other big features, such as video search, are planned to be implemented later.

Features

These are the main points of the Showcase, to give an insight into how it works. A lot of time has also been invested in smaller parts of the website, outside of these main points, to create a consistent and smooth experience.

Moving Forward

If this seems generally acceptable to the community, the next step would be to transfer it into the ManimCommunity organization as an official project. Here's what that will entail:

Approval Process

Here is a sample of what a submission looks like. I envision the approval process looking like this:

Questions to be addressed

What should not be discussed here

leotrs commented 3 years ago

Thank you so much for the work here!

I have some scattered thoughts, in no particular order:

  1. Currently, providing source code is optional. I move that we "strongly suggest" that each contribution provide source code. Source code is extremely important since manim is both the rendered videos and the code that generates them.
  2. Who gets to approve videos? Is there also a process for approving source code (if provided)?
  3. Where are the videos hosted?
  4. You said "Once a video has been submitted or approved, it can be edited by repository maintainers". What does it mean to "edit" a video here?
  5. "I do not plan on watching all of what is submitted." This brings up a few things. First of all, moderating user content can become a huge time sink, if the community grows large. Since we are putting this under the ManimCommunity organization, we could be held accountable for sensitive content. For example, do we want to moderate profane language or adult imagery? If so, then the approval process should involve watching the actual video in full length. If not, we can issue a blanket statement that the community devs are not responsible for the content, or some other such language.
WampyCakes commented 3 years ago

Currently, providing source code is optional. I move that we "strongly suggest" that each contribution provide source code. Source code is extremely important since manim is both the rendered videos and the code that generates them.

I recommend you go click the link for the showcase and hit the upload a video button. It says, "While optional, we highly encourage including source code!" If you want it to be a bit stronger, I am open to italicizing "highly."

Who gets to approve videos?

Whoever we designate. Likely that would be anyone in the Manim organization as it utilizes GitHub issues for managing submissions; though, we should have a few people who have committed to spending at least a little time approving videos. This question does lead to an interesting scenario that is quite feasible: we could designate trusted members of the community to approve videos in the future. If that requires some rejiggering to avoid giving them permissions in the org, that is possible. It would also probably turn maintenance into more of a community effort by speeding up approvals and giving a sense of contribution.

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a bitbucket, github, or gitlab repo.

Where are the videos hosted?

Users have the choice to use YouTube or Vimeo. It's merely an embed. Saves us many different kinds of headaches :)

You said "Once a video has been submitted or approved, it can be edited by repository maintainers". What does it mean to "edit" a video here?

Take a look at this. Those JSON values are what makes up a submission. Editing that data before approving would update the submitted info (and therefore show the updated version on the website). The automated process reads the JSON straight out of the issue body to allow for this. If a video has already been approved and needs to be edited, the value(s) for that submission can be updated here. That is the centralized file where all approved submissions go (until this potentially needs revamping years in the future).

we can issue a blanket statement that the community devs are not responsible for the content, or some other such language.

This seems like the most feasible solution. Nobody should be forced to watch hours and hours of videos for free; I made this showcase and want to see it used, but there is no way I would do that. I would say that it will probably be the rare case that something shouldn't be approved, so standards on video content can likely be fairly lax. Just skipping through the video should be sufficient. Catching language would require watching the whole thing, so, again, probably that blanket statement.
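To make the above a bit more concrete, here is a rough, purely illustrative sketch of the kind of JSON a submission boils down to. None of the field names below are the actual schema; the real one lives in the showcase repository.

```python
# Purely illustrative sketch of a showcase submission entry; the real
# field names/schema live in the showcase repository, not here.
import json

submission = {
    "title": "Fourier Series Visualization",       # hypothetical example values throughout
    "author": "some-community-member",
    "video_url": "https://www.youtube.com/watch?v=XXXXXXXXXXX",  # YouTube/Vimeo embed link
    "source_url": "https://github.com/some-community-member/fourier-demo",  # optional
    "description": "Building a square wave out of sines.",
    "date": "2021-01-15",
}

# The automation reads JSON like this straight out of the issue body, so
# editing these values in the issue (before approval), or later in
# submissions.json (after approval), is what "editing" a submission means.
print(json.dumps(submission, indent=2))
```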

jsonvillanueva commented 3 years ago

Awesome work here! I'm in favor of a community showcase as a way to demonstrate both what Manim is capable of and (when source code is provided) how it was written.

Here are some of my suggestions, questions, and responses to a few things you and @leotrs mentioned above:

Suggestions

  1. Add a field for which version of Manim the source code/video is written for -- this way videos can easily be reproduced using the appropriate version of Manim. This will become more important as more versions of Manim are released. (A rough sketch of this follows after the list.)
  2. Just like the language category, maybe there should be a filter/tag for sensitive content (e.g. NSFW content or content with profanity)
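A rough sketch of suggestion 1, purely for illustration -- the field, the version strings, and the helper below are all made up: the form could restrict the version field to a fixed dropdown list so the submitted values stay consistent.

```python
# Illustrative only: restrict a hypothetical "Manim version" form field to
# a fixed dropdown list so submitted values stay consistent and comparable.
KNOWN_VERSIONS = ["v0.1.1", "v0.2.0", "v0.3.0"]  # example values, not an authoritative list


def validate_version(selected: str) -> str:
    """Accept only a version that appears in the dropdown list."""
    if selected not in KNOWN_VERSIONS:
        raise ValueError(f"Unknown Manim version: {selected!r}")
    return selected


print(validate_version("v0.3.0"))
```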

Questions/Food for thought

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close it for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

Responses

Currently, providing source code is optional. I move that we "strongly suggest" that each contribution provide source code. Source code is extremely important since manim is both the rendered videos and the code that generates them.

I recommend you go click the link for the showcase and hit the upload a video button. It says, "While optional, we highly encourage including source code!" If you want it to be a bit stronger, I am open to italicizing "highly."

I think it's fine as is, but italicize it if you want / if you think it'll get more source code submissions (which are helpful since, as @leotrs mentioned, manim is both the code and the video it produces).

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a bitbucket, github, or gitlab repo.

I wouldn't say no need as there's a potential security issue in not having a process for approving source code. It's unlikely that someone would try to harm the community with malicious code, but it's something to consider. If source code is provided, the specific commit in their repo should be scanned/reviewed before approval since it's possible a malicious user would later update their main/master branch after being approved. If we don't intend to vet the source code, we should at minimum issue a warning that the code hasn't been analyzed and the community should be cautious in running code they don't understand.

Who gets to approve videos?

Whoever we designate. Likely that would be anyone in the Manim organization as it utilizes GitHub issues for managing submissions; though, we should have a few people who have committed to spending at least a little time approving videos.

i.e. Community Dev/Org member with the Triage+ role

WampyCakes commented 3 years ago

Add a field for which version of Manim the source code/video is written for

Good point! We can include a little info hover that tells them to run the manim version command to figure this out (and use a dropdown list of versions for consistency).

Just like the language category, maybe there should be a filter/tag for sensitive content (e.g. NSFW content or content with profanity)

Not a big fan of this one. I generally think that outside of potential language, NSFW videos shouldn't be approved in the first place.

I also worry that we are going to reach a point where the submission form becomes so long it is a deterrent to uploading videos, particularly when someone is uploading a whole bunch of their old videos that aren't on the showcase.

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close it for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

This is a difficult one. Hopefully editing and deleting will be a rare occurrence. Because the issue is always posted by the same (automated) user, submitters would not be able to edit or reopen the issue. I know, this sounds like a big limitation, but it's a consequence of constraints on the implementation. Originally, the plan was that, in order to submit a video, people would have to sign in using GitHub OAuth (in which case the issue could be made by them). But because this is a static website, that's, uh, pretty difficult, which led to changing course. Plus, this way no sign-in is required (which I always find to be a bonus, as I am hesitant to use "Sign in with..."). In truth, for now, requesting such changes after the issue has been closed would require creating a new issue. Open to ideas on alternatives, but I expect this to be a circumstance that puts us between a rock and a hard place.

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a bitbucket, github, or gitlab repo.

I wouldn't say no need as there's a potential security issue in not having a process for approving source code. It's unlikely that someone would try to harm the community with malicious code, but it's something to consider. If source code is provided, the specific commit in their repo should be scanned/reviewed before approval since it's possible a malicious user would later update their main/master branch after being approved. If we don't intend to vet the source code, we should at minimum issue a warning that the code hasn't been analyzed and the community should be cautious in running code they don't understand.

The only thing that could be changed on their end is what is contained in the repo. They can't change the URL, so traveling to the link should always be safe. Regarding running the code itself, while it may seem harsh, this isn't really our problem. When searching for any code on the Internet and finding results on GitHub, discernment on whether something should be run is always necessary. If someone wants to run something they haven't looked at, our warning won't stop them. They'll just burn themselves 🤷‍♂️

If you wanted to run a virus scan on the code, this brings up a few difficulties.

This showcase is not meant to endorse any content or affiliate any of it with the ManimCommunity org. If we want the blanket statement suggested by @leotrs that talks about video content to include source, that's fine. But I see this as any other website on the Internet. Use discernment when browsing, just like anything else.

leotrs commented 3 years ago

Just like the language category, maybe there should be a filter/tag for sensitive content (e.g. NSFW content or content with profanity)

Not a big fan of this one. I generally think that outside of potential language, NSFW videos shouldn't be approved in the first place.

I agree. I just wanted to say that this would require the approving dev to actually watch the entire video! :sweat_smile: So that should be part of the approval process.

leotrs commented 3 years ago

Also, I agree there is no need to check the code for security as long as all we show is the video (which is hosted elsewhere). If we add a link to the source code, then we can, again, issue a blanket statement saying that the ManimCommunity org has not approved the code etc.

WampyCakes commented 3 years ago

@leotrs We can just put a blanket statement at the bottom of the page, and, if you guys think it is absolutely necessary (I do not), we can create a modal with that statement that pops up the first time you visit the page.

There is no feasible way to watch that many hours of content. I only meant that by skipping through the video, or running your mouse along the YouTube timeline and looking at the preview popup, you could catch most NSFW imagery (though obviously not everything, and not language). For language we should probably take a lax standard anyway, considering these are not official videos from the ManimCommunity org. No matter what, I do not think requiring reviewers to watch every video is a good idea. I think the question is just what we want to do about it. Give one blanket statement (while briefly checking what content is in the video before approving) and call it good? I think that may be sufficient.

@jsonvillanueva Maybe this can become part of #935 enforcement procedures if someone reports a video? Checking content and making a decision? Or not. Just throwing it out there.

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close it for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

I think I have a fairly good idea: we can put some instructions on the GitHub repo for the showcase that explain how a submitter can edit or delete their video. It could look something like this:

  1. Submitting a video requires giving a GitHub username (if the submitter doesn't have one, this method will not work for them and they will have to contact someone to ask for a change; but I imagine that most people who would do this have a GitHub account nowadays).
  2. They create a PR that edits the submissions.json file.
  3. An automated check comments on the PR to tell us whether the username of the person who made the PR matches the GitHub username associated with the video, and exactly what was changed in the file (this is necessary because the JSON file is contained in one line to save space, which leads to difficult-to-read diffs). A rough sketch of such a check follows below.
  4. All we have to do is check the comment to see if it's good, and then click merge. That can trigger a GitHub action to update the corresponding closed issue for the submission (for record keeping in case the submissions.json file gets deleted; or, if the user deletes their video, we delete the issue too).

The one point of failure in this approach that I can think of is that the submissions.json file could be updated between when the PR is created and when it is merged. Am I correct that this would be a problem? If so, maybe we just make them do it through an issue with a certain label or something like that. If we are asking for their GitHub username, we can check if it matches and automate it for the most part.
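As a rough sketch of the automated check from step 3 above, purely for illustration -- the submissions.json schema, the "github" and "video_url" fields, and the helper name are all assumptions, not the actual implementation:

```python
# Illustrative sketch of the automated PR check described in step 3.
# Assumption: submissions.json is a list of entries, each carrying a
# "github" field naming the submitter and a "video_url" acting as a key.
import json


def summarize_changes(old_json: str, new_json: str, pr_author: str) -> list[str]:
    """Report what changed between two versions of submissions.json and
    whether the PR author matches the submitter of every touched entry."""
    old = {entry["video_url"]: entry for entry in json.loads(old_json)}
    new = {entry["video_url"]: entry for entry in json.loads(new_json)}
    notes = []
    for key in sorted(old.keys() | new.keys()):
        before, after = old.get(key), new.get(key)
        if before == after:
            continue  # untouched entry
        owner = (after or before).get("github", "<unknown>")
        verdict = "OK" if owner == pr_author else "MISMATCH"
        notes.append(f"{verdict}: entry {key} changed (submitter: {owner})")
    return notes


# Tiny demo with fake data (remember the real file is stored on one line).
old = json.dumps([{"video_url": "abc123", "github": "alice", "title": "Old title"}])
new = json.dumps([{"video_url": "abc123", "github": "alice", "title": "New title"}])
print("\n".join(summarize_changes(old, new, pr_author="alice")))
```

In a real workflow, the PR author would presumably come from the event payload (e.g. `github.event.pull_request.user.login`) and the two file versions from the PR's base and head refs.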

Truthfully, this entire thing would be best managed with its own dashboard and members with varying permission levels. But in the interest of staying open source, out in the open, free on a static site, and not requiring a user account to use, this is the next best thing I could come up with.

leotrs commented 3 years ago

Let's have a blanket statement somewhere in an about section or something like that. The modal is not necessary. I still think reviewers should be strongly encouraged to watch the video they are approving, if not required.

jsonvillanueva commented 3 years ago

I agree. I just wanted to say that this would require the approving dev to actually watch the entire video! 😅 So that should be part of the approval process.

Agreed. The approving dev would need to dedicate time to watching the full video to catch any (if we have any) violations. As much as I don't like the idea of using up their time for this, I very much don't like the idea of a showcase tied to the organization in name with potentially sensitive content being hosted -- unless there's a blanket statement that we haven't actually looked at the content and it isn't ours. That said, if the showcase is tied to the Org in the future --

@jsonvillanueva Maybe this can become part of #935 enforcement procedures if someone reports a video? Checking content and making a decision? Or not. Just throwing it out there.

-- it would become one of Manim's Online Spaces and be protected by the Code of Conduct. So any reports of a video violating the code of conduct would be handled by the enforcement procedure. Since the showcase is not yet hosted by the Org, we can wait to add this to #935 / the code of conduct -- but it'll likely require an update at the time this is finalized/hosted by ManimCommunity. At that point, it might be possible to approve videos without requiring a full watch -- so long as, in the future, people watching a video know they can report it for any violations of the code of conduct.

I still think reviewers should be strongly encouraged to watch the video they are approving, if not required.

Definitely, should be encouraged if it's not required.

WampyCakes commented 3 years ago

Let's have a blanket statement somewhere in an about section or something like that.

In other words, at the bottom of the page since it's a SPA.

What's wrong with going that direction? A statement saying we haven't watched videos in their entirety and are not affiliated with, and do not endorse, any content in said videos (plus whatever other wording you want to include). Coupled with reporting to the code of conduct enforcement team, that seems acceptable to me. I just don't want several weeks' worth of work to be for naught just because no one wants to watch all of every video they approve. I think that the benefits of having a showcase, along with a statement and enforcement procedures, outweigh these downsides, especially when this would allow for a showcase that doesn't accumulate a giant backlog of unapproved videos.

jsonvillanueva commented 3 years ago

... tied to the organization in name with potentially sensitive content being hosted -- unless there's...

What's wrong with going that direction?

I can't imagine any issue with this direction. This has my approval with the blanket statement/CoC/reporting system to prevent a giant backlog.

leotrs commented 3 years ago

I'm interested in what others have to say @behackl @eulertour @XorUnison @naveen521kk @huguesdevimeux @cobordism @Aathish04. As far as I can tell, the main issue of discussion right now is whether or not the procedure to approve a video to the showcase should involve actually watching the video (to check whether it contains sensitive content).

MysaaJava commented 3 years ago

Hello there! I'm a huge fan of this showcase idea, but I have some questions/suggestions.

Well, I think that's it! Don't hesitate to reach out if you have any questions, either on Discord or GitHub!

WampyCakes commented 3 years ago

I propose using three «reviewed states». The lowest is «submitted», and therefore hidden from users. Once a trusted reviewer has accepted the title/subject/description etc., the video will be marked as «unreviewed» and shown to users, but with an «unreviewed» mark on it. If the reviewer accepts the video entirely (meaning that they watched it entirely), the video's tag changes to «reviewed» and the mark disappears.

I think that this may be a good idea, if I am understanding you correctly. Other people's input would be good here too.

Maybe add other ways to post videos, maybe directly on the showcase GitHub, if the video is small enough (I don't know about GitHub size limits and quotas).

I think that we should avoid hosting anyone's content. That would seem to invite more of the liability issues that @leotrs is concerned about. Also, it would be a pain from a technical standpoint compared to just an embed.

Make more content formats available. Even though manim makes videos, one could also want to share a tiny animation they made, as a GIF or a 7-second video that won't be published on YouTube.

I would like to include a section for GIFs and images made using Manim in the future. Again, we probably would not host them ourselves, so there would probably be a requirement to host them on something like Imgur and give us an embed link. Regarding short videos like that, it's probably more suitable to just turn them into a GIF and upload it to that section in the future.

Also, maybe consider presenting whole playlists, video series, or even channels. If someone makes a video series of 30-minute episodes, is there really a point in showing every episode as a unique element in the showcase?

I had this same thought. I am really not sure how to handle this. On one hand, I think they should all be separate because people will likely want to search for a specific part. For example, 3b1b's essence of calculus series contains videos on a lot of different topics. Being able to search for one or two relevant videos for specifically what you want is important. On the other hand, we may want to provide some sort of connection between these different videos. Not sure what form that would take. Any ideas?

Talking of video display, have you decided on a preset tag list or free string tags?

I don't understand the question.

I've seen that every video is put in one «submissions.json» file. If a lot of videos are being posted, this file might get big and make the page slow to load. Would it be possible to store every video in a separate JSON file? Also, it might be because I'm a former web developer, but all of this storage logic would really look great in a SQL database. And tag filtering would be so much easier! But this method needs a server, unless free and reliable SQL hosting services can be found.

I completely agree about a database. In an effort to keep this free, I avoided going that route. In the future, it may be necessary to migrate. It's using Lazy.js (I suggest you read this page to understand why) to load from the JSON file, and it will be used for filtering too in the future. The relevant code is here. It should be able to handle a pretty large number of submissions in the file for now, but yes, a migration will need to happen sometime I imagine. If someone is willing to pay for a database, then that can be used 🤷‍♂️

Then about the website: the captcha looks very weak, given there are few images available. And you can nearly spam the thing. Are there open-source captcha services?

It is an open source captcha. It's called RVerify.js. The idea of the captcha is that most bots cannot slide an element, which is supposed to make it effective. If you fail the captcha, it goes on to the next image. What I did not realize until now is that it does not pick a new rotation angle on failure. I believe I can change it fairly easily so that it does. That should prevent spamming it, since it would be pretty difficult to guess the correct angle randomly with only one guess (within a small tolerance, which I can also adjust). I think the number of pictures is kinda irrelevant to its strength? More can always be added. Additionally, we could set a max-attempts limit on the captcha and store it as a cookie that expires after X amount of time. We could always include honeypots, but they may not be as effective as hoped, since some bots detect the CSS hiding the honeypot (and we don't want to make the honeypot visible to users).

I am 100% against using captcha services like Google's recaptcha. Before deciding on RVerify, I looked at a lot of options, and I did not like any of them as much as RVerify.

One last thing about the website: it is not really responsive. I'm reporting this because it is not a style-related issue, but rather a usability-related one. On my phone's browser, the video is squeezed to the left.

It's not great on mobile, but I find it to be usable on my device. Right now the responsiveness it does have is a result of using Bootstrap, which I hadn't extensively used before. If/when this is published, would you mind making a PR to fix responsiveness issues you have?

WampyCakes commented 3 years ago

Not that anyone should feel pressured to pay for hosting for this (I can't), but if this were hosted on a server, we could actually make this into a really nice showcase. It would eliminate the need for the Cloudflare Worker, as there could be a PHP backend. We could also use a database for storing the data. I think we would still be able to deploy from GitHub after editing the website source code. I think that it could also improve the submission approval process and everything. We could add reactions / a like button for videos on there. The lack of a server (though the upside is being free) is the real limiting factor on how amazing this showcase could be.

MysaaJava commented 3 years ago

Maybe add other ways to post videos, maybe directly on the showcase GitHub, if the video is small enough (I don't know about GitHub size limits and quotas).

I think that we should avoid hosting anyone's content. That would seem to invite more of the liability issues that @leotrs is concerned about. Also, it would be a pain from a technical standpoint compared to just an embed.

It would only be for fully-reviewed content (which is not hard for images/GIFs). And isn't it possible with pull requests? You don't have to automate every part.

Make more content formats available. Even though manim makes videos, one could also want to share a tiny animation they made, as a GIF or a 7-second video that won't be published on YouTube.

I would like to include a section for GIFs and images made using Manim in the future. Again, we probably would not host them ourselves, so there would probably be a requirement to host them on something like Imgur and give us an embed link. Regarding short videos like that, it's probably more suitable to just turn them into a GIF and upload it to that section in the future.

Also, maybe consider presenting whole playlists, video series, or even channels. If someone makes a video series of 30-minute episodes, is there really a point in showing every episode as a unique element in the showcase?

I had this same thought. I am really not sure how to handle this. On one hand, I think they should all be separate because people will likely want to search for a specific part. For example, 3b1b's essence of calculus series contains videos on a lot of different topics. Being able to search for one or two relevant videos for specifically what you want is important. On the other hand, we may want to provide some sort of connection between these different videos. Not sure what form that would take. Any ideas?

Do both ^^ with something that recognizes elements from playlists.

Talking of video display, have you decided on a preset tag list or free string tags?

I don't understand the question.

By tag I meant things like «math», «quantum mechanics», «algorithm visualisation», «manim capabilities example», or «educational video». Will the user be able to put any string as a tag, like on YouTube videos, or will there be a predefined list of tags, like on StackOverflow?

I've seen that every video is put in one «submissions.json» file. If a lot of videos are being posted, this file might get big and make the page slow to load. Would it be possible to store every video in a separate JSON file? Also, it might be because I'm a former web developer, but all of this storage logic would really look great in a SQL database. And tag filtering would be so much easier! But this method needs a server, unless free and reliable SQL hosting services can be found.

I completely agree about a database. In an effort to keep this free, I avoided going that route. In the future, it may be necessary to migrate. It's using Lazy.js (I suggest you read this page to understand why) to load from the JSON file, and it will be used for filtering too in the future. The relevant code is here. It should be able to handle a pretty large number of submissions in the file for now, but yes, a migration will need to happen sometime I imagine. If someone is willing to pay for a database, then that can be used 🤷‍♂️

I wanted to discuss the possibility of having a server for Manim Community, but I didn't think this was the right place. For the time being, we can write everything on GitHub Pages and wait for a server to appear. I might discuss this on Discord or in a separate issue later.

Then about the website: the captcha looks very weak, given there are few images available. And you can nearly spam the thing. Are there open-source captcha services?

It is an open source captcha. It's called RVerify.js. The idea of the captcha is that most bots cannot slide an element, which is supposed to make it effective. If you fail the captcha, it goes on to the next image. What I did not realize until now is that it does not pick a new rotation angle on failure. I believe I can change it fairly easily so that it does. That should prevent spamming it, since it would be pretty difficult to guess the correct angle randomly with only one guess (within a small tolerance, which I can also adjust). I think the number of pictures is kinda irrelevant to its strength? More can always be added. Additionally, we could set a max-attempts limit on the captcha and store it as a cookie that expires after X amount of time. We could always include honeypots, but they may not be as effective as hoped, since some bots detect the CSS hiding the honeypot (and we don't want to make the honeypot visible to users).

I am 100% against using captcha services like Google's recaptcha. Before deciding on RVerify, I looked at a lot of options, and I did not like any of them as much as RVerify.

Agree with that, but I feel that this captcha is useless, as it can easily be cracked. Few images means that you can make a bot that recognizes them all, and they are available in the repo source code. And storing failed attempts as cookies is ineffective, as the server has no control over cookies, and an attacker could just reset their cookies all the time. I don't think there is a good solution for now; maybe leave it as it is.

One last thing about the website: it is not really responsive. I'm reporting this because it is not a style-related issue, but rather a usability-related one. On my phone's browser, the video is squeezed to the left.

It's not great on mobile, but I find it to be usable on my device. Right now the responsiveness it does have is a result of using Bootstrap, which I hadn't extensively used before. If/when this is published, would you mind making a PR to fix responsiveness issues you have?

I would have loved to, but I'm completely unfamiliar with the HTML tooling you used (this thing parsing .vue files). But I'll make an issue about that.

OK! Time to sleep now!

WampyCakes commented 3 years ago

It would only be for fully-reviewed content (which is not hard for images/GIFs). And isn't it possible with pull requests? You don't have to automate every part.

I'm just generally not a fan of storing user content, as this could lead to its own storage problems and related issues. And I do kinda disagree about automation, as any nice UX would not require the user to submit a PR for that.

By tag I meant things like «math», «quantum mechanics», «algorithm visualisation», «manim capabilities example», or «educational video». Will the user be able to put any string as a tag, like on YouTube videos, or will there be a predefined list of tags, like on StackOverflow?

Yes, the user can put anything. That's something that should be checked before approving a submission.

I wanted to discuss the possibility of having a server for Manim Community, but I didn't think this was the right place. For the time being, we can write everything on GitHub Pages and wait for a server to appear. I might discuss this on Discord or in a separate issue later.

Agreed. A server would be very nice.

Agree with that, but I feel that this captcha is useless, as it can easily be cracked. Few images means that you can make a bot that recognizes them all, and they are available in the repo source code. And storing failed attempts as cookies is ineffective, as the server has no control over cookies, and an attacker could just reset their cookies all the time. I don't think there is a good solution for now; maybe leave it as it is.

I'm not saying it's crackproof, but I think it's a more effective solution than you think. From what I've read, it's difficult to make a bot slide anything. You'd still have to figure out which angle is correct by viewing the output of sliding, and in order to do all of this you'd need to really invest some time into trying to spam this form. Aren't most bots just crawlers all over the web? This would take a concerted attack to circumvent. I encourage you to try to make a bot to circumvent it; just don't spam the submission form if you succeed, and don't release the source code. I think it'll turn into a harder task than most spammers would be willing to take on just to defeat one form on an open source project. It's not like this is a major company or a common captcha.

I only mentioned cookies because if you used JS to just keep track of failed attempts, a refresh is all that is needed to continue trying. Even if an errant form submission passes through, I think a bot would have a low success rate (no solution is foolproof, we only need to mitigate).

I would have loved to, but I'm completely unfamiliar with the HTML tooling you used (this thing parsing .vue files). But I'll make an issue about that.

You could strip out Vue for testing and only use the HTML and CSS to correct it. If you make changes that fix the responsiveness, I can merge them in.

WampyCakes commented 3 years ago

Regarding languages, would this be a good implementation? We could add a language field to the submission form that asks what language the video is in, defaulting to English and using the values listed here. That way, responses for language are static and will be consistent. My real question is this: is that a good list of languages, or is there a better one? I imagine we want to have languages listed in their local/native spelling and formatting as much as we can.

leotrs commented 3 years ago

+1 for asking for language

+1 for NOT using a server just now. If this blows up then we can move to a server.

+1 for using tags for different review states. (Maybe consider "unverified"/"verified" instead of "unreviewed")

WampyCakes commented 3 years ago

Small update:

eulertour commented 3 years ago

It seems weird that we rely on the PR to get the date but if we can't easily determine it in an automated way that's ok.

As for the reviewing, I'm in favor of having only a cursory review process, a prominent disclaimer regarding the videos and code on the site, and an easy way to flag videos that don't belong.

I'd recommend not using a server or any other type of external storage if we can avoid it. If we can't avoid it I'd recommend using a serverless solution so that maintenance is minimal. If we can't avoid running a server I'd recommend keeping the code there very minimal and dead simple. We don't have the resources to do real DevOps right now.

I think the idea of having people provide a GitHub username is fine, and doing so in an automated way with OAuth would be even better.

Other than that this looks good to me, but after it is put up for real it will require someone paying attention for a while to make sure it doesn't break.

WampyCakes commented 3 years ago

It seems weird that we rely on the PR to get the date but if we can't easily determine it in an automated way that's ok.

It's not too weird. When the user completes the form, the site sends the data they submitted as JSON to the Cloudflare Worker, which creates the issue on GitHub. The date is provided by the showcase website at form-submission time to record when it was submitted. If you mean that you think it should just pull the date of the issue, yeah, maybe. But right now it can parse the JSON straight out of the issue body, so it's probably more work than it's worth (also, I standardized the date on the website, and I am unsure of what GitHub shows as the date; I assume they may localize it for each user). Also, I may need to have it change the date on approval. The reason is that the newest entries should be at the top of the page, and if there is a backlog the most recently submitted video may not appear at the top.
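For anyone trying to picture the flow: the real Worker is JavaScript on Cloudflare, but conceptually it just takes the form's JSON, stamps the standardized date, and opens an issue through the GitHub REST API. A hedged Python sketch of that flow, with the repo name, label, and token handling all made up for illustration:

```python
# Conceptual sketch only -- the actual Worker is JavaScript on Cloudflare.
# This just illustrates the flow: form data in, standardized date stamped,
# GitHub issue out. The repo name, label, and token handling are made up.
import json
from datetime import datetime, timezone

import requests


def submit_to_github(form_data: dict, token: str) -> int:
    """Open a submission issue whose body is the raw JSON payload."""
    form_data["date"] = datetime.now(timezone.utc).strftime("%Y-%m-%d")  # standardized date
    response = requests.post(
        "https://api.github.com/repos/ManimCommunity/showcase/issues",  # hypothetical repo
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
        json={
            "title": f"Submission: {form_data.get('title', 'untitled')}",
            "body": json.dumps(form_data),  # the automation later parses this back out
            "labels": ["submission"],       # hypothetical label
        },
    )
    response.raise_for_status()
    return response.json()["number"]


# Example call (needs a real repo and token to actually run):
# issue_number = submit_to_github({"title": "My animation"}, token="<api token>")
```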

I think the idea of having people provide a GitHub username is fine, and doing so in an automated way with OAuth would be even better.

It likely won't use OAuth simply because it's a static page. Yet another thing that a server would solve, though I know it's not very practical right now.

Other than that this looks good to me, but after it is put up for real it will require someone paying attention for a while to make sure it doesn't break.

I will definitely be watching it for a while. I have also set up some error logging on Cloudflare, so between errors logged there and on GitHub workflows, issues should be fairly easy to pinpoint for the most part.

WampyCakes commented 3 years ago

Moving Forward

@leotrs @MysaaJava @jsonvillanueva @eulertour @behackl In the interest of moving forward, I have compiled a list of what has been brought up above as necessary changes.

For liability purposes:

Feature improvements:

I think that covers everything discussed above that needed to be brought up again. I will also be working on a few other things before going live, like implementing a language dropdown and other such stuff (which I don't see as something that will need much further discussion). Does this seem agreeable to everyone, so we can move forward and get this thing live?

eulertour commented 3 years ago

I'm fine with going live with it after those changes are made.

behackl commented 3 years ago

I appreciate the work you have put into this, and thank you for summarizing the current status!

We talked about this a while ago, and essentially I haven't really changed my position. I do think that having a showcase page is an excellent idea -- but I also think that the application, and in particular the submission process, feels a bit over-engineered right now. In a first iteration, a static page where the submission data is read from the JSON, which is modified directly via PRs by users [especially if a GitHub account is needed anyway], would have been just fine.

(Maybe I mainly feel that setting up a custom form-based submission workflow solves a problem that I'm not sure we would really have run into; and in general, the fewer external services we depend on, the fewer things can break down at some point.)

In any case: I'm fine with moving forward with your suggested implementation (including the changes that you mentioned). However, I'd leave further features (especially complicated ones, like the playlist management you mention above) out for now, and first observe to what extent people actually use the platform.

leotrs commented 3 years ago

100% agreed with not overengineering things - I'd much rather iterate quickly and solve problems that we have, not problems we think we will have.

WampyCakes commented 3 years ago

@leotrs I'm not going to implement the editing process yet, but I'll add the field for a GitHub handle for the future. I'll do the other changes soon, and then I think we're good to go.

eulertour commented 3 years ago

We discussed this during the meeting and decided that the best approach is to proceed with it first and address any potential problems as they come up, so if you have the code available to deploy you're good to go.

WampyCakes commented 3 years ago

@eulertour I have a few last things I need to finish before it can go live. Very close, but I just have to get around to it.