n5ro / aframe-physics-system

Physics system for A-Frame VR, built on CANNON.js.
https://n5ro.github.io/aframe-physics-system/
MIT License

Add GitHub workflow to test Chrome and Firefox #168

Closed · Galadirith closed this 3 years ago

Galadirith commented 3 years ago

Description

Add CI testing for aframe-physics-system using GitHub Actions. This tests on all three operating systems that GitHub Actions currently supports: Linux (the ubuntu-18.04 runner), macOS (macos-10.15), and Windows (windows-2019). It also tests on both Chrome and Firefox through their headless modes.
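
For anyone not familiar with GitHub Actions, here is a minimal sketch of what such an OS/browser matrix could look like. This is illustrative only, not the exact contents of the workflow file in this PR: the job id `test`, the Node version, the `npm` scripts, and the `BROWSER` environment variable used to select a headless browser are all assumptions.

```yaml
# Sketch only: an illustrative OS/browser matrix, not the exact workflow in this PR.
name: Test

on: [push, pull_request]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-18.04, macos-10.15, windows-2019]
        browser: [Chrome, Firefox]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12'
      - run: npm ci
      # Assumes the test script can pick a headless browser via an env var.
      - run: npm test
        env:
          BROWSER: ${{ matrix.browser }}
```

With a matrix like this, GitHub Actions expands the job into one run per OS/browser combination (6 in total here), all reported as separate checks on the commit.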

As there will hopefully be contributions coming from many more developers, I think it's important to get CI testing set up, as you can't always assume developers are running tests locally.

Changes proposed

Here are all my reasons for the proposed changes. However, these should be considered candidate proposals, so please raise any questions or concerns if you don't agree with my reasoning 😊

Galadirith commented 3 years ago

Additional changes proposed

Galadirith commented 3 years ago

If GitHub Actions is not already enabled on n5ro/aframe-physics-system, it can be enabled at https://github.com/n5ro/aframe-physics-system/settings/actions.

MignonBelongie commented 3 years ago

This all looks great to me, but I don't have any experience with GitHub Actions or Travis CI, so I can't tell by looking at it whether it would actually work. How did you test it? I don't see any evidence that you tested it on your fork, but maybe I'm missing something.

Galadirith commented 3 years ago

@MignonBelongie thank you so much for reviewing this for me. You can see my branch on my fork at https://github.com/Galadirith/aframe-physics-system/tree/add-github-test-workflow. To see how this was tested on my fork, take a look at the commit history of that branch and click on the tick next to any of the first 3 commits (the commits that make up this PR). That will open a summary panel of the GitHub workflow jobs that have run, and clicking on a job's details will take you to the log page for that job.


All the workflows that have run on my fork can be seen at https://github.com/Galadirith/aframe-physics-system/actions, and the workflow associated with the last commit in this PR, 92343a2, is at https://github.com/Galadirith/aframe-physics-system/actions/runs/287498246. You can click on any of the 6 jobs on the left-hand side of that workflow to get more details about that job.

MignonBelongie commented 3 years ago

Now I see - this is great! Technically, though, it should be tested with something that makes it fail, too. I'm guessing it doesn't reject the commit, but just has a red X instead of a green checkmark, right?

Galadirith commented 3 years ago

I agree that would be a useful test to do. Yes, it's exactly as you say: it doesn't reject the commit, but you get a ❌ if the tests, or any of the other steps, fail. I'm not sure what the best practice is for testing the tests, but I will quickly make a branch off of add-github-test-workflow with a change to the source that deliberately breaks a test, and hopefully we will see it fail.

I haven't used this myself, but with this we can enforce that status checks (i.e. the GitHub workflow jobs) pass before any PR is merged to master (About required status checks). Only you and @Micah1 have permission to merge to master right now, so it's perhaps not a critical feature, but it's something that could be enabled.

MignonBelongie commented 3 years ago

I see the red X. @Micah1, I think we should merge this.

Galadirith commented 3 years ago

I've made the breaking change in https://github.com/Galadirith/aframe-physics-system/commit/607a5a1867cd94a082525ab9b2c6dca492be4c42 and you can see that the workflow has failed.

Galadirith commented 3 years ago

If you have any other questions please do let me know; I would much rather this were scrutinised well now 😊 The same will apply to all my PRs, especially if it's code that others are not too familiar with. I should be able to answer any queries, and if I can't, it's definitely a sign it shouldn't be merged 😊

Galadirith commented 3 years ago

Right now, if one of the matrix jobs fails, it will cancel the other matrix jobs. We can change that behaviour if preferred using the jobs.<job_id>.strategy.fail-fast attribute. But I think we would expect that if the tests fail in one of the browser/OS combinations, they will fail in all of them, so we might as well stick with the default behaviour and just have the entire workflow be cancelled when one job fails.
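
For reference, here is a minimal sketch of where that attribute would sit in the workflow file; the `test` job id and the matrix values mirror the sketch earlier in this thread and are assumptions, not the exact contents of the PR's workflow.

```yaml
# Sketch: disabling fail-fast so the other matrix jobs keep running after one fails.
jobs:
  test:                # assumed job id
    strategy:
      fail-fast: false # default is true, which cancels sibling jobs on first failure
      matrix:
        os: [ubuntu-18.04, macos-10.15, windows-2019]
        browser: [Chrome, Firefox]
```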

n5ro commented 3 years ago

I've been reading the Introduction to GitHub Actions page https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions/introduction-to-github-actions#workflows tonight. Like @MignonBelongie I do not yet have hands-on experience with GitHub Actions, but from what I have read so far, using GitHub Actions to do CI testing seems like a good idea as a first scan, in the same way that a medical imaging expert might use deep learning to do an initial analysis of a radiology scan, but then the radiologist would come in and still do their own process for evaluating the records. The CI testing would not replace us doing our own testing before approving PRs. It might point out something that we may have missed otherwise, and it may also help us to expedite the process of identifying issues, which could save us time.

I would like to see some sort of testing process for the leading VR and AR platforms such as the Oculus Quest with the Oculus Quest controllers, the Valve Index with its controllers, Windows Mixed Reality, the Microsoft HoloLens 2, and the numerous other AR and VR devices that are coming to market soon, with Nreal being an example. The Oculus Quest uses the Chrome browser, but it would be neat to have a test that confirmed that controller-based interactions or hand-tracking-based interactions were not producing page errors. I'm not sure yet what it is really going to take to make that sort of testing a reality.

Galadirith commented 3 years ago

in the same way that a medical imaging expert might use deep learning to do an initial analysis of a radiology scan, but then the radiologist would come in and still do their own process for evaluating the records

Thanks for taking the time to read through some background material on GitHub Actions. This is an interesting analogy, although I don't think it applies here: the tests that run as part of the CI are exactly the same ones you would run locally. However, if there are relevant procedural tests that you are running locally that are not part of the implemented test suites, it would be really useful to open an issue where we can look at how to implement them.

The CI testing would not replace us doing our own testing before approving PRs. It might point out something that we may have missed otherwise, and it may also help us to expedite the process of identifying issues, which could save us time.

I agree it is always important to do manual checks and test things locally. However, the CI tests really are supposed to test everything about the library. APS doesn't actually have very good coverage at the moment, but as we solve issues and build up the test suite it will be equivalent to anything we can run locally. Because the CI tests run across Linux, macOS, and Windows, on Chrome and Firefox, they are likely to be more thorough than any local testing: it's unlikely that developers would test in multiple browsers locally, and very unlikely they would test on multiple OS's. I certainly only have access to one OS locally.

I would like to see some sort of testing process for the leading VR and AR platforms

This is an interesting idea, and I agree it is important to test against specific hardware where relevant. However, this is outside the scope of this repo. APS is independent of any AR or VR hardware, so there is nothing in the code base that needs testing against specific hardware. A-Frame proper, on the other hand, is not hardware independent and does implement hardware-specific components, and it does perform tests for specific hardware, for example the Oculus Touch controllers. So we don't need to reproduce similar tests here.

MignonBelongie commented 3 years ago

Couldn't you make the argument that testing on different headsets is like testing on different browsers and OS's? After all, APS is also independent of any specific browser or OS. It would be nice if we had some automated tests that tested the output on any headset, but I have no idea how one would do such a thing.

Of course, this is in no way a reason not to merge this PR.

Galadirith commented 3 years ago

Thanks @MignonBelongie, yeah that's a really good point. I should have been clearer in what I said: really I meant that there is no point in testing anything related to interactions or controllers, as @Micah1 specifically mentions testing for errors related to controller input. I do actually see APS as being loosely browser dependent, only because browsers do sometimes implement W3C specs slightly differently. In theory browsers should abstract away any hardware-specific differences, but in practice I appreciate that will not always be true.

A-Frame currently tests under Firefox, Chrome, and a really minimal Node test on Ubuntu, so with this PR we are near parity with A-Frame. We can eventually implement the Node test, but I think that should be a separate issue, and only worthwhile if we really want to support NPM installation. I think you already raised this point previously.

I don't know of any way we can automate tests on physical VR hardware, and I don't have any examples of projects doing this. But we could look at testing on the browsers used by those headsets. For PC VR this will be running on Windows or Linux, and most likely Chrome or Firefox, so I think we have good coverage there. For standalone headsets this is more difficult because we don't have the hardware to run the browsers they ship with. And at least in the case of the Oculus Quest, Oculus Browser is a closed-source extension of Chrome, so even if we were able to simulate the Oculus Quest hardware, we don't have the software. But if you know a way to load Oculus Browser in a CI environment, that would be great 😊