Closed: Emoun closed this issue 4 years ago.
Ok, interesting idea. This looks like a heavyweight regression test.
Martin
On 28 Oct, 2019, at 8:29, Emad Jacob Maroun notifications@github.com wrote:
The current regression test setup uses the Helena machine at DTU to run tests every night. This should ensure that the Patmos tool chain works as expected. However, as t-crest/patmos#54 has shown, the current setup does not test whether everything works on an up-to-date machine. Helena runs sbt <v1.3, while updated machines will get v1.3.3. The Patmos emulator cannot compile on sbt >v1.3, which means everything compiled fine in the nightly regression test; however, any new machine would get an error.
It is advantageous to test whether the Patmos project's setup guides actually work on a fresh Ubuntu/Mac OSX installation. If we had such setup testing, we would have noticed that the new sbt version would cause errors for future collaborators.
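(Aside, not something the thread itself adopted: sbt reads the version to use from project/build.properties, so a repository can pin a known-good release regardless of which sbt launcher a machine has installed. A minimal sketch, using 1.2.8 only as an example pre-1.3 release:)

```sh
# Pin sbt to a known-good pre-1.3 release for this repository.
# 1.2.8 is only an example; the launcher fetches whatever is pinned here.
mkdir -p project
echo 'sbt.version=1.2.8' > project/build.properties
```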
Therefore, I propose that we extend the regression test to include all tool setup steps. This can be done with Docker, where a fresh Ubuntu image is first built (ensuring the most up-to-date tools are used), and then used to run the regression test. This will ensure that in the future, if an update to a tool causes errors in the Patmos tool chain, we will notice it and can fix it quickly.
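(A minimal sketch of what such an image could look like. The package list is an illustrative subset of what the setup guide might require, and run-tests.sh is a hypothetical stand-in for whatever script drives the nightly run:)

```dockerfile
# Fresh Ubuntu base, so the newest packaged tools are what gets tested.
FROM ubuntu:latest

# Install build prerequisites (illustrative subset only; sbt is not in the
# stock Ubuntu repositories and would be added from its vendor repository).
RUN apt-get update && apt-get install -y \
        git build-essential cmake texinfo flex bison default-jdk \
    && rm -rf /var/lib/apt/lists/*

# Fetch the test scripts; building the image thus exercises the setup steps.
RUN git clone https://github.com/t-crest/patmos-misc.git /t-crest/misc

# Running the container executes the regression test itself.
# (run-tests.sh is a hypothetical name.)
CMD ["/t-crest/misc/run-tests.sh"]
```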
I will look into implementing this for Ubuntu first. When that is up and running, I'll look at including setup testing of Mac OSX. Windows shouldn't require dedicated testing, as the WSL ensures that if it works on Ubuntu, it will work on Windows.
Any thoughts?
I think we should switch to (or first add) CI testing with Travis. It should be free for open-source projects. However, the build time for the compiler is probably prohibitively long.
I was thinking the same thing, so I'm already planning to look into it.
I gave it a first try, but gold did not build. Maybe some Linux packages are missing. Do you get emails from the Travis build?
https://travis-ci.org/t-crest/patmos-misc
Cheers, Martin
Do you get emails from the Travis build?
No, since Travis by default only notifies authors and committers of failures. However, this can be overridden: https://docs.travis-ci.com/user/notifications#configuring-email-notifications
My initial idea is to still use Docker, and then let Travis run it. This is easier than trying to make Travis configure itself correctly.
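(A hedged sketch of a .travis.yml combining both points above, explicit e-mail notification plus a build that only delegates to Docker; the recipient address and image name are placeholders:)

```yaml
language: minimal        # no language toolchain needed; Docker does the work
services:
  - docker

notifications:
  email:
    recipients:
      - someone@example.com   # placeholder; add whoever wants the mails
    on_success: never
    on_failure: always

script:
  - docker build -t patmos-regression .   # image as sketched earlier
  - docker run --rm patmos-regression
```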
Ok, then please feel free to add yourself to the notifications. I was just surprised, as I got email notifications from a different project where I did not set up anything.
My initial idea is to still use Docker, and then let Travis run it. This is easier than trying to make Travis configure itself correctly.
That’s probably the better way to go. I have not yet tried out what packages we need to install before we run our build. We could simply throw in everything we have documented. I also see that Travis has support for macOS and Windows (and different versions of Ubuntu). That would be interesting for the case you brought up, where the environment matters for a broken build.
Good that we are making progress on this important testing and CI topic.
Cheers, Martin
An update:
I got the regression test to run using Docker, which fits with the initial goal of this PR. I then tried to make it run on Travis but hit a problem I don't think we can solve: Travis only allows builds to run for up to 50 minutes. This is a hard limit; any build running for longer will be shut down forcefully, and there doesn't seem to be any way around it. This is a problem for us, as the nightly test runs for just under 2 hours, and I don't think there is any way to get it under 1 hour (it takes around an hour just to finish compiling LLVM).
We have a few options now:
1. Forget about Travis for now and simply make Helena run the Docker-based test.
2. Split up the nightly test into its independent components, where each repo has its own nightly test run by Travis.
The first is the easiest, though not completely ready yet, as I have neglected to look into how to get Docker to send e-mails (since I was hoping to use Travis for that). Also, it seems to me that when running Docker locally, some manual maintenance would be continually required (my Docker installation frequently runs out of memory, requiring me to manually initiate a cleanup).
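(For reference, the manual cleanup mentioned above maps to standard Docker CLI commands:)

```sh
docker system df                      # show disk used by images, containers, volumes
docker system prune --all --volumes   # reclaim everything not in use (destructive)
```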
The second is much more involved, but it is also the solution I would prefer long-term. It's complicated to make work, as we would need to begin hosting release binaries to reduce the build time of each repo. That said, this is something I think we should eventually do anyway: I don't think users of Patmos should be required to build their own compiler and tool chain (we are not the Linux kernel, after all).
I'm going to try out the first solution and see if I can easily get it to work reliably. What do you think?
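(If release binaries do get hosted eventually, Travis has a built-in GitHub Releases deploy provider; a hedged sketch, where the token variable and artifact name are placeholders and uploads trigger on tags:)

```yaml
deploy:
  provider: releases
  api_key: $GITHUB_TOKEN          # encrypted Travis environment variable (placeholder name)
  file: patmos-toolchain.tar.gz   # placeholder artifact name
  skip_cleanup: true              # keep the build products for upload
  on:
    tags: true                    # only publish tagged builds
```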
Hi Emad,
Yes, I was afraid that our build is simply too complex for such a service. I am wondering how other “real” projects do it.
Yes, we can switch to Docker-based tests on Helena if you want to.
Splitting is probably not so easy, as you mention that just the LLVM compilation takes so long. I think the only way to have CI is to use our own server. Some years ago we had a setup (at TU Vienna) with a build bot that built whenever there was a commit, but this configuration, and the knowledge about it, got lost when the PhD students from Vienna left :-(
Cheers, Martin
Closing this issue, as I'm implementing a different way of running tests that uses Travis but not Docker. See this PR.