machinekit / machinekit-hal

Universal framework for machine control based on Hardware Abstraction Layer principle
https://www.machinekit.io

Continuous Integration / Continuous Delivery woes #268

Open cerna opened 4 years ago

cerna commented 4 years ago

Tracking progress:

So as an emergency measure, the Travis build system was turned back on for the Machinekit-HAL repository. In response to a discussion with @luminize - where we agreed that some artifact output from CI runs would be nice, so users can download the .deb packages and install them with dpkg -i - I implemented a simple GitHub Actions workflow in quick&dirty style, as Travis doesn't allow keeping artifacts for later download and it's nice for users to have automated package builds in their forks. GitHub keeps the build artifacts for 90 days after the fact.

Given the volatile situation with the *.machinekit.io server, I think that its state should be conserved and package upload should be resumed into the Packagecloud and Cloudsmith repositories (for redundancy). I can start the upload now or when the package build for Machinekit-CNC is ready (currently the Machinekit-CNC repository has no CI for testing and package builds), but I think the right time for it is after both Machinekit-HAL and Machinekit-CNC can be installed. I also think that it is time to drop Jessie support (Jessie will be obsolete in 3 months) and make sure that Machinekit-HAL runs and produces packages on Bullseye.

If I understand it correctly, the reason behind running our own Jenkins server was the 50 minute run-time limit of Travis CI. Well, the situation is vastly different today, with everybody's mother giving Open-Source projects free minutes on their cloud services. So how relevant is, for example, this issue for the current status of the project? Given that the CI/CD rework will have to include solving the machinekit/machinekit-hal#195 issue, there is quite a big window for any changes. Including the build Dockerfiles will also hopefully solve the current issues with image availability. (Machinekit-HAL currently uses my own DockerHUB account, as there are some missing images in the DovetailAutomata account, which caused machinekit/machinekit-hal#263.)

CI tools of today also use a container-first approach, for which the current build_with_docker script is poorly suited. This script should be left for home use and the build commands exported to functions that can then be called directly from within the container.
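A rough sketch of that idea (hypothetical file name and placeholder step bodies, not the actual Machinekit commands):

#!/bin/bash
# Hypothetical scripts/build_steps.sh: every build step is a plain function,
# so CI can run a single step directly inside the container while the local
# build_with_docker wrapper just sources this file and calls the same names.
set -e

build_packages() {
    # placeholder for the real package build commands
    dpkg-buildpackage -uc -us -B
}

run_runtests() {
    # placeholder for the real runtests invocation
    echo "runtests would go here"
}

# When executed rather than sourced, run the step named by the first argument.
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    "$@"
fi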

There is also a provider called Drone Cloud which gives Open-Source projects free build minutes directly on armhf and arm64 hardware. This could be used to test Machinekit-HAL not only on amd64 as it is now, but also on ARMs.

Last but not least, there are now cmocka tests in Machinekit-HAL. I am all in for unit testing. However, no tests are actually run now and the build servers are not even installing the cmocka Debian package. So that will also need to be addressed.

cerna commented 4 years ago

@zultron,

I can't quite tell what you've done. I think I remember there still being Multi-Arch: problems in MK's (and LCNC's) package dependencies in Stretch, where it was impossible to co-install all the required host-arch dependency packages without apt uninstalling required build-arch dependencies. Have you found this isn't true (at least for most architectures)?

I have prepared a branch which illustrates it better - at least I think so, tell me if this is not the case: Jessie killing.

You can test like this:

docker run -it --rm -v "$(pwd):/mk" debian:buster
apt update
apt-get install build-essential fakeroot devscripts
curl -1sLf 'https://dl.cloudsmith.io/public/machinekit/machinekit/cfg/setup/bash.deb.sh' | bash
dpkg --add-architecture armhf
apt update
cd /mk
debian/configure
mk-build-deps --host-arch=armhf --build-arch=amd64 -ir
scripts/build_source_package false
CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ dpkg-buildpackage -uc -us -t arm-linux-gnueabihf -B

(On Stretch you need to pass the option -d to dpkg-buildpackage.)

For the i686 problem, can you try using the amd64-arch gcc-6 package on i686, and just pass in CFLAGS=-m32 and LDFLAGS=elf_i386 as we do in the current system?

The compilation is not the problem. The problem is the Debian-patched binutils package which takes care of the multiarch paths to linked libraries. (Plus the headers problem, too.)

I have been trying to build the required gcc-6-cross package in the supported version 30 by porting down the required dependencies, but so far without success - I haven't even gotten to building this package itself; so far I have failed on building its dependencies.

I will try again - but I need a break.

Also, the mk-build-deps command where ${HOST} != ${BUILD} automagically depends on crossbuild-essential-${HOST}:${BUILD} and will require either a dummy or the actual package.

If you can make this work, it would be absolutely fantastic. The /sysroot hacks were never meant to be permanent. They solved the problem of their time, but they're absolutely hideously hairy and ugly, and it will be a relief if we can get rid of them FOREVER.

There is the problem with cerna/machinekit-hal@bc7486cfc8eb521f899bfe9df4fb5b5ea4fb20af - I don't know if your solution is the regression or if @ArcEye's one is. Any opinion?

zultron commented 4 years ago

There is the problem with cerna/machinekit-hal@bc7486c - I don't know if your solution is the regression or if @ArcEye's one is. Any opinion?

I'm sorry, I just don't remember that anymore. ;(

I'm probably not going to be able to dive into this any time soon. The py3 port will be my next highest priority, once I get some time.

cerna commented 4 years ago

@zultron,

I'm probably not going to be able to dive into this any time soon. The py3 port will be my next highest priority, once I get some time.

Not a problem, I have a feeling that my problems with this originate from the fact that I have been half-arsing Debian packaging till now and I really need to get my knowledge into a hot state to move forward. So I have been reading the Debian/Ubuntu manuals.

There is still time until Jessie's demise.

But - when you have one or two free hours - could you create the pull requests in upstream LinuxCNC as described here? Either they will float or they won't, but at least we (I) will know where we (I) stand.

BTW, I think that the don't put executables into *-dev packages rule originates from the multiarch rules. Looking at the Ubuntu or Debian manuals, one can see that they recommend not to include executable files. On the other hand, these are actually Python files in Machinekit's case, so architecture independent from the get-go, and both documents are talking about libpackage-dev, where the lib part is the important one.

zultron commented 4 years ago

But - when you have one or two free hours - could you create the pull requests in upstream LinuxCNC as described here? Either they will float or they won't, but at least we (I) will know where we (I) stand.

I will add that list to another PR upstream has requested from me, no problem.

BTW, I think that the don't put executables into *-dev packages rule originates from the multiarch rules. Looking at the Ubuntu or Debian manuals, one can see that they recommend not to include executable files. On the other hand, these are actually Python files in Machinekit's case, so architecture independent from the get-go, and both documents are talking about libpackage-dev, where the lib part is the important one.

That's how I see it. There's no arch dependency for these scripts, and there's plenty of precedent for -dev packages installing executables in /usr/bin. The lib*-dev distinction is a new one for me, +1. Of course none of this matters if inclusion in Debian/Ubuntu isn't a goal....

cerna commented 4 years ago

I will add that list to another PR upstream has requested from me, no problem.

Great :+1: Take it only as a rough sketch. Just thought that it is time to try this and see.

Of course none of this matters if inclusion in Debian/Ubuntu isn't a goal....

On the contrary, I think. Even though I don't agree with many of Debian's policies, I consider them technically apt people and many of their ideas have merit. And this should not be jettisoned, for various reasons:

That being said, I don't consider it an important enough change to need immediate attention, so I will delegate it to sometime, probably never.


On another note, I now have functioning cross-compilation on Stretch which uses the x86_64-linux-gnu-gcc and x86_64-linux-gnu-g++ compilers with passed -m32 flags from the standard Debian Stretch repository and crossbuild-essential-i386:amd64 patched so it depends on the standard gcc and g++, then binutils-i686-linux-gnu (compiled from the 2.30-22 standard Debian issue) and the -multilib standard packages from the Stretch repository. It makes my skin crawl a little and I don't know if it is a good idea to distribute this with other dependencies in the Cloudsmith Machinekit/Machinekit repository, but it compiles and produces the right ELF files. (I need to hard-pass LD=/usr/bin/i686-linux-gnu-ld as an environment variable to the ./configure script, which is another departure from the current Machinekit build-flow.)

(And redoing this with a proper cross-toolchain will be easy. Well, once I can build it, that is.)

Now I will have to go to my personal storage, bring out that one PC which is still i386 and then test it all.

zultron commented 4 years ago

I will add that list to another PR upstream has requested from me, no problem.

Great +1 Take it only as a rough sketch. Just thought that it is time to try this and see.

I stuffed a dozen+ commits into a few PRs for upstream.

Of course none of this matters if inclusion in Debian/Ubuntu isn't a goal....

On the contrary, I think. Even though I don't agree with many of Debian's policies, I consider them technically apt people and many of their ideas have merit. And this should not be jettisoned, for various reasons: [...]

I totally appreciate that. Usually I argue from that side, too. Besides, I'd like to see MK get into Debian some day.

On another note, I now have functioning cross-compilation on Stretch which uses the x86_64-linux-gnu-gcc and x86_64-linux-gnu-g++ compilers with passed -m32 flags from the standard Debian Stretch repository

Excellent!

and crossbuild-essential-i386:amd64 patched so it depends on the standard gcc and g++, then binutils-i686-linux-gnu (compiled from the 2.30-22 standard Debian issue) and the -multilib standard packages from the Stretch repository.

Why did you need this?

It makes my skin crawl a little and I don't know if it is a good idea to distribute this with other dependencies in the Cloudsmith Machinekit/Machinekit repository, but it compiles and produces the right ELF files. (I need to hard-pass LD=/usr/bin/i686-linux-gnu-ld as an environment variable to the ./configure script, which is another departure from the current Machinekit build-flow.)

If they aren't required to (cross-)build MK from source on someone's workstation, I guess that's OK.

Now, I will have to go to my personal storage, bring that one PC which is still i386 and then test it all.

If you have a spare 64-bit PC laying around, you can install an i386-arch OS on it, no problem. Or just run a VM, or a Docker container!

cerna commented 4 years ago

I stuffed a dozen+ commits into a few PRs for upstream.

Thanks, let's see how it goes down then. Is that everything you wanted to try to merge (i.e. everything else is Machinekit specific)?

I'd like to see MK get into Debian some day.

Oh, you came at it from the side here, I see. Well, I don't know how compatible Machinekit's rolling distribution approach is with Debian's snapshot approach. But then there is a precedent with Clang - they have their own Debian repository with the latest and greatest and then there is an older stable version in Debian proper. It may be worth starting with a lesser Debian distribution - there are tens of them.

Why did you need this?

What exactly? The mk-build-deps from devscripts, when told to cross-compile (mk-build-deps --host-arch=i386 --build-arch=amd64 -ir), automatically depends on crossbuild-essential-${HOST}:${BUILD} - and Debian Stretch doesn't have crossbuild-essential-i386:amd64 in the official repository. Building crossbuild-essential-i386:amd64 for Stretch is possible, but it will depend on gcc/g++-i686-linux-gnu (which itself depends on binutils-i686-linux-gnu) - and that is not in the Debian Stretch repository either. So I patched it to require gcc/g++ (the non-cross-compiler).

To use the -m32 flags, you need *-multilib packages and what they pull along.

This is all to make the Dockerfile a bit cleaner. But then, if I want to use the same tooling for Ubuntu builds, I will have to add ifs to work around the whole Ports nonsense which Ubuntu has going on.

If they aren't required to (cross-)build MK from source on someone's workstation, I guess that's OK.

The problem is that the crossbuild-essential-i386:amd64 will act differently from all the others (crossbuild-essential-armhf:amd64 and crossbuild-essential-arm64:amd64). Good for Docker-isolated building of Machinekit, a big problem if somebody pulls it in and tries to use it for something else without knowing.

I still hope that in a week or so I will see what I cannot see now and build the proper cross-building toolchain.

If you have a spare 64-bit PC laying around, you can install an i386-arch OS on it, no problem. Or just run a VM, or a Docker container!

Sure, but I am thinking that at least the first test should be a real one.

zultron commented 4 years ago

The mk-build-deps from devscripts, when told to cross-compile (mk-build-deps --host-arch=i386 --build-arch=amd64 -ir), automatically depends on crossbuild-essential-${HOST}:${BUILD} - and Debian Stretch doesn't have crossbuild-essential-i386:amd64 in the official repository. Building crossbuild-essential-i386:amd64 for Stretch is possible, but it will depend on gcc/g++-i686-linux-gnu (which itself depends on binutils-i686-linux-gnu) - and that is not in the Debian Stretch repository either. So I patched it to require gcc/g++ (the non-cross-compiler).

To use the -m32 flags, you need *-multilib packages and what they pull along.

This is all to make the Dockerfile a bit cleaner. But then, if I want to use the same tooling for Ubuntu builds, I will have to add ifs to work around the whole Ports nonsense which Ubuntu has going on.

Ah ha, thank you. I followed along with your breadcrumbs in a Stretch amd64 container and confirmed everything, saw the missing crossbuild-essential-i386 package. I went down some rabbit holes, and decided it might be easier to create a stub crossbuild-essential-i386 package and manually install anything it would have provided; this at least enables installing the mk-build-deps-generated package:

0001-Initial-commit-of-stub-crossbuild-essential-i386-pac.patch.txt

So now the question is whether standard Stretch/amd64 packages can fill in the blanks. I'm sure I've been down this rabbit hole before, too, but don't remember the result. The amd64-arch binutils should be able to manipulate i386-arch binaries; e.g.:

$ objcopy --help | grep targets
objcopy: supported targets: elf64-x86-64 elf32-i386 elf32-iamcu elf32-x86-64 a.out-i386-linux pei-i386 pei-x86-64 elf64-l1om elf64-k1om elf64-little elf64-big elf32-little elf32-big elf64-littleaarch64 elf64-bigaarch64 elf32-littleaarch64 elf32-bigaarch64 elf32-littlearm elf32-bigarm elf64-alpha ecoff-littlealpha elf32-hppa-linux elf32-hppa elf64-ia64-little elf64-ia64-big pei-ia64 elf32-m32r-linux elf32-m32rle-linux elf32-m68k a.out-m68k-linux coff-m68k versados ieee a.out-zero-big elf32-tradbigmips elf32-tradlittlemips ecoff-bigmips ecoff-littlemips elf32-ntradbigmips elf64-tradbigmips elf32-ntradlittlemips elf64-tradlittlemips elf32-powerpc aixcoff-rs6000 elf32-powerpcle ppcboot elf64-powerpc elf64-powerpcle aixcoff64-rs6000 aix5coff64-rs6000 elf32-s390 elf64-s390 elf32-shbig-linux elf32-sh-linux elf32-sh64-linux elf32-sh64big-linux elf64-sh64-linux elf64-sh64big-linux elf32-sh-fdpic elf32-shbig-fdpic elf32-sparc a.out-sparc-linux elf64-sparc a.out-sunos-big pe-x86-64 pe-bigobj-x86-64 pe-i386 plugin srec symbolsrec verilog tekhex binary ihex

I believe appropriately-named symlinks will tell the binutils programs which host arch to pick, as here:

https://github.com/zultron/mk-cross-builder/blob/master/Dockerfile#L230-L234

Do you think this approach might work? It's a lot more lightweight than patching and rebuilding Debian packages.
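For reference, a minimal sketch of what such symlinks might look like (hypothetical paths; it assumes the native amd64 binutils can already handle i386 objects, as the objcopy output above suggests):

#!/bin/bash
# Give the native binutils host-triplet names so autotools and
# dpkg-buildpackage find "i686-linux-gnu-*" tools without installing
# a separate cross binutils package.
for tool in ld as ar objcopy objdump ranlib strip; do
    ln -sf "/usr/bin/${tool}" "/usr/local/bin/i686-linux-gnu-${tool}"
done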

cerna commented 4 years ago

Do you think this approach might work? It's a lot more lightweight than patching and rebuilding Debian packages.

Yeah, it should. I am not even sure about the need for the symlinks. The problem I can see is in the line in the Makefile where the linker is called directly and it causes a mismatch:

/usr/bin/i686-linux-gnu-ld: Relocatable linking with relocations from format elf32-i386 (objects/modules/machinetalk/msgcomponents/pbmsgs.o) to format elf64-x86-64 (objects/modules/pbmsgs.tmp) is not supported
Makefile:873: recipe for target '../rtlib/modules/pbmsgs.so' failed

That's the reason why I used the binutils-i686-linux-gnu, as it solved the issue without needing to change the Makefile (I am sure there is some option to set the target to the right value).

When you try to build Machinekit-HAL on Ubuntu 18.04, all you need to do is build the crossbuild-essential-i386 (without any patching); everything else is already in the Ubuntu repositories. (The version in Ubuntu is similar to the one I am trying to base my patching on.)

(You cannot build on Ubuntu 20.04 - you can guess three times what the problem is. I am sure you will be able to get it on first try.)

[...] I'm sure I've been down this rabbit hole before, too, but don't remember the result [...]

Yeah, the first thing when I started researching this, I discovered dxsbuild - I didn't even search for a Machinekit solution (I am sure the fact that I am tracked had some impact). But still, it's looking like I am pretty late to the party.

cerna commented 4 years ago

OK, no, the symlinks are needed. By adding the -m option to the line, making it $(Q)$(LD) -m elf_i386 -d -r -o $(OBJDIR)/$*.tmp $^ (or just using scripts/build_source_package false && LDEMULATION="elf_i386" CC="gcc -m32" CXX="g++ -m32" dpkg-buildpackage -uc -us -t i686-linux-gnu -B -d), one can compile and link:

lib/libhalcmd.so:              symbolic link to libhalcmd.so.0
lib/libhalcmd.so.0:            ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=9923facf9fdce378fa5219ad3570ce09ea643a08, with debug_info, not stripped
lib/libhal.so:                 symbolic link to libhal.so.0
lib/libhal.so.0:               ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=c48cf3bd26e61af0381e5241a5e8b4eebb6d97e6, with debug_info, not stripped
lib/libhalulapi.so:            symbolic link to libhalulapi.so.0
lib/libhalulapi.so.0:          ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=996dce8a704301c13351c609dc9b2aa31fe7f161, with debug_info, not stripped
lib/liblinuxcncshm.so:         symbolic link to liblinuxcncshm.so.0
lib/liblinuxcncshm.so.0:       ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=8d1eda9a8f70f5bd02ef4cb9e5e966e2802a6fa4, with debug_info, not stripped
lib/libmachinetalk-npb.so:     symbolic link to libmachinetalk-npb.so.0
lib/libmachinetalk-npb.so.0:   ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=211e0bd197bc437764d58eda518530ab2a50cdc6, with debug_info, not stripped
lib/libmachinetalk-pb2++.so:   symbolic link to libmachinetalk-pb2++.so.0
lib/libmachinetalk-pb2++.so.0: ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=7f2069383ee88d75855a9dd565b191050de62bb0, with debug_info, not stripped
lib/libmkini.so:               symbolic link to libmkini.so.0
lib/libmkini.so.0:             ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=7348afeeba552893462a526ded0e9e6805b6cdfe, with debug_info, not stripped
lib/libmtalk.so:               symbolic link to libmtalk.so.0
lib/libmtalk.so.0:             ELF 32-bit LSB pie executable, Intel 80386, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=23a5f2bd4b0377d658ebc277dffbf2513d053b46, with debug_info, not stripped
lib/librtapi_math.so:          symbolic link to librtapi_math.so.0
lib/librtapi_math.so.0:        ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=6eaa7fb7bd36b28fdb214e684daafafb7d01a440, with debug_info, not stripped
lib/python:                    directory

(Any idea why libmtalk.so is GNU/Linux?)

And output of configure (without symlinks):

root@e5c2ced24f3b:/mk/src# ./configure --host=i686-linux-gnu --build=x86_64-linux-gnu
checking for cython... /usr/bin/cython
checking cython version... 0.25.2
checking build toplevel... /mk
checking installation prefix... run in place
checking for grep... /bin/grep
checking for egrep... /bin/egrep
checking for i686-linux-gnu-gcc... no
checking for gcc... gcc
configure: WARNING: using cross tools not prefixed with host triplet
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for gcc... /usr/bin/gcc
checking for i686-linux-gnu-g++... no
checking for i686-linux-gnu-c++... no
checking for i686-linux-gnu-gpp... no
checking for i686-linux-gnu-aCC... no
checking for i686-linux-gnu-CC... no
checking for i686-linux-gnu-cxx... no
checking for i686-linux-gnu-cc++... no
checking for i686-linux-gnu-cl.exe... no
checking for i686-linux-gnu-FCC... no
checking for i686-linux-gnu-KCC... no
checking for i686-linux-gnu-RCC... no
checking for i686-linux-gnu-xlC_r... no
checking for i686-linux-gnu-xlC... no
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C preprocessor... gcc -E
checking for i686-linux-gnu-objcopy... no
checking for objcopy... objcopy
checking for i686-linux-gnu-ld... no
checking for ld... ld
checking build system type... x86_64-pc-linux-gnu
checking host system type... i686-pc-linux-gnu

All this was with environment variables CC="gcc -m32" CXX="g++ -m32" passed to configure or dpkg-buildpackage.

(And I have just noticed that there is an LDEMULATION environment variable in the original Dockerfile [of course there is...].)

I do think it is a little bit ugly, but asking for help in Debian channels with Stretch-related questions got me the reply to stop using the extremely old code-base of Stretch or stop asking Stretchy questions - so much for Debian being a stable distro with Long Term Support. As I am not going to get more help there, this is probably good enough.


Of course, all this will work only when cross-compiling from amd64 -> i386, not when cross-compiling from arm64 or other architectures. So it doesn't 100% solve the problem @the-snowwhite was asking about.

But oh well... (next time)

zultron commented 4 years ago

Yep, I remembered $LDEMULATION reading your 2nd-to-last comment. Nice going!

Yeah, the first thing when I started researching this, I discovered dxsbuild - I didn't even search for a Machinekit solution (I am sure the fact that I am tracked had some impact). But still, it's looking like I am pretty late to the party.

Ha ha! Yes, dxsbuild might've been the first generation of the cross-build scripts. Believe it or not, the 3rd or 4th generation mk-cross-builder, no matter how ugly, is still far more elegant. Hopefully you're going to clean up 90% of the remaining ugliness with this project.

Maybe it's time to do the Py3 port and drop support for Stretch so you can stop worrying about all of this. :D

gibsonc commented 4 years ago

Maybe it's time to do the Py3 port and drop support for Stretch so you can stop worrying about all of this. :D

I am willing to assist with this to the best of my ability. I learned a lot giving it a go with LCNC. The problem was that there was no structure and everyone was doing their own thing, so I lost interest. It also didn't help that I didn't quite know how to do pull requests, so I ended up with a fork with so many changes that it probably couldn't be used anyway. I seem to remember seeing that LCNC master now has Py3 support, so how do we merge the changes into here in a methodical manner?

The whole Docker concept here really appeals to me, and if we can help to get closer to that goal then I am all in.

zultron commented 4 years ago

Maybe it's time to do the Py3 port and drop support for Stretch so you can stop worrying about all of this. :D

I am willing to assist with this to the best of my ability. I learned a lot giving it a go with LCNC. The problem was that there was no structure and everyone was doing their own thing, so I lost interest. It also didn't help that I didn't quite know how to do pull requests, so I ended up with a fork with so many changes that it probably couldn't be used anyway. I seem to remember seeing that LCNC master now has Py3 support, so how do we merge the changes into here in a methodical manner?

Happily, @rene-dev pulled all those efforts together. I'm pretty sure the emcmodule.cc work should translate over from LCNC pretty easily; doing that from scratch would be the hard part for me. The Cython stuff should Just Work. So if I'm not mistaken, it's just a matter of converting .py files after that. Issue machinekit/machinekit#563 (machinekit/machinekit-hal#114) is about that. I, for one, would very much welcome your help.

The whole Docker concept here really appeals to me, and if we can help to get closer to that goal then I am all in.

I put MK-HAL in Docker containers in other projects I work on, and it's fantastic. I can switch developing between projects that depend on different OS versions without having to have multiple machines and without rebooting my dev system. It also turns out that userland RT threads work just fine inside of containers, so we use containers to distribute software updates in one of my projects.

cerna commented 4 years ago

@gibsonc,

I am willing to assist with this to the best of my ability.

Great! In my books: The more, the merrier.

The problem was that there was no structure and everyone was doing their own thing, so I lost interest.

So project management was the problem. Well, I guess in Open-Source it often is. While I wouldn't presume to tell anybody what he should do (people tend to not like it very much and as such it has a negative impact on workflow), I think that there are a few sets of tasks which need doing (this is from somebody who is using Python because everybody else is, not because he particularly likes it):

It also didn't help that I didn't quite know how to do pull requests, so I ended up with a fork with so many changes that it probably couldn't be used anyway.

I would recommend starting with small pull requests changing a few files at a time. Machinekit's contributing policy states that every pull request which runs tests green has to be merged (well, it can be immediately reverted if it is doing something brain-dead), so there is no discussion from so-called core developers about what should and should not be merged.

I personally like the per-partes approach where the work is done in smallish functioning bites which can be incorporated into the main repository without too much trouble for other developers and can generate discussion from other parties. (But not everything can be done this way.)

The whole Docker concept here really appeals to me, and if we can help to get closer to that goal then I am all in.

What exactly (multiple cases were discussed in this issue)? Building packages/running tests? Using Docker for git hook execution (formatting and such)? Or running the actual application in a Docker container?

cerna commented 4 years ago

Believe it or not, the 3rd or 4th generation mk-cross-builder, no matter how ugly, is still far more elegant. Hopefully you're going to clean up 90% of the remaining ugliness with this project.

I do believe you! I have no reason not to. So far I am at about 150 LOC of Dockerfile, even with the licence header, with --build-args of DEBIAN_DISTRO, DISTRO_VERSION and HOST_ARCHITECTURE. Now I need to rewrite the build scripts so they are used as a CMD to docker run and do not call the docker run command themselves. Then hopefully I can use the same logic for Github Actions, Drone Cloud CI, Travis CI and whatever else I come upon.

You seem pretty level-headed about abandoning something which surely cost you many hours of effort. It's quite refreshing, but I am not sure I will be able to reciprocate when the time comes. (I have been reading one very similar mailing list where even one misunderstood comment about removing non-working functionality caused a major emotional incident. [Not that I want to compare this in any way. It's just my source of funnies.])

Maybe it's time to do the Py3 port and drop support for Stretch so you can stop worrying about all of this. :D

Ergo this. I am all in favour of the Python 3 switch (just don't leave the Python 2 cruft in), but I would like to keep the Stretch support a little longer. (If it won't cause too much trouble.) :sunglasses:

I even have a compulsion to extend the number of supported distros, so the notion of dropping something is not exactly up my alley. The simplest one would probably be the Fedora/RHEL option - given that there is already work done in the deprecated Machinekit/Machinekit repository. But it seems like an unnecessary devops task at the moment (as I know nothing about RPM packaging) and from a developer point of view it has no advantage - it is still a glibc and systemd distro.

BTW, do you have the bash scripts we were talking about somewhere public, the ones which should have similar functionality to fixuid? I am thinking about what is easier - recompiling the Go application for all Machinekit supported architectures, or rewriting it in something else. Never mind, I investigated and the solution to this in bash or another scripted language which cannot be setuid is too hairy.

cerna commented 4 years ago

Looking through the build scripts in the scripts/ and debian/ directories, it looks like there is duplication of functionality: the debian/configure script has flags to prepare both the Debian changelog and the Debian source tarball, and scripts/build_source_package also prepares both the Debian changelog and the Debian source tarball - but uses completely different logic. Moreover, it is named build_source_package, but so far it was used only to prepare the changelog. In my simple and naïve opinion, having two tools which do the same thing with two different outcomes is bonkers.

(What I mean to say, is there any historical [or current] reason for this distinction?)

Then there is the scripts/build_docker script which currently builds the packages, builds the RIP version, builds the Coverity test, builds documentation (or something like it) and tests what will become the EMC Application.

My proposal is to delete scripts/build_source_package without replacement and use solely debian/configure. Then expand the debian/configure script to encompass the missing functionality from build_source_package - mainly how the changelog is created and which information is stored in it. I personally like the idea of a version number based on the number of commits in the branch much better than a version number based on the time of build, but it will probably require bumping the Machinekit-HAL version number again to 0.4 to avoid problems with upgrading.

(I am also thinking about what else should be in the changelog message.)

Then delete scripts/build_docker and create separate scripts for each of its functionalities under the debian/ folder. Then during build it would be called like:

docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.arm64_10 debian/configure -c
docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.arm64_10 dpkg-buildpackage -uc -us -B
or
docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.arm64_10 debian/buildpackages
docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.arm64_10 debian/signpackages
docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.amd64_10 debian/ripruntests
docker run --rm -it -u "$(id -u):$(id -g)" -v "$(pwd):/home/machinekit/build/machinekit-hal" -w "/home/machinekit/build/machinekit-hal" machinekit-hal-debian-builder-v.amd64_10 debian/coverity

and so on.

The scripts/ folder would then only contain a debian-builder-docker-wrapper used for manually calling the docker run or podman run or whatever command.

That way there would be minimal configuration which would need passing around during the build. (Most things are burned into the Docker builder image and can be queried at runtime - like DEB_HOST_ARCHITECTURE and so on - so most builders will build the right version with only dpkg-buildpackage -uc -us -B and nothing else.)
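A rough sketch of what such a wrapper might look like (hypothetical script; the image tag and the in-container paths follow the examples above):

#!/bin/bash
# Hypothetical scripts/debian-builder-docker-wrapper: the first argument
# selects the builder image, the rest is the command to run inside it,
# e.g. `debian/configure -c` or `dpkg-buildpackage -uc -us -B`.
set -e
IMAGE="$1"
shift
docker run --rm -it \
    -u "$(id -u):$(id -g)" \
    -v "$(pwd):/home/machinekit/build/machinekit-hal" \
    -w "/home/machinekit/build/machinekit-hal" \
    "${IMAGE}" "$@"

Usage would then be, for example, scripts/debian-builder-docker-wrapper machinekit-hal-debian-builder-v.amd64_10 dpkg-buildpackage -uc -us -B.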


If anybody knows how to solve the sudo permission problem for users who do not exist in the /etc/passwd file in connection with Docker images, I would like to hear it. So far I have solved it like this and this - however, that solution seems kind of hairy.

(BTW, that Dockerfile/Entrypoint combo is functioning and tests run green. I - at least - think that it is a little bit cleaner and the output images are also a bit smaller at 1-1.5 GB apiece.)

Build with:

#!/bin/bash

ARRAY=(stretch buster)
ARRAY2=(i386 amd64 armhf arm64)

for DEB in "${ARRAY[@]}"
do
    for ARCH in "${ARRAY2[@]}"
    do
        docker build \
        --build-arg DEBIAN_DISTRO_BASE="debian:${DEB}" \
        --build-arg HOST_ARCHITECTURE=${ARCH} \
         -t machinekit-hal-debian-builder-v.${DEB}_${ARCH} -f Dockerfile.new  ${MK_HOME}
    done
done

Then there is the problem that on Debian Stretch sudo or su doesn't like to run without a terminal connected to Docker, so I will have to investigate how to circumvent this.

(EDIT: Well, and after posting it here, I discovered that it doesn't work on Stretch at all. [Benefits of manual testing.])

zultron commented 4 years ago

Looking through the build scripts in the scripts/ and debian/ directories, it looks like there is duplication of functionality: the debian/configure script has flags to prepare both the Debian changelog and the Debian source tarball, and scripts/build_source_package also prepares both the Debian changelog and the Debian source tarball - but uses completely different logic. Moreover, it is named build_source_package, but so far it was used only to prepare the changelog. In my simple and naïve opinion, having two tools which do the same thing with two different outcomes is bonkers.

(What I mean to say, is there any historical [or current] reason for this distinction?)

Not bonkers, just one of those things that happens by accident. Definitely only keep one.

Then there is the scripts/build_docker script which currently builds the packages, builds the RIP version, builds the Coverity test, builds documentation (or something like it) and tests what will become the EMC Application.

My proposal is to delete scripts/build_source_package without replacement and use solely debian/configure. Then expand the debian/configure script to encompass the missing functionality from build_source_package - mainly how the changelog is created and which information is stored in it. I personally like the idea of a version number based on the number of commits in the branch much better than a version number based on the time of build, but it will probably require bumping the Machinekit-HAL version number again to 0.4 to avoid problems with upgrading.

Go for it. I personally prefer serial numbers based on CI-provided build numbers, but in the case of Travis CI, I never found a build number that was shared between all the different jobs, so we had to invent other ways to ensure an increasing value that could be identical even when generated by independent jobs. The disadvantage is the extra expense of checking out an unshallow git tree.

Then delete scripts/build_docker and create separate scripts for each of its functionalities under the debian/ folder. Then during build it would be called like:

[...]

and so on.

Go for it.

If anybody knows how to solve the sudo permission problem for users who do not exist in the /etc/passwd file in connection with Docker images, I would like to hear it. So far I have solved it like this and this - however, that solution seems kind of hairy.

That's not too bad. I solved that problem in the past by adding the user to /etc/passwd in the Docker image ENTRYPOINT script. Also hairy.
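A minimal sketch of that kind of entrypoint (hypothetical; it assumes the image makes /etc/passwd writable for the build group at image build time, otherwise the -u trick runs into exactly the chicken-and-egg problem described in the next comment):

#!/bin/bash
# Hypothetical entrypoint.sh: if the UID passed via `docker run -u` has no
# passwd entry, add one so tools that look the user up keep working.
if ! getent passwd "$(id -u)" > /dev/null 2>&1; then
    echo "builder:x:$(id -u):$(id -g)::/home/machinekit:/bin/bash" >> /etc/passwd
fi
exec "$@"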

cerna commented 4 years ago

[...] I solved that problem in the past by adding the user to /etc/passwd in the Docker image ENTRYPOINT script[...]

For this to work you need to run the ENTRYPOINT with elevated privileges. However, when you specify -u "$(id -u):$(id -g)" during docker run, the containerized process will run with normal user access rights (or those of the user defined in the Dockerfile under this combination of UID/GID, which is usually none) and you cannot manipulate the /etc/passwd file. It is kind of a chicken-and-egg situation, really. So you need some kind of setuid application which can be run by anybody.

The sudo or su code has to be different in the Stretch release, because no amount of Defaults !requiretty in conf files solved the issue when running without a terminal, while everything works fine in Buster. So I moved the fixuid binary back, only this time I patched it to do only basic things, compiled it for i386 and armhf in addition to amd64 and arm64, and packaged it for Debian Stretch, Buster and Bullseye and Ubuntu Bionic. It now resides in the Machinekit/Machinekit Cloudsmith repository.

Maybe in a few years the bash script can make a comeback.

(Or am I missing something?)

[...] I personally prefer serial numbers based on CI-provided build numbers[...]

I would like the version numbering to be deterministic and reproducible. So in case one needs to recompile some older version of Machinekit, it will have a predictable name and the dpkg tool will treat it as an older version, like it rightfully is. And leveraging the fact that the Machinekit project's repositories cannot have the @master branch rewritten, the commit count is a good match for this.
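A sketch of how such a deterministic version string could be derived (hypothetical format; the 0.4 base is just the bump mentioned above, and an unshallow checkout is assumed):

#!/bin/bash
# The count of commits reachable from HEAD is stable for a given commit and
# only grows on master, so dpkg orders newer builds as newer versions; the
# short hash makes the exact source state traceable.
COMMIT_COUNT="$(git rev-list --count HEAD)"
SHORT_SHA="$(git rev-parse --short HEAD)"
VERSION="0.4.${COMMIT_COUNT}-1~git${SHORT_SHA}"
echo "${VERSION}"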

Not bonkers, just one of those things that happens by accident. Definitely only keep one.

I didn't realize it at first, but the debian/configure script has a problem: it is using (and will need to be using, no way around it) tools which are not in the default Debian image - like git, bzip2 and lsb_release (LinuxCNC has it even worse, it requires Python). So I am solving this by adding a debian/bootstrap script which will have to run to completion without requiring any additional packages (I am testing it on the official Docker images) on all supported systems - basically it will generate the control file and check for the presence of the devscripts package.
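A minimal sketch of the kind of check such a bootstrap script could do on a stock image (hypothetical fragment; the real script will also have to generate the control file):

#!/bin/bash
# Hypothetical debian/bootstrap fragment: must run on a bare Debian image,
# so it only uses dpkg itself to verify that devscripts is installed.
if ! dpkg -s devscripts > /dev/null 2>&1; then
    echo "The 'devscripts' package is required: apt-get install devscripts" >&2
    exit 1
fi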

The libck-dev package is now a default dependency for all systems, so it can be passed to control.in, and I built the python-gtksourceview2 package for Buster, so that can also be passed to control.in. All that is remaining (pretty much) is the Xenomai check.

I think this will simplify the whole process. And also make it more accessible for users who (as we discussed) for some reason don't want to use Docker containers for building/compiling. (Because one will no longer need to prepare the environment - Machinekit will tell you what it needs.)

[...]It also turns out that userland RT threads work just fine inside of containers, so we use containers to distribute software updates in one of my projects[...]

Any idea if Snaps/Flatpaks will also work?

cerna commented 4 years ago

OK, I have reimplemented the build functionality in basthon 3: commits - and surprisingly it all seems to work (package building, image building and Machinekit-HAL testing).

Hopefully it will be cleaner for potential developers and I haven't just spent time reinventing the wheel. I just need to patch the Github Actions and then open a pull request.

(Even though it is definitely not perfect, it is good enough, and while trying to keep a nice linear history in git, I completely trashed it while rebasing multiple times. So I am a little tired of the endless history rewriting. I can change what needs to change later.)


And it looks like Github changed its GraphQL Packages API, so the current Github Actions workflow will fail. (It was a preview and now it should be integrated into the proper specification.) Not sure how long it hasn't been working, but nobody complained, so hopefully not that long.

cerna commented 4 years ago

Pull request #288 removed the temp/temp tagging nonsense, reworked the Docker image building script (so far only local native builds; there is an issue with SSL certificates and curl somewhere and I so far have no idea where - it could be Docker buildx, Debian, QEMU or something else) and changed how the packages are being built to standard Debian Multi-Arch. I am hoping that there will be no problems (of course), but testing on real hardware is testing on real hardware (@the-snowwhite, if I could bother you).

Now onto implementing the Travis CI arm64 build and Drone CI amd64, armhf and arm64 ones.

And mainly actually building and packaging EMCApplication.


BTW, good to know that /home/machinekit/build/machinekit-hal/tests/threads.0 fails non-deterministically even on an arm64 runner. @kinsamanka has some changes to it in his #250. I will have to look at whether it is a solution to this problem.

the-snowwhite commented 4 years ago

@cerna Sure, although could you state in a bit more low-level, verbose way what you need to have tested and how?

cerna commented 4 years ago

@the-snowwhite, right, sorry...

Well, the code of the application itself was not changed; all that was changed was a few minor build recipes and the way packages are built. When running the RIP runtests, the tests are still green. So this should be OK.

What I need to know - in a nutshell - is if you can see some degradation when running the newly built packages in your standard usage cases in comparison to the old ones. I have looked at the binaries, and they all have the correct ELF header, but I could have missed some. Also, there was a change in how the Python binaries are named.

So, what I would appreciate is:

the-snowwhite commented 4 years ago

@cerna sorry for my late reply, for some reason I got no notification of your response. I'm not sure if I can test my normal use cases with just a standalone machinekit-hal package. Currently I'm running a CNC router and 3D printer via Machinekit client and QtQuickVcp with Python-style HAL configs. These setups rely on files that never (or not just yet) made it into machinekit-hal, like this: https://github.com/machinekit/machinekit-cnc/tree/master/lib/python/fdm ... ! I'm not sure if there are other (machinekit-cnc) dependencies; the fdm folder can be manually copied to the HAL config folder. My Python files have these headers:

import sys
import os
import subprocess
import importlib
from machinekit import launcher
from machinekit import config
from time import *

import os
from machinekit import rtapi as rt
from machinekit import hal
from machinekit import config as c
from fdm.config import velocity_extrusion as ve
from fdm.config import base
from fdm.config import storage
from fdm_local.config import motion
import cramps as hardware

It would be nice to be able to run these setups without machinekit-cnc or l-cnc.

the-snowwhite commented 4 years ago

@cerna I attempted running my OpenBuilds OX router (mklauncher) config from a fresh Debian Stretch SD card with only the latest machinekit-hal package installed. The result in Machinekit client was:

starting configserver... done
starting machinekit... /bin/sh: 1: machinekit: not found

The culprit is that this run file requires the machinekit executable: https://github.com/the-snowwhite/Hm2-soc_FDM/blob/master/Cramps/PY/OX/run.py#L34

cerna commented 4 years ago

@the-snowwhite, thank you for the testing!

Yes, the cut between Machinekit-HAL and Machinekit-CNC and now what will become the EMCApplication is not that clean (pretty bloody, actually) and there is definitely work to be done to clean it up. I personally consider the split into smaller repositories one of the best decisions (too bad it wasn't done sooner, given that the talk about it goes back to the beginning) and wouldn't want Machinekit-HAL to start growing into a Machinekit [original]-like repository. That being said, I think that the CNC-specific Python modules should go into their own git repository with a separate package distribution route (while creating a clean tree dependency structure between all parts). Basically, push all the CNC stuff that depends on other specific CNC parts out of Machinekit-HAL and into its own logical home.

I quite like what @kinsamanka started in #250 in https://github.com/machinekit/machinekit-hal/blob/df5e884d31f8e2668ad80c1c2be66028a64cc3b4/debian/control.in - putting things into their own logical packages for distribution. (I have been getting my CMake knowledge into a hot, useful state and so far I am liking the modern approach much better than I last remembered, so transforming Machinekit-HAL to CMake is the next logical step in the line CI/CD - Docker builder - CMake buildsystem - package split. [Before any real development can happen, I guess.])

But until that happens, it will be kinda unusable for these kinds of tasks.

cerna commented 4 years ago

@zultron in #293 was bitten by a problem stemming from Docker image caching in the Github Actions CI - it can happen that a Docker image built for one branch is used for the test build workflow of another branch. And because both branches can have fundamental differences, the run fails even though it should not. I realized this problem at the time I was implementing it, but given that for my work style it would not cause a problem, I decided to postpone solving it to a later date.

So now I had to think about it, had an aha moment and came up with the following design flow (development of the pull request in progress):

Hopefully, this workflow will be a lot more immune to cross-branch issues and also solve the issue with force pushing.

BTW, the renaming in #295 left out the Docker image upload jobs and the Debian packages upload job, so those should also be shortened to be in line with the change.

zultron commented 4 years ago

I'm looking at all the amazing CI infrastructure @cerna has put together here to help me get off the ground more quickly with #297, building the EtherLab driver for the same OS and architecture variants that the CI system builds for Machinekit.

After a whole lot of copy'n'paste, I'm starting to wish that some of that work was pulled out of the Machinekit-HAL repo and made independent, since so much of it is reusable, such as the Docker base images, Python container build scripts, JSON distro settings and entrypoint script; that is, almost all of it.

I don't want to expand the scope of this issue, so maybe this goes in a new one, but this would be very welcome if the project ends up with many separate repos all building packages in Docker for the same OS+arch matrix.

zultron commented 4 years ago

[...] it can happen that a Docker image built for one branch is used for the test build workflow of another branch. And because both branches can have fundamental differences, the run fails even though it should not. [...]

So now I had to think about it, had an aha moment and came up with the following design flow (development of the pull request in progress): [...]

I'm sure you have this handled already, but just in case, for another project, I needed to do something similar.

We wanted to know if a checked out revision matched a Docker image, and pull a new one if not. The scheme I devised was to find a way to generate a reproducible hash of the files used to build the Docker image. The hash would only change when the image input files would change, and never otherwise. This hash was then built into the image tag, although it could have been put into an image label as well, say -l IMAGE_HASH=deadbeef.

So for this application, you'd do something similar: compute the hash from the repo files, query the Docker registry for an image with a matching IMAGE_HASH label, and then either pull the existing image or else build one if none exists.

I can produce the hash generation commands if needed. They're not rocket science, but there are a few gotchas we had to address before they were 100% reliable.
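For illustration, a sketch of such a hash under the assumption that everything feeding the image build lives under a single directory (here hypothetically docker/); the explicit sort is one of those gotchas, since find's output order is not stable across machines:

#!/bin/bash
# Hash the *contents* of all image input files in a stable order; the result
# changes only when an input file changes, never because of timestamps.
IMAGE_HASH="$(find docker/ -type f -print0 \
    | sort -z \
    | xargs -0 sha256sum \
    | sha256sum \
    | cut -c1-12)"
echo "IMAGE_HASH=${IMAGE_HASH}"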

zultron commented 4 years ago

After a whole lot of copy'n'paste, I'm starting to wish that some of that work was pulled out of the Machinekit-HAL repo and made independent, since so much of it is reusable, such as the Docker base images, Python container build scripts, JSON distro settings and entrypoint script; that is, almost all of it.

Here's an example of what they do in a bunch of ROS-Industrial repos. Projects that use it simply check out the repo into their source tree in CI and run the scripts. This is pretty versatile, and works with a bunch of CI systems, a nice touch.

zultron commented 4 years ago

After a whole lot of copy'n'paste, I'm starting to wish that some of that work was pulled out of the Machinekit-HAL repo and made independent, since so much of it is reusable, such as the Docker base images, Python container build scripts, JSON distro settings and entrypoint script; that is, almost all of it.

I've spent quite some hours now with the GH Actions CI config while working on #297, and while not done yet, at least the basic Docker image and package build flows work. One of the things I've done is try to remove bits specific to the MK-HAL repository and make them generic and configurable, in hopes of doing something like the above, separating out a shared CI config.

As I know very well from personal experience, CI configurations for the project are necessarily very complex. @cerna has done a fantastic job building up this new system, as well as vastly simplifying the Docker builders we used to use (which were nasty and hairy, and which I wrote!). Starting from the MK-HAL configuration for #297 has saved me unknown dozens of hours, since I could copy and paste 90% and make it work with only minor changes. I'm really pleased with it, so please keep that in mind even as I propose improvements below!

As it is now, there is a LOT of logic built into the workflow file in the form of embedded shell scripts. It's likely my own deficiency that these turn out to be quite delicate for me, and going through repeated iterations of pushing changes to GH, waiting several minutes for the Actions run, and going back to fix problems has been frustrating and time-consuming.

If the CI scripts were able to run stand-alone in a local dev environment, these iterations could be drastically shortened by being able to run individual steps independently, without having to queue up the entire workflow in GH Actions. The basic workflow could stay the same, with the GH Actions workflow keeping the same general structure and maintaining the same use of output parameters for carrying configuration data between jobs, encrypted secrets for access credentials, Docker registries for caching images, etc. There are already Python scripts used to build the container and packages, so it makes sense to convert workflow file shell script logic into Python; then, the differences between running the workflow in a local dev environment vs. the GH Actions environment could be encapsulated using object-oriented programming. In the same way, the workflow could be adapted to other CI systems, should the need arise, and of course the workflow can be shared between MK-HAL, the EtherLab Master and Linuxcnc-EtherCAT HAL driver repositories (and potentially the MK-EMC, though it already has a CI configuration), and improvements will benefit all repos.

Does the problem description make sense, and does the proposal sound reasonable?

cerna commented 4 years ago

Truth be told, I wasn't thinking about using this outside Machinekit-HAL. Of course, I am not saying that it is not possible. It is possible. But some analysis of common requirements and processes across all repositories or projects where it could be used will be needed.

Here's an example of what they do in a bunch of ROS-Industrial repos. Projects that use it simply check out the repo into their source tree in CI and run the scripts. This is pretty versatile, and works with a bunch of CI systems, a nice touch.

This looks like a warehouse for commonly used scripts. Well, one can do something similar in Github Actions with one's own actions. These can be written in Typescript, or can be a Docker container (and then use practically any language possible), or newly can be a script. The best part - and really the only part which makes it specific - is that the Input/Output is already solved. Drone has something similar with its modules, but not so versatile, I would say. I don't know about Travis; I don't think so, as the whole ecosystem is not pluggable and modular (I think that they will try to introduce similar concepts to stay in the game and relevant, but it surely won't happen overnight). And - at least in the documentation portions I have read so far - they talk about simple bash scripts. (But at least they introduced workspaces - something like an immutable shared drive that following jobs can use.)

The Python scripts are nothing to write home about, but they sure could be abstracted into some classes, and then the repository would only have some basic settings Python object (or configuration JSON) which would specify the labels or arguments sent to the Docker builder. (I am also afraid this would start to reimplement some already existing project - isn't there something readily available and usable already?)

The problem also is that the Drone and Travis CI are both somewhat working, but actually in pretty terrible condition. It turns out the Github Actions are pretty advanced (even though I originally thought otherwise). And to enable the same functionality on both Drone and Travis, one has to use local git hook managers with a (probably Dockerized) running environment - because only Github Actions (and Azure, as far as I know) allow dynamically creating and altering the recipes at runtime (up to a point, obviously). So I put it on the back-burner (because for now it is good enough and there is #200 and so on).

A nice thing about that repository: it has more pull requests than the whole of Machinekit-HAL. If Machinekit implemented something similar, should that presume that Machinekit-HAL would be integral to all of it - in other words, that it would be building and testing targets dependent on Machinekit-HAL - or do you want a more universal system/approach which would not be dependent on Machinekit in any way?

Because this looks like it is targeted at modules.

cerna commented 4 years ago

Travis CI integration in #299 is behaving oddly - for example, job 108.5 failed, the log says that it failed, but the whole job is green, i.e. it passed.

Basically, the first four jobs, which are the build-RIP-and-run-runtests ones, failed (as they should), but the Debian package building ones all passed - even though they should have failed too.

This means that some part of the bash script is eating the error code and the Travis runner gets 0 in cases where it should have got something else.

zultron commented 4 years ago

Travis CI integration in #299 is behaving oddly - for example, job 108.5 failed, the log says that it failed, but the whole job is green, i.e. it passed.

That's because the script needs to either set -e or else replace the ; characters with &&. Similar problem as in #293.

https://github.com/rene-dev/machinekit-hal/blob/python3/.travis.yml#L144-L163
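In other words (illustrative placeholder commands, not the actual Travis steps):

#!/bin/bash
# With `cmd_a ; cmd_b` the job's exit status is only cmd_b's, so a failure
# in cmd_a is silently eaten and Travis reports green. Either fix works:

# option 1: abort on the first failing command
set -e
cmd_a
cmd_b

# option 2: chain explicitly so a failure short-circuits and propagates
# cmd_a && cmd_b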

cerna commented 4 years ago

That's because the script needs to either set -e or else replace the ; characters with &&. Similar problem as in #293.

Yup, I did both at around the same time, so I am not surprised I broke them the same way. I will create a PR later today.

cerna commented 4 years ago

@lskillen, I am having trouble with the automatic installation script on Debian Bullseye in a Docker container. Have you encountered a similar problem?

The simplest way to reproduce this (the most important information is at the end):

mars@mars:~/Downloads$ docker run -it --rm debian:bullseye
root@9e2946ef8be8:/# apt update
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 Packages [7675 kB]
Fetched 7791 kB in 4s (1746 kB/s)   
Reading package lists... Done
Building dependency tree       
Reading state information... Done
22 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@9e2946ef8be8:/# apt install curl
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  ca-certificates krb5-locales libbrotli1 libcurl4 libgssapi-krb5-2 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common libnghttp2-14 libpsl5 librtmp1 libsasl2-2
  libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 openssl publicsuffix
Suggested packages:
  krb5-doc krb5-user libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
The following NEW packages will be installed:
  ca-certificates curl krb5-locales libbrotli1 libcurl4 libgssapi-krb5-2 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common libnghttp2-14 libpsl5 librtmp1
  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 openssl publicsuffix
0 upgraded, 22 newly installed, 0 to remove and 22 not upgraded.
Need to get 5243 kB of archives.
After this operation, 12.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://deb.debian.org/debian bullseye/main amd64 krb5-locales all 1.17-10 [94.6 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 libssl1.1 amd64 1.1.1g-1 [1543 kB]
Get:3 http://deb.debian.org/debian bullseye/main amd64 openssl amd64 1.1.1g-1 [846 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 ca-certificates all 20200601 [158 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 libbrotli1 amd64 1.0.7-7 [267 kB]
Get:6 http://deb.debian.org/debian bullseye/main amd64 libkrb5support0 amd64 1.17-10 [64.6 kB]
Get:7 http://deb.debian.org/debian bullseye/main amd64 libk5crypto3 amd64 1.17-10 [115 kB]
Get:8 http://deb.debian.org/debian bullseye/main amd64 libkeyutils1 amd64 1.6.1-2 [15.4 kB]
Get:9 http://deb.debian.org/debian bullseye/main amd64 libkrb5-3 amd64 1.17-10 [366 kB]
Get:10 http://deb.debian.org/debian bullseye/main amd64 libgssapi-krb5-2 amd64 1.17-10 [156 kB]
Get:11 http://deb.debian.org/debian bullseye/main amd64 libsasl2-modules-db amd64 2.1.27+dfsg-2 [69.0 kB]
Get:12 http://deb.debian.org/debian bullseye/main amd64 libsasl2-2 amd64 2.1.27+dfsg-2 [106 kB]
Get:13 http://deb.debian.org/debian bullseye/main amd64 libldap-common all 2.4.50+dfsg-1 [92.9 kB]
Get:14 http://deb.debian.org/debian bullseye/main amd64 libldap-2.4-2 amd64 2.4.50+dfsg-1+b1 [228 kB]
Get:15 http://deb.debian.org/debian bullseye/main amd64 libnghttp2-14 amd64 1.41.0-3 [74.0 kB]
Get:16 http://deb.debian.org/debian bullseye/main amd64 libpsl5 amd64 0.21.0-1.1 [55.3 kB]
Get:17 http://deb.debian.org/debian bullseye/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d.1-2+b2 [60.8 kB]
Get:18 http://deb.debian.org/debian bullseye/main amd64 libssh2-1 amd64 1.8.0-2.1 [140 kB]
Get:19 http://deb.debian.org/debian bullseye/main amd64 libcurl4 amd64 7.68.0-1+b1 [322 kB]
Get:20 http://deb.debian.org/debian bullseye/main amd64 curl amd64 7.68.0-1+b1 [249 kB]
Get:21 http://deb.debian.org/debian bullseye/main amd64 libsasl2-modules amd64 2.1.27+dfsg-2 [104 kB]
Get:22 http://deb.debian.org/debian bullseye/main amd64 publicsuffix all 20200729.1725-1 [118 kB]
Fetched 5243 kB in 3s (1829 kB/s)     
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package krb5-locales.
(Reading database ... 6760 files and directories currently installed.)
Preparing to unpack .../00-krb5-locales_1.17-10_all.deb ...
Unpacking krb5-locales (1.17-10) ...
Selecting previously unselected package libssl1.1:amd64.
Preparing to unpack .../01-libssl1.1_1.1.1g-1_amd64.deb ...
Unpacking libssl1.1:amd64 (1.1.1g-1) ...
Selecting previously unselected package openssl.
Preparing to unpack .../02-openssl_1.1.1g-1_amd64.deb ...
Unpacking openssl (1.1.1g-1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../03-ca-certificates_20200601_all.deb ...
Unpacking ca-certificates (20200601) ...
Selecting previously unselected package libbrotli1:amd64.
Preparing to unpack .../04-libbrotli1_1.0.7-7_amd64.deb ...
Unpacking libbrotli1:amd64 (1.0.7-7) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../05-libkrb5support0_1.17-10_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.17-10) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../06-libk5crypto3_1.17-10_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.17-10) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../07-libkeyutils1_1.6.1-2_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.6.1-2) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../08-libkrb5-3_1.17-10_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.17-10) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../09-libgssapi-krb5-2_1.17-10_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.17-10) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../10-libsasl2-modules-db_2.1.27+dfsg-2_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.27+dfsg-2) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../11-libsasl2-2_2.1.27+dfsg-2_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.27+dfsg-2) ...
Selecting previously unselected package libldap-common.
Preparing to unpack .../12-libldap-common_2.4.50+dfsg-1_all.deb ...
Unpacking libldap-common (2.4.50+dfsg-1) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../13-libldap-2.4-2_2.4.50+dfsg-1+b1_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.50+dfsg-1+b1) ...
Selecting previously unselected package libnghttp2-14:amd64.
Preparing to unpack .../14-libnghttp2-14_1.41.0-3_amd64.deb ...
Unpacking libnghttp2-14:amd64 (1.41.0-3) ...
Selecting previously unselected package libpsl5:amd64.
Preparing to unpack .../15-libpsl5_0.21.0-1.1_amd64.deb ...
Unpacking libpsl5:amd64 (0.21.0-1.1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../16-librtmp1_2.4+20151223.gitfa8646d.1-2+b2_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2+b2) ...
Selecting previously unselected package libssh2-1:amd64.
Preparing to unpack .../17-libssh2-1_1.8.0-2.1_amd64.deb ...
Unpacking libssh2-1:amd64 (1.8.0-2.1) ...
Selecting previously unselected package libcurl4:amd64.
Preparing to unpack .../18-libcurl4_7.68.0-1+b1_amd64.deb ...
Unpacking libcurl4:amd64 (7.68.0-1+b1) ...
Selecting previously unselected package curl.
Preparing to unpack .../19-curl_7.68.0-1+b1_amd64.deb ...
Unpacking curl (7.68.0-1+b1) ...
Selecting previously unselected package libsasl2-modules:amd64.
Preparing to unpack .../20-libsasl2-modules_2.1.27+dfsg-2_amd64.deb ...
Unpacking libsasl2-modules:amd64 (2.1.27+dfsg-2) ...
Selecting previously unselected package publicsuffix.
Preparing to unpack .../21-publicsuffix_20200729.1725-1_all.deb ...
Unpacking publicsuffix (20200729.1725-1) ...
Setting up libkeyutils1:amd64 (1.6.1-2) ...
Setting up libpsl5:amd64 (0.21.0-1.1) ...
Setting up libssl1.1:amd64 (1.1.1g-1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.30.3 /usr/local/share/perl/5.30.3 /usr/lib/x86_64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up libbrotli1:amd64 (1.0.7-7) ...
Setting up libsasl2-modules:amd64 (2.1.27+dfsg-2) ...
Setting up libnghttp2-14:amd64 (1.41.0-3) ...
Setting up krb5-locales (1.17-10) ...
Setting up libldap-common (2.4.50+dfsg-1) ...
Setting up libkrb5support0:amd64 (1.17-10) ...
Setting up libsasl2-modules-db:amd64 (2.1.27+dfsg-2) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2+b2) ...
Setting up libk5crypto3:amd64 (1.17-10) ...
Setting up libsasl2-2:amd64 (2.1.27+dfsg-2) ...
Setting up libssh2-1:amd64 (1.8.0-2.1) ...
Setting up libkrb5-3:amd64 (1.17-10) ...
Setting up openssl (1.1.1g-1) ...
Setting up publicsuffix (20200729.1725-1) ...
Setting up libldap-2.4-2:amd64 (2.4.50+dfsg-1+b1) ...
Setting up ca-certificates (20200601) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.30.3 /usr/local/share/perl/5.30.3 /usr/lib/x86_64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Updating certificates in /etc/ssl/certs...
126 added, 0 removed; done.
Setting up libgssapi-krb5-2:amd64 (1.17-10) ...
Setting up libcurl4:amd64 (7.68.0-1+b1) ...
Setting up curl (7.68.0-1+b1) ...
Processing triggers for libc-bin (2.31-2) ...
Processing triggers for ca-certificates (20200601) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
root@9e2946ef8be8:/# curl -1sLf \
>   'https://dl.cloudsmith.io/public/machinekit/machinekit/cfg/setup/bash.deb.sh' \
>   | bash
Executing the  setup script for the 'machinekit/machinekit' repository ...

   OK: Checking for required executable 'curl' ...
   OK: Checking for required executable 'apt-get' ...
   OK: Detecting your OS distribution and release using system methods ...
 ^^^^: OS detected as: debian  ()
 FAIL: Checking for apt dependency 'apt-transport-https' ...
   OK: Updating apt repository metadata cache ...
   OK: Attempting to install 'apt-transport-https' ...
 FAIL: Checking for apt dependency 'gnupg' ...
   OK: Attempting to install 'gnupg' ...
   OK: Importing 'machinekit/machinekit' repository GPG key into apt ...
 FAIL: Checking if upstream install config is OK ...
 >>>>: Failed to fetch configuration for your OS distribution release/version.
 >>>>: It looks like we don't currently support your distribution release and
 >>>>: version. This is something that we can fix by adding it to our list of
 >>>>: supported versions (see contact us below), or you can manually override
 >>>>: the values below to an equivalent distribution that we do support:
 >>>>: Here is what *was* detected/provided for your distribution:
 >>>>:
 >>>>:   distro:   'debian'
 >>>>:   version:  ''
 >>>>:   codename: ''
 >>>>:   arch:     'x86_64'
 >>>>:
 >>>>: You can force this script to use a particular value by specifying distro,
 >>>>: version, or codename via environment variable. E.g., to specify a distro
 >>>>: such as Ubuntu/Xenial (16.04), use the following:
 >>>>:
 >>>>:   <curl command> | distro=ubuntu version=16.04 codename=xenial sudo bash
 >>>>:
 >>>>: You can contact us at Cloudsmith (support@cloudsmith.io) for further assistance.

root@9e2946ef8be8:/# 
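The detection most likely fails because Bullseye is still Debian testing at this point, so /etc/os-release carries no version number or codename for the script to pick up. Following the script's own hint, a possible workaround would be to force the values manually - whether Cloudsmith already accepts 'bullseye' on its side is an assumption, so substituting a currently supported release might be needed instead:

curl -1sLf 'https://dl.cloudsmith.io/public/machinekit/machinekit/cfg/setup/bash.deb.sh' \
  | distro=debian version=11 codename=bullseye bash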
cerna commented 4 years ago

Looking again at the Travis CI configuration (which is now in a very precarious state and in need of rework), I started looking into the remote API, specifically at starting a new build by sending data to a remote endpoint, as described in the Triggering builds documentation.

In practical terms, it would mean two "jobs" or "builds" per git push or opened pull request. The first one (specified in the .travis.yml file in the root of the given repository) would create the build config from debian-distro-settings.json or other well-known sources and trigger the second job through the API. That way one can hopefully (famous last words) get the same functionality as the current Github Actions workflow (which I take as the model).
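
A rough sketch of the triggering call, following the Triggering builds documentation (the API token, the repository slug and the inline config are placeholders here, not the final values):

# Hypothetical second-stage trigger; $TRAVIS_API_TOKEN and the "config"
# object would come from the first job's generation step.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token $TRAVIS_API_TOKEN" \
  -d '{ "request": { "branch": "master", "config": { "merge_mode": "replace", "script": "echo generated-build-goes-here" } } }' \
  "https://api.travis-ci.com/repo/machinekit%2Fmachinekit-hal/requests"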

Travis CI supports Importing Shared Build Configuration (I don't know for how long, but they still call it a beta version, so probably not that long), which is something to investigate if the Machinekit organization goes with a separate Machinekit-CI repository.

The structure of debian-distro-settings.json will also have to change to encompass the cross-building capability - in other words, from which BUILD architecture a given HOST architecture can be built. (This is actually determined by the packages available in the Debian repositories and by the fact that Machinekit's projects are so far gcc based; for Clang it would be different.)
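
To make that BUILD->HOST mapping concrete, this is roughly what one valid combination means in Debian Multi-Arch terms (cross-building for an arm64 HOST on an amd64 BUILD machine; a combination is only valid where the cross toolchain packages exist in the given suite):

# run inside the amd64 build container, in the Machinekit-HAL source tree
dpkg --add-architecture arm64
apt-get update
apt-get install -y crossbuild-essential-arm64   # gcc-aarch64-linux-gnu and friends
apt-get build-dep -y -a arm64 ./                # host-arch build dependencies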

Also, what must be decided is whether Machinekit wants to test 32-bit versions on 64-bit platforms (where possible without the use of QEMU) - that is, testing i386 on amd64 machines and armhf on most arm64 servers (not all arm64v8 processors support the arm32 instruction set). That will also need some form of representation in debian-distro-settings.json and changes to the build and test scripts.
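
As a data point for that decision: i386 userspace runs natively on an amd64 kernel, so the runtests could in principle be exercised in a 32-bit container without QEMU (the armhf-on-arm64 equivalent additionally depends on the CPU supporting AArch32):

docker run --rm i386/debian:buster dpkg --print-architecture   # prints: i386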

cerna commented 3 years ago

Well, Travis CI is slowly but surely going away as a hub for Open Source continuous integration. On the 2nd of November they stopped offering unlimited machine time for OSS and replaced it with a one-time trial allotment - once it is used up, you are done. (Well, there is some backdoor for OSS projects to ask for additional minutes, but it is handled on a per-request basis and projects have actually been turned down recently.) So, this is the end.

Machinekit as of now has about half of the trial minutes left.

Too bad - the Graviton2 based VMs were quite good and there is no alternative for them as of now.

I am going to limit the Travis builds to testing only the arm64 platform, to preserve the remaining minutes.