savageautomate opened this issue 4 years ago
For reference, the correct flags to gcc seem to be `-march=armv6+fp -mfloat-abi=hard -mfpu=vfp -marm`. The final `-marm` appears to be needed because, by default, the compiler emits Thumb-mode code, and that doesn't play well with VFP on the v6 architecture.
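A quick way to sanity-check what a toolchain actually emits (a sketch, assuming an `arm-linux-gnueabihf`-prefixed GCC new enough to accept the `+fp` suffix — as seen later in this thread, older cross-compilers reject it):

```sh
# Compile a trivial file with the proposed flags, then inspect the ARM
# build attributes of the .o to confirm v6, VFP, and ARM (not Thumb) code.
echo 'int main(void){return 0;}' > hello.c
arm-linux-gnueabihf-gcc -march=armv6+fp -mfloat-abi=hard -mfpu=vfp -marm -c hello.c
arm-linux-gnueabihf-readelf -A hello.o | grep -E 'Tag_CPU_arch|Tag_THUMB_ISA_use|Tag_FP_arch'
```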
Investigating further, it appears that while gcc is correctly building v6 code, the support libraries are built with the default v7 code. This would mean that the .o files are OK, but for some reason v7 code gets introduced when the .so is created.
If we're linking in pigpio at the same time (are we?) then that also needs to be compiled for v6.
> Investigating further, it appears that while gcc is correctly building v6 code, the support libraries are built with the default v7 code. This would mean that the .o files are OK, but for some reason v7 code gets introduced when the .so is created.
>
> If we're linking in pigpio at the same time (are we?) then that also needs to be compiled for v6.
Yes, we link against the dynamic library `libpigpio.so`. I don't believe there is any static linking. We do compile `libpigpio.so` as part of the build ... but only if it's not detected/previously compiled, so I bet that part of the build also needs these flags.
You would probably need to add `CFLAGS=$CARGS` here as well to build PIGPIO with the same options:
https://github.com/Pi4J/pi4j-v2/blob/75e1909006ef8fae8ad02d44f48096b368c634fd/libraries/pi4j-library-pigpio/src/main/native/build-libpigpio.sh#L62-L69
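A rough sketch of what that could look like; the `CARGS` value comes from the flags above, but the exact make invocation inside `build-libpigpio.sh` is an assumption:

```sh
# Pass the same ARMv6 flags through to pigpio's make. pigpio's Makefile
# honors CROSS_PREFIX for the compiler; note that a CFLAGS given on the
# make command line *replaces* the Makefile's own CFLAGS, so the usual
# flags (e.g. -O3 -fpic) may need to be re-added alongside the ARMv6 ones.
CARGS="-march=armv6 -mfloat-abi=hard -mfpu=vfp -marm"
make -C pigpio CROSS_PREFIX=arm-linux-gnueabihf- CFLAGS="$CARGS -O3 -fpic"
```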
You should be able to run a Maven clean, which would force the PIGPIO library to get rebuilt:
`mvn clean install -Pnative`
If all this works by just adding the needed compiler flags, that would be terrific and a very simple fix!
Thanks, Robert
I started testing these flags on my side and got compile errors.
I think the reason it compiles with the flags you suggested (`-march=armv6+fp -mfloat-abi=hard -mfpu=vfp -marm`) for you is that you are compiling directly on a Raspberry Pi. When I compile with the Docker image, which uses ARM cross-compiler toolchains pulled from Ubuntu's APT repository, I get this:
[INFO] [exec] arm-linux-gnueabihf-gcc: note: valid arguments to '-march=' are: armv2 armv2a armv3 armv3m armv4 armv4t armv5 armv5e armv5t armv5te armv5tej armv6 armv6-m armv6j armv6k armv6kz armv6s-m armv6t2 armv6z armv6zk armv7 armv7-a armv7-m armv7-r armv7e-m armv7ve armv8-a armv8-a+crc armv8-m.base armv8-m.main armv8-m.main+dsp armv8.1-a armv8.2-a armv8.2-a+dotprod armv8.2-a+fp16 armv8.2-a+fp16+dotprod iwmmxt iwmmxt2 native; did you mean 'armv6'?
[INFO] [exec] make: *** [com_pi4j_library_pigpio_internal_PIGPIO.o] Error 1
So, I think I'm still going to have to investigate alternative toolchains for the build.
I was able to get these compiler flags working: `-march=armv6 -mfloat-abi=hard -mfpu=vfp -marm`
You can test it from this branch, if you like: https://github.com/Pi4J/pi4j-v2/tree/issue/%2328
I have not tested on hardware. I don't have the hardware readily accessible at the moment.
I've tried that branch and it doesn't work. The issue is in the function `register_tm_clones`, which is part of the C++ runtime support that appears to be burnt into the .so file when it's created. The compiler is just using the system one, which, for 32-bit ARM, assumes v7.
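One way to confirm this diagnosis (a sketch: `register_tm_clones` comes from the compiler's own startup objects, so checking the toolchain's `crtbegin.o` shows what gets baked into every `.so` it produces):

```sh
# Ask the cross-compiler which crtbegin.o it links in, then read its ARM
# build attributes. A stock Debian/Ubuntu cross toolchain will typically
# report a v7 CPU arch here, which then leaks into the final .so.
CRT=$(arm-linux-gnueabihf-gcc -print-file-name=crtbegin.o)
arm-linux-gnueabihf-readelf -A "$CRT" | grep Tag_CPU_arch
```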
Some history here. Debian, from whence we get the compiler, does not support ARMv6, especially with hard FP. This is why the very first Pi OS was Debian with soft FP, as that used to be supported. Raspbian was a big undertaking to rebuild the universe based on a v6 architecture with hardware floating point. Because the cross-compiler's libraries assume v7, it all goes wrong.
So, it looks like the only way this'll work is to use the cross-compiler in Docker that was referenced from wherever it was.
The GCC compiler in the RaspberryPi Tools repo that I was originally using is pretty old. So maybe first we can try the one published here and see if it works for ARMv6+FP.
I'll try to get this working soon.
@hackerjimbo
Well, I've been struggling with this all day. I believe I have it working using the original RaspberryPi Tools cross-compiler toolchains when building with Docker ... but only on `x86_64` machines. This cross-compiler toolchain does not support running on ARM. I also have not found a toolchain that runs on `aarch64` and compiles for ARMv6.
Do you have an `x86_64` machine you can build from to test the ARMv6 compatibility?
`mvn clean install -Pnative,docker`
Thanks, Robert
@savageautomate
Is there a specific reason you used the Raspberry Pi Tools rather than the other cross-compiler at https://github.com/Pro/raspi-toolchain ? Also, I'll have a go with both on an aarch64 to see what happens. I'm surprised the RaspberryPi Tools one didn't work on aarch64.
How does the toolchain know which docker image to use and how to power it? I'll have to investigate!
OK, just figured it out. I'll have a play and get back to you. I'd like to be able to have the whole thing Pi-hosted, even if it means insisting on the 64-bit OS. It would perhaps be nice to be able to build on 32-bit, in the knowledge that it could only make the 32-bit version.
@hackerjimbo,
> Is there a specific reason you used the Raspberry Pi Tools rather than the other cross-compiler at https://github.com/Pro/raspi-toolchain ?
Well, I really just wanted to get it working again with a known working compiler and then try to update it to a newer version. I did look into this one and it does not provide an `armhf` (ARMv6) cross-compiler for the `aarch64` platform. At least not as far as I could tell from reading the documentation.
> Also, I'll have a go with both on an aarch64 to see what happens. I'm surprised the RaspberryPi Tools one didn't work on aarch64.
The RaspberryPi Tools seem to only include build tools compiled as `x86` and `x64` binaries. So the trick will be to figure out a way to build for ARMv6 on the `aarch64` platform. That's where I'm stuck at the moment. I'm sure we could build the toolchain ourselves, but that is a bit more than I was trying to take on at the moment.
> How does the toolchain know which docker image to use and how to power it? I'll have to investigate!
Well, I publish Docker images for both `linux/amd64` and `linux/arm64` platforms. These Docker images were built by installing the cross-compiling toolchain packages `gcc-aarch64-linux-gnu` and `gcc-arm-linux-gnueabihf` using `apt-get` from their respective repositories. When your local Docker attempts to get the Pi4J Docker Builder image from the DockerHub repo, it will pull the correct one based on the host's architecture. (Some Docker magic.)
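For the curious, the "magic" is a multi-arch manifest. A hypothetical build/push — the image name here is illustrative, not necessarily the real Pi4J repo name:

```sh
# Build one tag covering both platforms and push the multi-arch manifest;
# Docker clients then pull the variant matching the host architecture.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t pi4j/pi4j-builder-native:latest \
  --push .
```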
Our Maven build scripts support the `-Pdocker` profile, which will call the `build-docker.sh` script in the native project sources. The `docker` profile is enabled by default on Windows and macOS, but not currently on Linux systems.
> I'd like to be able to have the whole thing Pi-hosted, even if it means insisting on the 64-bit OS.
Me too.
> It would perhaps be nice to be able to build on 32-bit, in the knowledge that it could only make the 32-bit version.
We could, with some refactoring of the build logic. At the moment we could build for `armhf` on a 32-bit Pi OS and `aarch64` on a 64-bit Pi OS. The trouble comes from trying to build both and satisfy all the build artifacts that are intended to get embedded in the JAR file.
You're a star @savageautomate! The new library works fine on my Pi Zero W. Well, almost: we're back to the `PIGPIO ERROR: INVALID I2C PAYLOAD DATA LENGTH [128]; Valid range: 0-32`, but that's been fixed in another branch.
I'll celebrate by seeing if I can create an aarch64 docker image that can compile for v6.
@hackerjimbo,
> I'll celebrate by seeing if I can create an aarch64 docker image that can compile for v6.
If you can just work out the steps to getting a working `armhf`-for-ARMv6 toolchain running on `aarch64`, then I can create the Docker container, or integrate the steps into my existing container image.
> The new library works fine on my Pi Zero W. Well, almost: we're back to the `PIGPIO ERROR: INVALID I2C PAYLOAD DATA LENGTH [128]; Valid range: 0-32`, but that's been fixed in another branch.
Now that we have a working ARMv6 build, albeit only compiling on x64 hosts, I can work on getting that fix in place and wrapped up.
Thanks, Robert
REOPENED -- still need to work out compiling from the Raspberry Pi (ARM) platform.
Some good news: I've managed to build the cross-compiler on aarch64 based on https://github.com/Pro/raspi-toolchain with some trivial tweaks. The issue with that is that it builds the compiler and then expects you to copy it out of the image and into the real world to use there. Having said that, I compiled a C++ hello world inside the container, copied it out, and it ran fine on a Pi Zero.
Now all I need to do is tweak the container so that it looks like the one you use to run the cross-compiler inside and we're good to go. That should keep me quiet over the weekend!
For reference, the reason I think it's worthwhile getting all of this to work on the v6 architecture (Pi 1 and Zero) is that while there are a lot of Pi 1s still out there (I use many of my old ones for IoT), the Pi Zero (W and plain) is also highly suited to IoT work. I think it's worth this effort to support the Zeros.
@hackerjimbo,
Here is the current Docker image being used to build the native libraries.
https://github.com/Pi4J/pi4j-docker/tree/master/pi4j-builder-native
It's the same image for both `x86_64` and `aarch64`. We can add the `aarch64`-compiled toolchain (directory) to the container (maybe under `/opt/`) and set the system `PATH` to include the appropriate toolchain at build time for the specific architecture.
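A minimal sketch of that staging step in the builder's Dockerfile — the archive name and paths are assumptions:

```dockerfile
# Unpack the aarch64-hosted ARMv6 cross-compiler under /opt and put it
# on the PATH so the build scripts pick it up transparently.
COPY cross-pi-gcc.tar.gz /tmp/
RUN mkdir -p /opt/cross-pi-gcc \
 && tar xzf /tmp/cross-pi-gcc.tar.gz -C /opt/cross-pi-gcc --strip-components=1 \
 && rm /tmp/cross-pi-gcc.tar.gz
ENV PATH="/opt/cross-pi-gcc/bin:${PATH}"
```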
You can see that, with my latest changes, the toolchain for x64 is getting added here:
If we get https://github.com/Pro/raspi-toolchain working for `aarch64`, I wonder if we should also update the `x86_64` build to use the same version.
I can confirm that I have got https://github.com/Pro/raspi-toolchain working on `aarch64`. However, it expects you to copy out the resulting compiler rather than use it inside the container. It also leaves all the build files lying around, so you end up with a massive 8.4 GB beast of an image.
Lots of options here. We could build and then export the cross-compiler. However, as it's built in a Debian/Ubuntu environment, it may well not like being in “the outside world”.
We could leave it inside its container and, with a few tweaks, compile inside the container. I think this would be the most portable. However, I'd need to make sure that it does what the pi4j-v2 build system thinks it should. In other words, it should look the same to the build system as the existing setup.
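A sketch of what "compile inside the container" could look like from the build system's side — the image and source names here are assumptions:

```sh
# Mount the source checkout into the container and invoke the toolchain
# in place, so nothing ever has to be copied out of the image.
docker run --rm -v "$PWD":/build -w /build raspi-toolchain:aarch64 \
  arm-linux-gnueabihf-gcc -march=armv6 -mfloat-abi=hard -mfpu=vfp -marm \
  -o hello hello.c
```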
The container is huge, though. It may well be possible to trim it down (quite a lot), but the compiler is still quite large. On investigation I found that it builds the Fortran compiler as well (what? are they possessed?), so removing that will save some storage. I could then produce another image from that one, without all the build stuff, and perhaps that could be uploaded.
@hackerjimbo
Is there a way to extract it all into a .TAR.GZ that we host somewhere in GitHub for download? The reason I ask is that we currently support building using Docker, building directly on a Linux system (Ubuntu/Debian-based), or building on the RPi natively without Docker. The current build scripts have an "install-prerequsites.sh" script to prepare the local environment for cross-compiler builds. Albeit, that part is still broken for ARMv6 building.
I can extract the compiler, but it is huge. The entire docker image is even bigger: 8.4 GB. What were you looking for? Or would you just want the docker build file? That's the easy bit!
@hackerjimbo
The raspi-toolchain project publishes releases that are about 500 MB: https://github.com/Pro/raspi-toolchain/releases
Is something like that possible? Or is there more needed?
We could move to a Docker-only build system for native artifacts and totally rely on the Docker builder images. Of course, the Docker image still has to be downloaded to a user's build machine -- and if that's a Raspberry Pi, downloading an 8 GB+ image will take a while and consume a very large amount of available SD card space.
If you would like to share the Dockerfile for building the toolchain, I can have a look and try it here. Maybe that will help me better understand all the details.
Ideally, if it's possible to extract the toolchain from the container and make it somewhat portable, that would be best. Then it could be downloaded and used directly on an RPi4B 64-bit OS or used in our Pi4J Builder images.
So I built the cross-compiler without the vile abomination that is Fortran, and it's now “only” 7.5 GB. No problem, I thought: I'll add a stage that removes the source code and build areas and we'll be fine. I did this and it didn't reduce the image size.
I believe this is because of Docker's layered file system: each build step creates a new layer, and deleting files in a later step only hides them -- the earlier layers still ship with the image, so its size doesn't go down.
Does anyone have any ideas how to fix this? One option would be to copy out the actual cross-compiler (some 1.1 GB) and then load it into a plain Debian base image. Bearing in mind the compression used on Docker Hub, it would end up about the same size on there as the existing toolchain.
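For what it's worth, that "copy out into a plain Debian base image" option maps naturally onto a Docker multi-stage build, which sidesteps the layering problem entirely. A minimal sketch, with the stage name, base image, and paths assumed:

```dockerfile
# Stage 1: the huge toolchain build; its layers never reach the final image.
FROM debian:buster AS toolchain-build
# ... all the existing build steps, installing into /opt/cross-pi-gcc ...

# Stage 2: start clean and copy in only the finished compiler (~1.1 GB).
FROM debian:buster
COPY --from=toolchain-build /opt/cross-pi-gcc /opt/cross-pi-gcc
ENV PATH="/opt/cross-pi-gcc/bin:${PATH}"
```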
But is there a better way?
By the way, the compiler is easy to extract from the docker image. In fact, that's what the people who made it recommend. It puts everything in `/opt/cross-pi-gcc` (though that's configurable).
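Extracting it is standard Docker and doesn't even require running the container — a sketch, with the image name assumed:

```sh
# Create (but don't start) a container from the image, copy the toolchain
# out of it, then package it up as a relocatable tarball.
CID=$(docker create raspi-toolchain:aarch64)
docker cp "$CID":/opt/cross-pi-gcc ./cross-pi-gcc
docker rm "$CID"
tar czf cross-pi-gcc-aarch64.tar.gz cross-pi-gcc
```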
FYI ... it's probably time to re-visit this issue and get it resolved in advance of a v2.0 release.
I've played around a bit. Should I do some more? I can build the cross-compiler…
It looks like some time back, I switched the cross-compiler docker container to use the RPI Tools compiler. So maybe this issue is now resolved in the latest builds? I have not tested it on ARMv6, but the latest builds are now published here:
@hackerjimbo
I had to re-read the thread to try and figure out where we stand ...
So let me ask: is the only outstanding issue trying to build the native projects directly on a 64-bit Raspberry Pi, using a cross-compiler that will build binaries compatible with the ARMv6 architecture? And the current docker compiler images won't work on the 64-bit Raspberry Pi because the RPI Tools version of the cross-compiler does not support running on `arm64` platforms.
Thanks, Robert
I think that's it. I can go back and play. I think I got a docker image with the compiler in it on arm64 that compiled for v6 32-bit. Actually, it appears that it builds the compiler, which could then be extracted to run outside the docker image as a cross-compiler.
If we can get the cross-compiler portable, I can probably easily get it pushed into the Pi4J builder images.
I can send you the docker file that builds it. It would be possible to extract the install directories and run them outside of the docker image.
> I can send you the docker file…
Sure, forward it along and I'll give it a try ... hopefully tomorrow.
@hackerjimbo
What platform/architecture did you run this container on to perform the build?
A Pi 4 in 64-bit mode. However, I suspect it'll run on anything. Give it a poke!
Yes! You've got some cached files. Drop a `--no-cache` on that bad boy (on the docker build command line) and see what happens.
> Yes! You've got some cached files. Drop a `--no-cache` on that bad boy (on the docker build command line) and see what happens.
:-)
I deleted my earlier post because I figured out some of my issues :-). Building on Pi now --- waiting for it to complete.
Just a small update. I was able to successfully build the cross-compiler on ARM64. I still need to test it and package it up for re-use in the Pi4J build process.
Awesome news!
As noted in issue #27, the native builds for ARMv6 (Pi Zeros and all Generation 1 models) are not working.
The 32-bit binaries compiled are not ARMv6 compatible.
This issue affects the following Raspberry Pi models (from https://en.wikipedia.org/wiki/Raspberry_Pi):
Changes to native build happened here: https://github.com/Pi4J/pi4j-v2/commit/a7c29d098920a11d99217fe2b4a24c596709e9cf
I switched from the ARM compiler toolchain available from RaspberryPi Tools to a newer GCC version of the 32-bit cross-compiler (`gcc-arm-linux-gnueabihf`) toolchain available in the APT repositories.
REF: https://github.com/Pro/raspi-toolchain
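For reference, those APT cross-compiler packages install with a plain `apt-get` (note that they default to generating ARMv7 code, which is the root of this issue):

```sh
# Debian/Ubuntu cross toolchains used by the builder images:
sudo apt-get install gcc-arm-linux-gnueabihf   # 32-bit armhf cross-compiler
sudo apt-get install gcc-aarch64-linux-gnu     # 64-bit arm64 cross-compiler
```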
I probably need to instrument one of these custom toolchains in the build logic for building 32-bit, rather than the default linaro `gcc-arm-linux-gnueabihf`.