Installation of cross-compiled binaries is a very complicated problem that, to my knowledge, has only been addressed by Debian so far. A rationale for what they are doing can be found on the Debian wiki.
Basically, instead of the traditional structure `$PREFIX/lib`, `$PREFIX/include`, `$PREFIX/bin`, Debian uses `$PREFIX/lib/(triplet)`, `$PREFIX/include/(triplet)`, etc., where `triplet` is the system type used by Autotools for cross-compilation. Examples relevant for us would be `x86_64-redhat-linux` (Red Hat omits the `-gnu` suffix) and `arm-linux-gnueabihf`.
Contrary to Debian, we are not installing our software to `/usr`, so we can afford to change the `$PREFIX` (we already do it). I would hence not use exactly the same structure as Debian. Here's my proposal for packages supporting several architectures:
- Add a `TARGET?=$(shell gcc -dumpmachine)` variable to the Makefiles.
- `CC?=$TARGET-gcc` (also works for the system GCC)
- `CXX?=$TARGET-g++`
- Check that `$CC` and `$CXX` exist before doing anything else
- Make the install paths depend on `$TARGET`. This is easy for packages following a similar pattern, but notably CC7 packages don't:
  - Change the binary install path to `/opt/$PACKAGE/$TARGET/bin/`
  - Change the library install path to `/opt/$PACKAGE/$TARGET/lib/`
  - Change the header install path to `/opt/$PACKAGE/$TARGET/include/`
A script would be provided in `/opt/$PACKAGE` to add the correct paths to the environment, based on `gcc -dumpmachine` or a triplet passed as an argument.
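For concreteness, a minimal sketch of what such a Makefile fragment could look like (GNU make assumed; `PACKAGE` and the `*_DIR` variable names are illustrative, not an existing implementation):

```make
# Triplet of the machine we are building for; defaults to the native one
TARGET ?= $(shell gcc -dumpmachine)

# GNU make predefines CC/CXX, so only replace them when the user has not set them
ifeq ($(origin CC),default)
  CC := $(TARGET)-gcc
endif
ifeq ($(origin CXX),default)
  CXX := $(TARGET)-g++
endif

# Fail early if the requested toolchain is not available
ifeq ($(shell command -v $(CC) 2>/dev/null),)
  $(error $(CC) not found; is the toolchain for $(TARGET) installed?)
endif

# Per-triplet install layout
PACKAGE     ?= xhal
BIN_DIR      = /opt/$(PACKAGE)/$(TARGET)/bin
LIB_DIR      = /opt/$(PACKAGE)/$(TARGET)/lib
INCLUDE_DIR  = /opt/$(PACKAGE)/$(TARGET)/include
```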
The general structure outlined above would allow one to install packages for `arm` alongside packages for `x86_64`, providing development files and libraries to link against. For symmetry, it can be reproduced on the CTP7 under `/mnt_persistent/opt`; the modules can use it by specifying the correct `RPATH`. (Note that the `RPATH` will likely need to be different on the CTP7 and on the PC.)
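As a sketch of the RPATH difference (reusing the `TARGET`/`PACKAGE` variables from the Makefile sketch above; the exact paths are illustrative and follow the layout proposed here):

```make
# RPATH differs between the CTP7 (installed under /mnt_persistent/opt) and the PC (/opt)
ifeq ($(TARGET),arm-linux-gnueabihf)
  XHAL_RPATH := /mnt_persistent/opt/$(PACKAGE)/$(TARGET)/lib
else
  XHAL_RPATH := /opt/$(PACKAGE)/$(TARGET)/lib
endif

LDFLAGS += -Wl,-rpath,$(XHAL_RPATH)
```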
Now about this repo, it would create the following packages:
- `xhal-common` with the script to set environment variables and all architecture-independent files located outside `/opt`.
- Built by running the `Makefile` twice:
  - `xhal-base` with target `x86_64-redhat-linux`
  - `xhal-base-devel` with target `x86_64-redhat-linux`
  - `xhal-base-ctp7` with target `arm-linux-gnueabihf`
  - `xhal-base-devel-ctp7` with target `arm-linux-gnueabihf`
  - `xhal-server-tools-ctp7` with target `arm-linux-gnueabihf`
  - `xhal-server-tools-devel-ctp7` with target `arm-linux-gnueabihf`
  - `xhal-client-tools` with target `x86_64-redhat-linux`
  - `xhal-client-tools-devel` with target `x86_64-redhat-linux`
In order to support the GLIB, the `server-tools` packages should also be buildable for `x86_64-redhat-linux` (without the `-ctp7` suffix). I'm not sure how much support `rpm` has for multiple architectures (eg if it is possible to install a package for `arm` on `i686`?); this may affect the package names.
eg if it is possible to install a package for `arm` on `i686`

It is not... (if standard practices are followed), as RPM has support for arch-dependent and `noarch` packages, but I don't see this as a problem, because the `arm` `devel` packages will never need to be installed on the CTP7, only on the development PC, so it ends up only being a problem of build order and packaging (and whether we just have one `devel` package that allows host development alongside cross-compiling development, or have two "`devel`" packages, but both packaged only for arch `x86_64`).
Related to that (and to the core problem this issue is trying to sort out), is the content of your proposed `xhal-<name>` and `xhal-<name>-ctp7` different besides the architecture? If not, there is really no need for a distinction, as the `-ctp7` package will only be installed (modulo the proviso above re `-devel` packages) on an `arm` arch, and we can just use the `Arch` property of RPM (which would also apply to the `xhal-server-tools` built for the PC emulator).

Might be good to just sketch out the package contents for the proposed packages
- Change the binary install path to `/opt/$PACKAGE/$TARGET/bin/`
- Change the library install path to `/opt/$PACKAGE/$TARGET/lib/`
- Change the header install path to `/opt/$PACKAGE/$TARGET/include/`

This is reasonable to me, but based on the comments above, I would only do this for the cross installable packages, e.g., the `-ctp7-devel` packages
eg if it is possible to install a package for arm on i686
It is not... (if standard practices are followed), as RPM has support for arch-dependent and noarch packages, but I don't see this as a problem because the arm devel packages will never need to be installed on the CTP7, only on the development PC
On Debian you can ask for the package for another architecture to be installed, eg `firefox:i386`. This avoids duplicating packages between architectures and provides great flexibility. If it's not possible with RPM, we need to use different names for PC and CTP7 packages.
Related to that (and to the core problem this issue is trying to sort out), is the content of your proposed `xhal-<name>` and `xhal-<name>-ctp7` different besides the architecture? If not, there is really no need for a distinction, as the `-ctp7` package will only be installed (modulo the proviso above re `-devel` packages) on an `arm` arch, and we can just use the `Arch` property of RPM (which would also apply to the `xhal-server-tools` built for the PC emulator)
The package contents would be exactly the same (same libs, same exported symbols). The ARM libraries are required on x86 for cross-compilation, and traditionally `-devel` packages do not include binaries.
Might be good to just sketch out the package contents for the proposed packages
xhal-common
/opt/xhal/xhalenv.sh
/opt/xhal/xhalenv.csh
xhal-base: xhal-common
/opt/xhal/x86_64-redhat-linux/lib/libxhal-base.so
xhal-base-devel : xhal-base
/opt/xhal/x86_64-redhat-linux/include/xhal/.../stuff.h # If anything is configured at compile time, this is the x86 version
xhal-base-ctp7: xhal-common
/opt/xhal/arm-linux-gnueabi/lib/libxhal-base.so
xhal-base-devel-ctp7: xhal-base-ctp7
/opt/xhal/arm-linux-gnueabi/include/xhal/.../stuff.h # If anything is configured at compile time, this is the ARM version
xhal-server-tools-ctp7: xhal-base-ctp7
/opt/xhal/arm-linux-gnueabi/lib/libxhal-server-tools.so
xhal-server-tools-devel-ctp7: xhal-server-tools-ctp7 xhal-devel-ctp7
/opt/xhal/arm-linux-gnueabi/include/lmdb++.h
/opt/xhal/arm-linux-gnueabi/include/xhal/LMDB.h
- Change the binary install path to `/opt/$PACKAGE/$TARGET/bin/`
- Change the library install path to `/opt/$PACKAGE/$TARGET/lib/`
- Change the header install path to `/opt/$PACKAGE/$TARGET/include/`
This is reasonable to me, but based on the comments above, I would only do this for the cross installable packages, e.g., the -ctp7-devel packages
I think it's easier not to have a privileged arch and keep the x86_64 `$PREFIX` clean, but I can see no technical argument against it.
eg if it is possible to install a package for arm on i686
It is not... (if standard practices are followed), as RPM has support for arch-dependent and noarch packages, but I don't see this as a problem because the arm devel packages will never need to be installed on the CTP7, only on the development PC
On Debian you can ask for the package for another architecture to be installed, eg `firefox:i386`. This avoids duplicating packages between architectures and provides great flexibility. If it's not possible with RPM, we need to use different names for PC and CTP7 packages.
Actually, by passing an argument to the `rpm` command it is possible to ignore the packaged architecture, but I'm not sure whether that works when installing through a package manager. In any event, where that package is installed is entirely determined by the RPM itself, and while toolchains seem to have adopted a hierarchy similar to the one outlined, I don't find this particularly beneficial for our particular situation, and I'll attempt to lay out the reasoning with my proposal below.
Related to that (and to the core problem this issue is trying to sort out), is the content of your proposed `xhal-<name>` and `xhal-<name>-ctp7` different besides the architecture? If not, there is really no need for a distinction, as the `-ctp7` package will only be installed (modulo the proviso above re `-devel` packages) on an `arm` arch, and we can just use the `Arch` property of RPM (which would also apply to the `xhal-server-tools` built for the PC emulator)

The package contents would be exactly the same (same libs, same exported symbols). The ARM libraries are required on x86 for cross-compilation, and traditionally `-devel` packages do not include binaries.

Might be good to just sketch out the package contents for the proposed packages

xhal-common
/opt/xhal/xhalenv.sh
/opt/xhal/xhalenv.csh
xhal-base: xhal-common
/opt/xhal/x86_64-redhat-linux/lib/libxhal-base.so
xhal-base-devel: xhal-base
/opt/xhal/x86_64-redhat-linux/include/xhal/.../stuff.h # If anything is configured at compile time, this is the x86 version
xhal-base-ctp7: xhal-common
/opt/xhal/arm-linux-gnueabi/lib/libxhal-base.so
xhal-base-devel-ctp7: xhal-base-ctp7
/opt/xhal/arm-linux-gnueabi/include/xhal/.../stuff.h # If anything is configured at compile time, this is the ARM version
xhal-server-tools-ctp7: xhal-base-ctp7
/opt/xhal/arm-linux-gnueabi/lib/libxhal-server-tools.so
xhal-server-tools-devel-ctp7: xhal-server-tools-ctp7 xhal-devel-ctp7
/opt/xhal/arm-linux-gnueabi/include/lmdb++.h
/opt/xhal/arm-linux-gnueabi/include/xhal/LMDB.h
Here's my new counter proposal with the following rationale:
1) What we have is a two-level package with
 1.a) a common core that can run on potentially several architectures (`x86`, `arm`)
 1.b) a "remote" functionality that can run on potentially several architectures (`x86`, `arm`)
 1.c) a set of applications that will only ever run on a single architecture (`x86`)
2) We will probably only ever cross-compile, but the includes will always be the same between architectures
 2.a) non-native libraries are needed for linking during cross-compilation development (to me, semantically, these should be provided with a `-devel` package, as is in fact done for `lmdb`; OK, it's just a symlink)
With these guiding principles (and a bit of restructuring of some things) I was able to generate the following packages (I think we can rethink the names a bit, but just using the terminology currently proposed by @lmoureaux)
`xhal` (`x86` and `arm`, but on `arm` the prefix is `/mnt/persistent/xhal`)
xhal /opt/xhal/lib/libxhal-base.so lrwxrwxrwx
xhal /opt/xhal/lib/libxhal-base.so.3.2 lrwxrwxrwx
xhal /opt/xhal/lib/libxhal-base.so.3.2.2 -rwxr-xr-x
`xhal-devel` (`x86` only)
xhal-devel /opt/xhal/include/common drwxr-xr-x
xhal-devel /opt/xhal/include/common/xhal drwxr-xr-x
xhal-devel /opt/xhal/include/common/xhal/XHALDevice.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/XHALInterface.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc drwxr-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/call.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/call_backup.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/common.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/compat.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/exceptions.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/helper.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/register.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/wiscRPCMsg.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/rpc/wiscrpcsvc.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils drwxr-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils/Exception.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils/PyTypes.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils/XHALXMLNode.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils/XHALXMLParser.h -rw-r-xr-x
xhal-devel /opt/xhal/include/common/xhal/utils/test.h -rw-r-xr-x
xhal-devel /opt/xhal/lib/arm/libxhal-base.so lrwxrwxrwx
xhal-devel /opt/xhal/lib/arm/libxhal-base.so.3.2 lrwxrwxrwx
xhal-devel /opt/xhal/lib/arm/libxhal-base.so.3.2.2 -rwxr-xr-x
`xhal-server` (`x86` and `arm`, but on `arm` the prefix is `/mnt/persistent/xhal`)
xhal-server /opt/xhal/lib/libxhal-server.so lrwxrwxrwx
xhal-server /opt/xhal/lib/libxhal-server.so.3.2 lrwxrwxrwx
xhal-server /opt/xhal/lib/libxhal-server.so.3.2.2 -rwxr-xr-x
`xhal-server-devel` (`x86` only)
xhal-server-devel /opt/xhal/include/server drwxr-xr-x
xhal-server-devel /opt/xhal/include/server/lmdb++.h -rw-r-xr-x
xhal-server-devel /opt/xhal/include/server/xhal drwxr-xr-x
xhal-server-devel /opt/xhal/include/server/xhal/LMDB.h -rw-r-xr-x
xhal-server-devel /opt/xhal/lib/arm/libxhal-server.so lrwxrwxrwx
xhal-server-devel /opt/xhal/lib/arm/libxhal-server.so.3.2 lrwxrwxrwx
xhal-server-devel /opt/xhal/lib/arm/libxhal-server.so.3.2.2 -rwxr-xr-x
`xhal-client` (`x86` only)
xhal-client /opt/xhal/lib/libxhal-rpcman.so lrwxrwxrwx
xhal-client /opt/xhal/lib/libxhal-rpcman.so.3.2 lrwxrwxrwx
xhal-client /opt/xhal/lib/libxhal-rpcman.so.3.2.2 -rwxr-xr-x
xhal-client /opt/xhal/lib/xhalpy.so lrwxrwxrwx
xhal-client /opt/xhal/lib/xhalpy.so.3.2 lrwxrwxrwx
xhal-client /opt/xhal/lib/xhalpy.so.3.2.2 -rwxr-xr-x
`xhal-client-devel` (`x86` only)
xhal-client-devel /opt/xhal/include/client drwxr-xr-x
xhal-client-devel /opt/xhal/include/client/xhal drwxr-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc drwxr-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc/calibration_routines.h -rw-r-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc/daq_monitor.h -rw-r-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc/optohybrid.h -rw-r-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc/utils.h -rw-r-xr-x
xhal-client-devel /opt/xhal/include/client/xhal/rpc/vfat3.h -rw-r-xr-x
`xhal-debuginfo`/`xhal-debugsources` (`x86` for sure, possibly also for `arm`?)
Only one package for the whole set; separate debuginfos/debugsources are added in a later version of RPM.
- Change the binary install path to `/opt/$PACKAGE/$TARGET/bin/`
- Change the library install path to `/opt/$PACKAGE/$TARGET/lib/`
- Change the header install path to `/opt/$PACKAGE/$TARGET/include/`
This is reasonable to me, but based on the comments above, I would only do this for the cross installable packages, e.g., the -ctp7-devel packages
I think it's easier not to have a privileged arch and keep the `x86_64` `$PREFIX` clean, but I can see no technical argument against it.
In my mind it's not "privileged" so much as "native".
For more info, what I've done is the following:
* Moved the `xhal` directory into the project root
* `xhal` has `include` and `src` directories where all headers and sources live
* Subdirectories `common`, `server`, and `client` of sources/headers are separated, determined by which libraries/RPMs they feed into
* `xhalcore` and `xhalarm` have a `Makefile`, `spec.template`, and `include/packageinfo.h` (could imagine that this would be changed to ensure proper `Requires`/`BuildRequires` for subpackages...)
* The `Makefile` sets up the include/source dir to point to `xhal/{include,src}`, adding appropriate subpackages as necessary
* `xhalcore` and `xhalarm` could perhaps also be folded into the `xhal` directory, but I don't feel strongly about that; the reason to do so would be, as mentioned earlier, to slim down the duplication

I tested that an installed package structure like this works fine with developing `ctp7_modules` (though the second half of this packaging issue will be getting the `ctp7_modules` package structured for the `devel` part that is required on the `client` side, as well as the modules for the GLIB emulator, though a similar mechanism can be imagined)
Random remarks:
* I agree that my names aren't good, but couldn't find anything better. "client" and "server" are from the Wisc. RPC point-of-view.
* `rpcman` will be superseded by the new templated calling syntax and can be dropped.
* I'm not sure what to do with `xhalpy`, as it mixes generic (`connect`) and specific (`getmonTRIGGERmain`) functionality. In order to avoid circular dependencies between repos, it should probably be moved to `ctp7_modules`.
* I just realized that Fedora has [a package for `lmdb++`](https://fedora.pkgs.org/29/fedora-x86_64/lmdbxx-devel-0.9.14.1-2.20160229git0b43ca8.fc29.noarch.rpm.html). Maybe it would make sense to use it instead of bundling our own copy.
* I don't like the `xhal/include/client/xhal` and `xhal/include/server/xhal` directories. If we want to split the functionality, let's use `xhal/include/xhal/xxx` instead (Qt does this).
* `-devel` packages should not include binaries. Consider the use case of developing on a GLIB-based setup: there is no need for ARM binaries there. Thinking about the future, it sounds even more wrong once you realize that newer Zynq-based platforms use the `aarch64` instruction set. Let's not create a packaging chimera.
* Since it is really essential that the RPC headers (`xhal/rpc/calibration_routines.h` et al.) be always in sync with the installed modules, they should be provided directly by `ctp7_modules`. Otherwise they will end up being out of sync and bad things will happen.

Random remarks:
* I agree that my names aren't good, but couldn't find anything better. "client" and "server" are from the Wisc. RPC point-of-view.
Yeah, for lack of a more creative/expressive naming scheme, I think what you've been using will probably be fine.
* `rpcman` will be superseded by the new templated calling syntax and can be dropped.
Quite. I wasn't ever really clear on the duplication of this, but I think it was needed for the `xhalpy` bindings, at the time.
* I'm not sure what to do with `xhalpy`, as it mixes generic (`connect`) and specific (`getmonTRIGGERmain`) functionality. In order to avoid circular dependencies between repos, it should probably be moved to `ctp7_modules`.
This will (entirely, I believe) be replaced by the `cmsgemos` python interface, as all functionality will be exposed there, so it can probably be dropped. Though @mexanick and I had discussed the longer term plans some time ago, so he may have further insights.
* I just realized that Fedora has [a package for `lmdb++`](https://fedora.pkgs.org/29/fedora-x86_64/lmdbxx-devel-0.9.14.1-2.20160229git0b43ca8.fc29.noarch.rpm.html). Maybe it would make sense to use it instead of bundling our own copy.
Possibly... `lmdb` (and friends `-libs` and `-devel`) are available in "standard" CERN repos, but not `lmdb++`.
We can possibly ask that it get included (either the CMS sysadmins or the CERN linuxsoft folks), but until such a time as it is, it will probably be simpler to bundle one header with our package (at P5 every package that isn't in a standard CERN repo has to be part of our RPM dropbox, so we'd have to provide the RPM, and I mostly want to minimize the maintenance of external dependencies... although, this package shouldn't ever be needed at P5)
* i don't like the `xhal/include/client/xhal` and `xhal/include/server/xhal` directories. If we want to split the functionality, let's use `xhal/include/xhal/xxx` instead (Qt does this).
It's really the cleanest way to separate the subpackages, which themselves may have nested namespaces, while maintaining the current `#include "xhal/blah"` syntax (I agree, your proposed way is probably the better way to get the include semantics correct, so it may be well worth it to make the change with these other packaging changes)
* `-devel` packages should not include binaries. Consider the use case of developing on a GLIB-based setup: there is no need for ARM binaries there. Thinking about the future, it sounds even more wrong once you realize that newer Zynq-based platforms use the `aarch64` instruction set. Let's not create a packaging chimera.
I have considered all these cases, and the driving motivation for me is the difference between a runtime package and a development package.
For any non-native architecture, libraries are needed, but only for development.
If we use the proper chip arch (which for the CTP7 is `armv7l`, and which is how the RPM is scoped), then the current model can trivially accommodate future additional boards (this should really be extracted from the `PETA_STAGE` somehow, so that it is a flexible thing).
If you would like to find an example of a package similar to our particular use case (common-client-server(ish) model, all development takes place on the "client" side whether via native or cross compilation), packaged for RHEL/Centos/Fedora, I'd gladly look at how they package things up.
I want to avoid a case where the PC (for generic DAQ development) needs:
* `xhal-base`
* `xhal-server` (for the `Docker` RPC emulator, which I have long envisioned being used as a dummy test suite when actual hardware is not present, e.g., in CI)
* `xhal-client`
* `xhal-base-devel`
* `xhal-server-devel`
* `xhal-client-devel`
* `ctp7-xhal-server-libs` (for developing `ctp7_modules` for the CTP7)
* `apx-xhal-server-libs` (for developing `ctp7_modules` for the APx)
* `bcp-xhal-server-libs` (for developing `ctp7_modules` for the BCP)

Sure, 9 packages aren't a ton, but it does seem to be overkill.
We can easily condense the `devel` packages into a single one containing all headers, since there's no really strong motivating reason to have them separated other than the convention of one `-devel` per (sub)package.
We can also even further simplify things if we don't even bother to have the `-server` and `-client` subpackages, and allow the `xhal` (`xhal-base`) package (but not the libraries) to have different content on different architectures, as needed.
However, splitting up the non-native libs for linking, to me, makes less sense, because if someone is developing `ctp7_modules`, they had better be certain that it is compatible with all HW we target, and to me, the easiest way to do this is to ensure that all target libs are present in the `-devel` package(s).
Coming back to the proposal from today's meeting of packaging the `PETA_STAGE` area, it's really quite large:
tree $PETA_STAGE|wc
7654 33513 355427
du -sh $PETA_STAGE
161M /data/bigdisk/sw/peta-stage
But I'm trying to turn it into an RPM that we can just then install to `/opt/peta_stage` (or somewhere) and have a `/opt/peta_stage/ctp7` subdir for the CTP7 image.
If we need multiple CTP7 images concurrently, this might have to be rethought, but the idea would then be that the, e.g., `ctp7-xhal-server-libs` (or `ctp7-xhal-libs`) package could install into this tree, and have this `peta-stage-ctp7` package as a requirement.
* Since it is really essential that the RPC headers (`xhal/rpc/calibration_routines.h` et al.) be always in sync with the installed modules, they should be provided directly by `ctp7_modules`. Otherwise they will end up being out of sync and bad things will happen.
Yes, and I think that this part of the package will be removed.
I have put together a setup that will produce the following "release"
release
├── api
└── repos
├── centos7_x86_64
│ ├── DEBUGRPMS
│ │ ├── reg_interface_gem-debuginfo-3.2.2-1.0.22.dev.9b13bc4git.centos7.python2.7.x86_64.rpm
│ │ └── xhal-debuginfo-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ └── RPMS
│ ├── reg_interface_gem-3.2.2-1.0.22.dev.9b13bc4git.centos7.python2.7.x86_64.rpm
│ ├── xhal-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ ├── xhal-client-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ ├── xhal-client-devel-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ ├── xhal-devel-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ ├── xhal-server-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
│ └── xhal-server-devel-3.2.2-1.0.22.dev.9b13bc4git.centos7.gcc4_8_5.x86_64.rpm
├── noarch
│ └── RPMS
│ └── ctp7-xhal-libs-3.2.2-1.0.22.dev.9b13bc4git.peta.arm_linux_gnueabihf_gcc4_9_2.noarch.rpm
├── peta_armv7l
│ ├── DEBUGRPMS
│ │ └── xhal-debuginfo-3.2.2-1.0.22.dev.9b13bc4git.peta.arm_linux_gnueabihf_gcc4_9_2.armv7l.rpm
│ └── RPMS
│ ├── xhal-3.2.2-1.0.22.dev.9b13bc4git.peta.arm_linux_gnueabihf_gcc4_9_2.armv7l.rpm
│ └── xhal-server-3.2.2-1.0.22.dev.9b13bc4git.peta.arm_linux_gnueabihf_gcc4_9_2.armv7l.rpm
├── SRPMS
│ ├── reg_interface_gem-3.2.2-1.0.22.dev.9b13bc4git.src.rpm
│ └── xhal-3.2.2-1.0.22.dev.9b13bc4git.src.rpm
├── tarballs
│ ├── reg_interface_gem-3.2.2_1.0.22.dev.9b13bc4git.tgz
│ ├── reg_interface_gem-3.2.2-final.dev22.tgz
│ ├── reg_interface_gem-3.2.2-final.dev22.zip
│ ├── reg_interface_gem-3.2.2.zip
│ ├── xhal-arm-3.2.2-1.0.22.dev.9b13bc4git.tbz2
│ └── xhal-x86_64-3.2.2-1.0.22.dev.9b13bc4git.tbz2
├── xhal_3_2_centos7_x86_64.repo
└── xhal_3_2_peta_armv7l.repo
The odd one out there, `ctp7-xhal-libs-3.2.2-1.0.22.dev.9b13bc4git.peta.arm_linux_gnueabihf_gcc4_9_2.noarch.rpm`, is created during the CTP7 run through the spec file (I found that and the package release naming handy to basically document against which `peta` stage the libs are valid), and puts files in:
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-base.so lrwxrwxrwx
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-base.so.3.2 lrwxrwxrwx
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-base.so.3.2.2 -rwxr-xr-x
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-server.so lrwxrwxrwx
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-server.so.3.2 lrwxrwxrwx
ctp7-xhal-libs /opt/gem-peta-stage/ctp7/mnt/persistent/xhal/lib/libxhal-server.so.3.2.2 -rwxr-xr-x
It installs happily alongside the `x86_64` `xhal` package.
Currently there are `-devel` packages for each `xhal` subpackage, and the groups file defines groups that will pull in all required packages depending on the use case.
The headers are packaged in `$XHAL_ROOT/include/xhal/<subdir>`, where `subdir` is one of: `common`, `client`, `server`, or `extern` (no source modification has yet been done to adapt to this scheme...)
The repo directory structure is now like:
xhal/xhal
├── include
│ └── xhal
│ ├── client
│ │ └── rpc
│ ├── common
│ │ ├── rpc
│ │ └── utils
│ ├── extern
│ └── server
└── src
├── client
│ ├── python_wrappers
│ └── rpc_manager
├── common
│ ├── rpc
│ └── utils
└── server
Within this structure, multiple "targets" can be built.
It currently has definitions for `x86_64` and `arm`, but I envision this changing from `arm` to `ctp7`/`apx`/`bcp` and extending slightly the logic to handle the different `sysroot` required for each and different compiler flags, but we could also put all this in a `package-cfg` file with the appropriate `gem-peta-stage` package (a new prereq of the build).
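As an illustration of that envisioned split, a hedged sketch of how per-board settings might be selected in the Makefile; only the CTP7 toolchain, sysroot path, and architecture flags below are taken from this thread, the rest (the `BOARD` variable and the non-CTP7 branch) is illustrative:

```make
# BOARD selects the toolchain and sysroot for the build
BOARD ?= x86_64

ifeq ($(BOARD),ctp7)
  TOOLCHAIN_PREFIX := arm-linux-gnueabihf-
  SYSROOT          := /opt/gem-peta-stage/ctp7
  ARCH_FLAGS       := -march=armv7-a -mfpu=neon -mfloat-abi=hard
else
  # native x86_64 build: system compiler, no sysroot
  TOOLCHAIN_PREFIX :=
  SYSROOT          :=
  ARCH_FLAGS       :=
endif

CXX      := $(TOOLCHAIN_PREFIX)g++
CXXFLAGS += $(ARCH_FLAGS) $(if $(SYSROOT),--sysroot=$(SYSROOT))
LDFLAGS  += $(if $(SYSROOT),--sysroot=$(SYSROOT))
```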
Currently, `extern` contains only `lmdb++.h`, so it would get packaged with the `server` subpackage; however, if we package one or two more things that are dependencies, we'll have to revisit this, and at that point it may just be better to have a single `-devel` package.
I built this on top of additional changes I have in preparation for the next release, so I would be fine merging #131 (provided the only outstanding issue there is the build system integration), and taking care of the migration to this newer structure in my PR
While migrating the headers from the file path `include/xhal/<bar>/<name>.h` to `include/xhal/<foo>/<bar>/<name>.h`, certain additional changes are imposed: for example, when using `INCLUDE_DIRS+=XHAL_ROOT/include`, the `#include` must change similarly, from `#include "xhal/<bar>/<name>.h"` to `#include "xhal/<foo>/<bar>/<name>.h"`
And with this nomenclature, if we keep the namespaces reflecting the package include structure, we would then need to rewrite the namespaces from
namespace xhal {
namespace bar {
}
}
to
namespace xhal {
namespace foo {
namespace bar {
}
}
}
This becomes quite a change, which I'm fine to do, provided we agree it's the "right" thing to do.
We could simply update all the #include
s and leave the namespaces as they are, for the sake of fewer changes.
On the other hand, with my initial proposal of simply having some structure like:
XHAL_ROOT
└── include
    ├── common
    │   └── xhal
    │       └── <bar>
    │           └── <name>.h
    ├── server
    │   └── xhal
    │       └── <bar>
    │           └── <name>.h
    └── client
        └── xhal
            └── <bar>
                └── <name>.h
The `INCLUDE_DIRS` would need to be set depending on the package(s) needed, e.g., `INCLUDE_DIRS+=XHAL_ROOT/include/common`, but the code itself wouldn't need to change at all (one of the motivations for why I proposed such a structure); a sketch is shown below.
However, the `#include` structure then becomes ambiguous when developing dependent packages...
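For illustration, with the per-package include roots above, a consumer's Makefile would simply accumulate one `-I` per subpackage it actually uses (sketch only; `XHAL_ROOT` and `INCLUDE_DIRS` are the variables already used in this thread):

```make
# Pull in only the subpackage include roots that are actually needed
INCLUDE_DIRS += $(XHAL_ROOT)/include/common
INCLUDE_DIRS += $(XHAL_ROOT)/include/server
CXXFLAGS     += $(addprefix -I,$(INCLUDE_DIRS))
```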
A first go would look like this (`rpc` split; on the `client` side, renamed `rpcman`, but namespaces not changed, though this part will likely be dropped):
../../include
├── packageinfo.h
└── xhal
├── client
│ ├── rpcman
│ │ ├── calibration_routines.h
│ │ ├── daq_monitor.h
│ │ ├── optohybrid.h
│ │ ├── utils.h
│ │ └── vfat3.h
│ ├── utils
│ │ └── PyTypes.h
│ ├── XHALDevice.h
│ └── XHALInterface.h
├── common
│ ├── rpc
│ │ ├── call_backup.h
│ │ ├── call.h
│ │ ├── common.h
│ │ ├── compat.h
│ │ ├── exceptions.h
│ │ ├── helper.h
│ │ └── register.h
│ └── utils
│ ├── Exception.h
│ ├── test.h
│ ├── XHALXMLNode.h
│ └── XHALXMLParser.h
├── extern
│ ├── lmdb++.h
│ ├── wiscRPCMsg.h
│ └── wiscrpcsvc.h
└── server
└── LMDB.h
First of all, my apologies for my late and less complete reply than I would have wished; I cannot really find quiet time to focus here (Physics School), even though I keep thinking about this packaging issue. After carefully reading the discussion, here are some thoughts.
I don't know what was discussed during the meeting about the `PETA_STAGE` area, but I feel it can be the way to go. Sure, the RPM would be big (slightly smaller than what you show though, since `xerces`, `lmdb` and `log4cplus` are currently installed in this area), but for embedded development it seems quite usual. It can be seen as the sysroot provided as a whole by an SDK. It also makes more sense because using an organization similar to Debian's is impossible.
Going in that direction, all development packages (containing both the libraries and header files) for cross-compilation can be installed in the Peta Linux sysroot. Depending on a specific Peta Linux sysroot and duplicating header files is not a problem to me since (1) custom compiled libraries might have to be different for different sysroot versions and (2) one might need to provide configured headers at some point to cope with different targets.
In the end we would get, for each supported cross-compiled architecture, only a few packages: the sysroot and the required libraries (e.g. `xerces`, `lmdb`, `log4cplus` and `xhal` for the CTP7 and only `xhal` for the GLIB). If `xhal` is split into different packages on the remote targets, I would split the development packages as well in order to keep the naming scheme as clear as possible. The include paths would also be far easier since only the `--sysroot` compiler option would be required instead of multiple `-I` directives. Another possibility (with some `-I` directives though) would be to install the libraries in their remote architecture location in the sysroot so that using `RPATH`s would become straightforward.
The drawback would be the synchronization between the packages for different targets. This concerns only the RPC ABI compatibility, which is in any case checked at run-time (we cannot do better for the `ctp7_modules` modules versions anyway since the machines are different). This is also mitigated by the fact that all the newest packages should be present in the repository and installed with `yum upgrade`. Moreover, the development machine and the run-time targets can (are going to?) be different.
I'm also wondering how the dependencies work with the last proposed solution. Does `xhal-server-devel` depend on `xhal-server`? If so, how would one install the development package without the libraries package if she is only interested in the CTP7/GLIB/... development? Does `xhal-server-devel` provide the `.so` links to the SONAME (as it should)? If yes, it does not make sense to set `xhal-server-devel` as `noarch` for CTP7-only development. Also, does `ctp7-xhal-libs` provide the `.so` while not being a `-devel` package?
I feel that the solution is trying to follow the Debian path (which is good), but without the tools so that some arguable design decisions are constrained.
Regarding the package/library separation, I don't have a strong opinion about it. What matters the most to me is that the same libraries on different targets provide the same symbols. For the `include`s, my choice goes to changing the code to follow the @lmoureaux / Qt organization.
Coming back to `lmdb++`, the only upstream Git repositories I could find have not been maintained for a few years. We also patched the header currently provided in `ctp7_modules` in order to add new useful overloads. Therefore, I think we should package it ourselves.
Finally, thinking about the `ctp7_modules`, which will leverage this new organization, I think this should be quite easy. One package should be produced for every "remote" target, while only the development package should be provided on the DAQ machine, somewhere in `/opt/`.
First of all, my apologies for my late and less complete reply than I would have wished; I cannot really find quiet time to focus here (Physics School), even though I keep thinking about this packaging issue. After carefully reading the discussion, here are some thoughts.

I don't know what was discussed during the meeting about the `PETA_STAGE` area, but I feel it can be the way to go. Sure, the RPM would be big (slightly smaller than what you show though, since `xerces`, `lmdb` and `log4cplus` are currently installed in this area), but for embedded development it seems quite usual. It can be seen as the sysroot provided as a whole by an SDK. It also makes more sense because using an organization similar to Debian's is impossible.
I've basically decided that yes, providing a `PETA_STAGE` for various target boards is worthwhile (would prefer it if the image provider provided this, but it's trivial to build so we can do it ourselves for now), and have produced the RPM for the CTP7 and pushed it to the `gemos-extras` repo, though not to the `noarch` repo I will eventually put it in.
Going in that direction, all development packages (containing both the libraries and header files) for cross-compilation can be installed in the Peta Linux sysroot. Depending on a specific Peta Linux sysroot and duplicating header files is not a problem to me since (1) custom compiled libraries might have to be different for different sysroot versions and (2) one might need to provide configured headers at some point to cope with different targets.
Libraries, yes; headers, I don't see the point.
Additionally, if a person is developing against a non-released version of a dependent library, they won't (by design) have the ability to modify the `PETA_STAGE` area, but would have to use `INSTALL_PREFIX=/some/developer/path make install`, and should then specify this in the relevant variable that the `Makefile` expands, e.g., `XHAL_ROOT`.
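A sketch of how that override could look on the consumer side (the variable names are those already used in this thread, but the exact defaults and paths are illustrative):

```make
# Default to the released package location; a developer can point this elsewhere
XHAL_ROOT ?= /opt/xhal

CXXFLAGS += -I$(XHAL_ROOT)/include
LDFLAGS  += -L$(XHAL_ROOT)/lib

# e.g. after `INSTALL_PREFIX=/some/developer/path make install` in xhal:
#   make XHAL_ROOT=/some/developer/path
```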
In the end we would get, for each supported cross-compiled architecture, only a few packages: the sysroot and the required libraries (e.g. `xerces`, `lmdb`, `log4cplus` and `xhal` for the CTP7 and only `xhal` for the GLIB). If `xhal` is split into different packages on the remote targets, I would split the development packages as well in order to keep the naming scheme as clear as possible. The include paths would also be far easier since only the `--sysroot` compiler option would be required instead of multiple `-I` directives. Another possibility (with some `-I` directives though) would be to install the libraries in their remote architecture location in the sysroot so that using `RPATH`s would become straightforward.
I don't think this is true, considering that even with a native compiler one still has to provide the `-I` directives explicitly for any non-system libraries, and we're not going to be putting the GEM libraries into the system path, even in the `PETA_STAGE` with the package-specific `<board>-<package>-libs` RPM.
The drawback would be the synchronization between the packages for different targets. This concerns only the RPC ABI compatibility, which is in any case checked at run-time (we cannot do better for the `ctp7_modules` modules versions anyway since the machines are different). This is also mitigated by the fact that all the newest packages should be present in the repository and installed with `yum upgrade`. Moreover, the development machine and the run-time targets can (are going to?) be different.

I'm also wondering how the dependencies work with the last proposed solution. Does `xhal-server-devel` depend on `xhal-server`? If so, how would one install the development package without the libraries package if she is only interested in the CTP7/GLIB/... development? Does `xhal-server-devel` provide the `.so` links to the SONAME (as it should)? If yes, it does not make sense to set `xhal-server-devel` as `noarch` for CTP7-only development. Also, does `ctp7-xhal-libs` provide the `.so` while not being a `-devel` package?
In the current proposal (maybe the aforementioned "last proposed"?)
* all `xhal-<subpackage>` packages provide libraries and executables only (relevant symlinks to SONAMEs are included)
* all `xhal-<subpackage>-devel` packages provide **only** header files, and are **only** provided for the development PC, and currently have a dependency on the `xhal-<subpackage>` package, because the assumption is that one would need to link against the appropriate library (seems to be standard RPM practice)
* `<board>-xhal-libs` provides the `<board>`-specific libraries (and relevant symlinks), and installs them into the appropriate location (where they would be installed on the `<board>`) inside the `PETA_STAGE` tree, for the only stage we currently have, `/opt/gem-peta-stage/ctp7`
I view the `-devel` dependency on the parent package as obvious.
Then, for instance, someone developing `ctp7_modules` would have an additional requirement (`BuildRequires`) on all target `<board>-xhal-libs` packages (I haven't included in this list anything that is running in the `docker` emulator, because I didn't feel like duplicating the host libs, and this dir can be mounted into the `docker` container into the appropriate location as needed.)
I feel that the solution is trying to follow the Debian path (which is good), but without the tools so that some arguable design decisions are constrained.
Regarding the package/library separation, I don't have a strong opinion about it. What matters the most to me is that the same libraries on different targets provide the same symbols. For the `include`s, my choice goes to changing the code to follow the @lmoureaux / Qt organization.
This has been done and a PR showing the structure is coming (later today)
Coming back to `lmdb++`, the only upstream Git repositories I could find have not been maintained for a few years. We also patched the header currently provided in `ctp7_modules` in order to add new useful overloads. Therefore, I think we should package it ourselves.
Agreed
Finally, thinking about the `ctp7_modules`, which will leverage this new organization, I think this should be quite easy. One package should be produced for every "remote" target, while only the development package should be provided on the DAQ machine, somewhere in `/opt/`.
Exactly
Going in that direction, all development packages (containing both the libraries and header files) for cross-compilation can be installed in the Peta Linux sysroot. Depending on a specific Peta Linux sysroot and duplicating header files is not a problem to me since (1) custom compiled libraries might have to be different for different sysroot versions and (2) one might need to provide configured headers at some point to cope with different targets.
Libraries, yes; headers, I don't see the point.
Imagine that a future system uses 64-bit words. It will require headers with `uint64_t` instead of `uint32_t`. This is usually decided at compile time, and a configured header gets written with the correct aliases in place (usually a `.h` file from a `.h.in`). This way there is no need for error-prone macro magic to detect the architecture, as it can be done in `configure`. This is really a standard practice.
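As a sketch of the mechanism (this project uses plain Makefiles rather than Autotools, so here the substitution is done with a make rule instead of `configure`; the file name, variable name, and `@XHAL_WORD_TYPE@` placeholder are purely illustrative):

```make
# Word type chosen for the target being built; a 64-bit board would select uint64_t
XHAL_WORD_TYPE ?= uint32_t

# Substitute the placeholder in the template to produce the installed header
include/xhal/word_types.h: include/xhal/word_types.h.in
	sed 's/@XHAL_WORD_TYPE@/$(XHAL_WORD_TYPE)/g' $< > $@
```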
Additionally, if a person is developing against a non-released version of a dependent library, they won't (by design) have the ability to modify the `PETA_STAGE` area, but would have to use `INSTALL_PREFIX=/some/developer/path make install`, and should then specify this in the relevant variable that the `Makefile` expands, e.g., `XHAL_ROOT`.
`-I` directives take precedence over `--sysroot`, so overriding is easy. Providing options to do this is the job of the build system.
I'm also wondering how the dependencies work with the last proposed solution. Does `xhal-server-devel` depend on `xhal-server`? If so, how would one install the development package without the libraries package if she is only interested in the CTP7/GLIB/... development? Does `xhal-server-devel` provide the `.so` links to the SONAME (as it should)? If yes, it does not make sense to set `xhal-server-devel` as `noarch` for CTP7-only development. Also, does `ctp7-xhal-libs` provide the `.so` while not being a `-devel` package?

In the current proposal (maybe the aforementioned "last proposed"?)
* all `xhal-<subpackage>` packages provide libraries and executables only (relevant symlinks to SONAMEs are included)
* all `xhal-<subpackage>-devel` packages provide **only** header files, and are **only** provided for the development PC, and currently have a dependency on the `xhal-<subpackage>` package, because the assumption is that one would need to link against the appropriate library (seems to be standard RPM practice)
* `<board>-xhal-libs` provides the `<board>`-specific libraries (and relevant symlinks), and installs them into the appropriate location (where they would be installed on the `<board>`) inside the `PETA_STAGE` tree, for the only stage we currently have, `/opt/gem-peta-stage/ctp7`
Actually the top-level `.so` link should be provided by the `-devel` package, see eg `zlib-devel`. Laurent is pointing out that in this case, all `-devel` packages should depend on the whole `ctp7-xhal-libs` (and possibly others in the future).
Coming back to `lmdb++`, the only upstream Git repositories I could find have not been maintained for a few years. We also patched the header currently provided in `ctp7_modules` in order to add new useful overloads. Therefore, I think we should package it ourselves.

Agreed
Since we have local modifications, it could as well be relocated into the `xhal` folder to avoid conflicts.
Imagine that a future system uses 64-bit words. It will require headers with `uint64_t` instead of `uint32_t`. This is usually decided at compile time, and a configured header gets written with the correct aliases in place (usually a `.h` file from a `.h.in`). This way there is no need for error-prone macro magic to detect the architecture, as it can be done in `configure`. This is really a standard practice.
Sure; however, and I believe we had briefly discussed this at some point, what is our plan if we're on a "client" platform that is talking to two different "server" boards, one supporting 32-bit words, one supporting 64-bit words (which we can already emulate, given that the GLIB `docker` can operate with 64-bit words (at the function level, not the register level))? Or how do we deal with providing "client"-side software (running on different machines) for the same situation, even if the "client" is communicating only with one particular board type? This question is far more relevant, and maybe I'm missing a step in how we achieve compatibility without having two versions of the "client"-side code. Which headers of `ctp7_modules` does `cmsgemos` `#include`?
Actually the top-level `.so` link should be provided by the `-devel` package, see eg `zlib-devel`.
Yeah, I understand that this is how many other packages do things; however, unless this `-devel` package is going to provide all possible symlinks (even for the `<board>-<package>-libs` packaged libraries), I'd much rather keep things simpler (maintenance-wise).
Since we have local modifications, it could as well be relocated into the `xhal` folder to avoid conflicts.
Actually, it currently lives there (in `include/xhal/extern`), but we can change:

`#include <lmdb++>` (which uses `-I$(XHAL_ROOT)/include/xhal/extern`, and had been set up this way to take advantage of a system-provided version if it became available, without changing the code)

to

`#include "xhal/extern/lmdb++.h"`

with no special `-I` besides the default `-I` for `xhal` development.
In the current proposal (maybe the aforementioned "last proposed"?)
Right, the current proposal.
* all `xhal-<subpackage>` packages provide libraries and executables only (relevant symlinks to SONAMEs are included)
* all `xhal-<subpackage>-devel` packages provide **only** header files, and are **only** provided for the development PC, and currently have a dependency on the `xhal-<subpackage>` package, because the assumption is that one would need to link against the appropriate library (seems to be standard RPM practice)
* `<board>-xhal-libs` provides the `<board>` specific libraries (and relevant symlinks), and installs them into the appropriate location (where they would be installed on the `<board>`) inside the `PETA_STAGE` tree, for the only stage we currently have, `/opt/gem-peta-stage/ctp7`
What I tried to describe was in fact very similar to the current proposal, with the exception of the `SONAME`s and header files:
* The `xhal-<subpackage>.<architecture>` packages provide the libraries and executables for the targeted architecture. The files are installed in the appropriate path.
* The `xhal-<subpackage>-devel.<architecture>` packages provide the `.so` symlinks and the header files for native compilation (only the DAQ machine? nothing prevents native compilation in the GLIB emulator though). It is installed only on the development machine and depends on the corresponding non-`-devel` package.
* The `<board>-xhal-<subpackage>-devel.noarch` packages provide **all** that is necessary for cross-compilation: libraries, symlinks and header files. It depends on the `<board>-gem-sysroot-devel.noarch` package.

Also, it is worth noticing that the package names are fully consistent, with only prefixes and suffixes.
I view the `-devel` dependency on the parent package as obvious. Then, for instance, someone developing `ctp7_modules` would have an additional requirement (`BuildRequires`) on all target `<board>-xhal-libs` packages (I haven't included in this list anything that is running in the `docker` emulator, because I didn't feel like duplicating the host libs, and this dir can be mounted into the `docker` container into the appropriate location as needed.)
Yeah, this seems logical. But is requiring `<board>-xhal-libs` for `ctp7_modules` enough? If I am to believe your new PR #133, `<board>-xhal-libs` does not depend on the `-devel` package, and so the header files might not be installed.
Imagine that a future system uses 64-bit words. It will require headers with `uint64_t` instead of `uint32_t`. This is usually decided at compile time, and a configured header gets written with the correct aliases in place (usually a `.h` file from a `.h.in`). This way there is no need for error-prone macro magic to detect the architecture, as it can be done in `configure`. This is really a standard practice.

Sure; however, and I believe we had briefly discussed this at some point, what is our plan if we're on a "client" platform that is talking to two different "server" boards, one supporting 32-bit words, one supporting 64-bit words (which we can already emulate, given that the GLIB `docker` can operate with 64-bit words (at the function level, not the register level))? Or how do we deal with providing "client"-side software (running on different machines) for the same situation, even if the "client" is communicating only with one particular board type? This question is far more relevant, and maybe I'm missing a step in how we achieve compatibility without having two versions of the "client"-side code. Which headers of `ctp7_modules` does `cmsgemos` `#include`?
I think a better example for the headers would be the (not yet implemented) `memhub` singleton/C++ class. On the CTP7, it would be based on `libmemsvc`. However, the interface will probably be different on the new ATCA boards (and is already different on the GLIB). If we design the `memhub` class properly, we could then change the implementation while keeping the same public API. Installing different headers is therefore needed.
Actually the top-level `.so` link should be provided by the `-devel` package, see eg `zlib-devel`.

Yeah, I understand that this is how many other packages do things; however, unless this `-devel` package is going to provide all possible symlinks (even for the `<board>-<package>-libs` packaged libraries), I'd much rather keep things simpler (maintenance-wise).
I agree, providing all possible symlinks is definitely not the way to go. Hence my proposal to get a full-featured package for cross-compilation.
In the end we would get, for each supported cross-compiled architecture, only a few packages: the sysroot and the required libraries (e.g. `xerces`, `lmdb`, `log4cplus` and `xhal` for the CTP7 and only `xhal` for the GLIB). If `xhal` is split into different packages on the remote targets, I would split the development packages as well in order to keep the naming scheme as clear as possible. The include paths would also be far easier since only the `--sysroot` compiler option would be required instead of multiple `-I` directives. Another possibility (with some `-I` directives though) would be to install the libraries in their remote architecture location in the sysroot so that using `RPATH`s would become straightforward.

I don't think this is true, considering that even with a native compiler one still has to provide the `-I` directives explicitly for any non-system libraries, and we're not going to be putting the GEM libraries into the system path, even in the `PETA_STAGE` with the package-specific `<board>-<package>-libs` RPM.

You are right, if the libraries are installed in a non-system location, the `-I` directives must still be provided. However, one can use the `=` prefix to ensure that all libraries and header files come from the sysroot.
[snip]
What I tried to describe was in fact very similar to the current proposal, with the exception of the `SONAME`s and header files:
* The `xhal-<subpackage>.<architecture>` packages provide the libraries and executables for the targeted architecture. The files are installed in the appropriate path.
* The `xhal-<subpackage>-devel.<architecture>` packages provide the `.so` symlinks and the header files for native compilation (only the DAQ machine? nothing prevents native compilation in the GLIB emulator though). It is installed only on the development machine and depends on the corresponding non `-devel` package.
For completeness, I treat "host" native compilation and GLIB emulator compilation (and packaging) as one and the same, hence the reason there is no `glib-xhal-libs` package in my proposal.
* The `<board>-xhal-<subpackage>-devel.noarch` packages provide **all** what is necessary for cross-compilation, libraries, symlinks and header files. It depends on the `<board>-gem-sysroot-devel.noarch` package.
If we're going to put libraries in a `-devel` package (which I'm not advocating for here, and actually am still not sold on having this proposed package), I'd get rid of this `<board>-xhal-<blah>` business full stop, and just put all required development files in the `xhal-<subpackage>-devel` package, as was my original proposal.
Also, it is worth noticing that the package names are fully consistent with only prefixes and suffixes.
I view the `-devel` dependency on the parent package as obvious. Then, for instance, someone developing `ctp7_modules` would have an additional requirement (`BuildRequires`) on all target `<board>-xhal-libs` packages (I haven't included in this list anything that is running in the `docker` emulator, because I didn't feel like duplicating the host libs, and this dir can be mounted into the `docker` container into the appropriate location as needed.)

Yeah, this seems logical. But is requiring `<board>-xhal-libs` for `ctp7_modules` enough? If I am to believe your new PR #133, `<board>-xhal-libs` does not depend on the `-devel` package, and so the header files might not be installed.
Correct, downstream dependent packages, e.g., `ctp7_modules`, would have:
* `Requires=xhal-common,xhal-server` (among others)
* `BuildRequires=xhal-common-devel,xhal-server-devel,<board>-xhal-libs` (for all supported boards, among others)

[snip]
I think a better example for the headers would be the (not yet implemented) `memhub` singleton/C++ class. On the CTP7, it would be based on `libmemsvc`. However, the interface will probably be different on the new ATCA boards (and is already different on the GLIB). If we design the `memhub` class properly, we could then change the implementation while keeping the same public API. Installing different headers is therefore needed.
OK, I start to see better... If you already have a sketch of the `GLIB`/`CTP7` division, can you outline it here?
Actually the top-level `.so` link should be provided by the `-devel` package, see eg `zlib-devel`.

Yeah, I understand that this is how many other packages do things; however, unless this `-devel` package is going to provide all possible symlinks (even for the `<board>-<package>-libs` packaged libraries), I'd much rather keep things simpler (maintenance-wise).

I agree, providing all possible symlinks is definitely not the way to go. Hence my proposal to get a full-featured package for cross-compilation.
In the end we would get, for each supported cross-compiled architecture, only a few packages: the sysroot and the required libraries (e.g. `xerces`, `lmdb`, `log4cplus` and `xhal` for the CTP7 and only `xhal` for the GLIB). If `xhal` is split into different packages on the remote targets, I would split the development packages as well in order to keep the naming scheme as clear as possible. The include paths would also be far easier since only the `--sysroot` compiler option would be required instead of multiple `-I` directives. Another possibility (with some `-I` directives though) would be to install the libraries in their remote architecture location in the sysroot so that using `RPATH`s would become straightforward.

I don't think this is true, considering that even with a native compiler one still has to provide the `-I` directives explicitly for any non-system libraries, and we're not going to be putting the GEM libraries into the system path, even in the `PETA_STAGE` with the package-specific `<board>-<package>-libs` RPM.

You are right, if the libraries are installed in a non-system location, the `-I` directives must still be provided. However, one can use the `=` prefix to ensure that all libraries and header files come from the sysroot.
~~I'm still failing to see how this will be any different from the current workflow, except for adding additional, target-specific directory locations. Regardless of specifying `--sysroot=$(PETA_STAGE)` or not, I don't find a way (in `gcc`) to make `-I`/`-L` directives relative to the `sysroot` (obviously it can be manufactured by changing the makefile framework to add some `$(SYSROOTPATH)` prefix to all package-external includes and link dirs, so no need to mention "but, but, but `cmake`!")~~
[edit]
e.g., if I try your suggestion (and, from the `gcc` manual, what in principle should work)
`--sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include`
for the `Zynq` build, I get:
arm-linux-gnueabihf-g++ -fomit-frame-pointer -pipe -fno-common -fno-builtin -Wall -std=c++14 -march=armv7-a -mfpu=neon -mfloat-abi=hard -mthumb-interwork -mtune=cortex-a9 -DEMBED -Dlinux -D__linux__ -Dunix -fPIC --sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include -std=gnu++14 -g -O2 -I/opt/xdaq/include -Iinclude -Iinclude/xhal/extern -c -MT arm/src/linux/arm/common/utils/XHALXMLParser.o -MMD -MP -MF arm/src/linux/arm/common/utils/XHALXMLParser.Td -o arm/src/linux/arm/common/utils/XHALXMLParser.o src/common/utils/XHALXMLParser.cpp
In file included from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/c++config.h:430:0,
from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/string:38,
from include/xhal/common/utils/XHALXMLParser.h:14,
from src/common/utils/XHALXMLParser.cpp:1:
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/os_defines.h:39:22: fatal error: features.h: No such file or directory
#include <features.h>
^
Using the `$SYSROOT` syntax instead allows the compilation to succeed, but the linker fails:
arm-linux-gnueabihf-g++ -std=gnu++14 -g -O2 -g -Wl,-soname,libxhal-base.so.3.2 -L$SYSROOT/lib -L$SYSROOT/usr/lib -L$SYSROOT/ncurses -shared -Larm/lib -o arm/lib/libxhal-base.so.3.2.2 arm/src/linux/arm/common/utils/XHALXMLParser.o arm/src/linux/arm/common/rpc/exceptions.o -llog4cplus -lxerces-c -lstdc++
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: cannot find -llog4cplus
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lxerces-c
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: skipping incompatible /lib/libm.so when searching for -lm
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: skipping incompatible /usr/lib/libm.so when searching for -lm
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: skipping incompatible /lib/libgcc_s.so.1 when searching for libgcc_s.so.1
/data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/4.9.2/../../../../arm-linux-gnueabihf/bin/ld: skipping incompatible /usr/lib/libgcc_s.so.1 when searching for libgcc_s.so.1
collect2: error: ld returned 1 exit status
though liblog4cplus.so exists in $SYSROOT/lib.
Oops, this is actually an old bug, since the linker wasn't passed the --sysroot flag.
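In other words, the fix is simply to hand --sysroot to the link step as well, along these lines (object and library names are shortened for illustration):
```sh
# Sketch only: with --sysroot also on the link command, the -L= (or $SYSROOT)
# search dirs resolve inside the stage instead of the host's /lib and /usr/lib
SYSROOT=/opt/gem-peta-stage/ctp7
arm-linux-gnueabihf-g++ --sysroot="$SYSROOT" \
  -shared -Wl,-soname,libxhal-base.so.3.2 \
  -L=/lib -L=/usr/lib \
  -o libxhal-base.so.3.2.2 XHALXMLParser.o exceptions.o \
  -llog4cplus -lxerces-c
```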
OK, now I understand why we linked the version of, e.g., log4cplus and xerces-c to the xdaq versions... the headers are not included in the PETA_STAGE.
For completeness, I treat "host" native compilation, and GLIB emulator compilation (and packaging) as one and the same, hence the reason there is no glib-xhal-libs package in my proposal.
Okay, that's good to know. I was working on the GLIB Docker image and it is still not clear how convenient it will be to use the same packages in the GLIB emulator (but it should be possible). However, can we guarantee that the compilation/development machine will share the same CC version as the GLIB Docker image? If not, providing a sysroot for the GLIB might be the way to go.
If we're going to put libraries in a -devel package (which I'm not advocating for here, and actually am still not sold on having this proposed package), I'd get rid of this <board>-xhal-<blah> business full stop, and just put all required development files in the xhal-<subpackage>-devel package, as was my original proposal.
Yes, of course, I'm not proposing 2 "board" packages; it would not make much sense. The suffix -devel is only there because the package is intended for development.
I think a better example for the headers would be the (not yet implemented) memhub singleton/C++ class. On the CTP7, it would be based on libmemsvc. However, the interface will probably be different on the new ATCA boards (and is already different on the GLIB). If we design the memhub class properly, we could then change the implementation while keeping the same public API. Installing different headers is therefore needed.
OK, I start to see better... If you already have a sketch of the GLIB/CTP7 division, can you outline it here?
Currently, the GLIB emulator reimplements the libmemsvc API. This is a fragile solution, even more so since the API is not fully documented.
I have different ideas in mind for refactoring the memhub code in order to support the GLIB emulator (and the other boards), but I haven't tried them yet. While it should be possible to use inheritance with some kind of factory, my currently preferred idea can be summarized with the following code snippet:
// CTP7
class MemHub
{
memsvc_t handle;
public:
void write(uint32_t address, uint32_t value);
};
// GLIB
class MemHub
{
ipbus_memsvc_t handle;
public:
void write(uint32_t address, uint32_t value);
};
// APx
class MemHub
{
new_memsvc_t handle;
public:
void write(uint64_t address, uint64_t value);
};
As you can see, the public interface is compatible, while the private members are different to cope with different backends.
I also would like to stress the type difference for the address and value for the APx boards, since these boards will probably use a 64-bit address space. Sure, a real implementation should typedef the type, but this is meant to be able to use the same ctp7_modules code for both 32-bit and 64-bit platforms.
~I'm still failing to see how this will be any different from the current workflow, except for adding additional, target-specific directory locations. Regardless of specifying --sysroot=$(PETA_STAGE) or not, I don't find a way (in gcc) to make -I/-L directives relative to the sysroot (obviously it can be manufactured by changing the makefile framework to add some $(SYSROOTPATH) prefix to all package external includes and link dirs, so no need to mention "but, but, but cmake!")~
I agree with you that it is not extremely useful with the Makefile framework. This is just my preferred way of cross-compiling software because it clearly separates the host and the target root filesystems.
Since you are speaking about cmake, creating a fully separated sysroot is extremely useful in this case. Indeed, one can tell cmake where the sysroot is and it will only search for headers and libraries in this directory.
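For illustration, the kind of invocation I have in mind (flags and paths are indicative only, not a setup we ship today):
```sh
# Sketch only: point cmake at the cross toolchain and the sysroot, and
# restrict find_path/find_library/find_package to the sysroot contents
cmake -DCMAKE_SYSTEM_NAME=Linux \
      -DCMAKE_SYSTEM_PROCESSOR=arm \
      -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc \
      -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g++ \
      -DCMAKE_SYSROOT=/opt/gem-peta-stage/ctp7 \
      -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=ONLY \
      -DCMAKE_FIND_ROOT_PATH_MODE_LIBRARY=ONLY \
      -DCMAKE_FIND_ROOT_PATH_MODE_PROGRAM=NEVER \
      ..
```
(In practice these settings would usually live in a toolchain file passed via -DCMAKE_TOOLCHAIN_FILE rather than on the command line.)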
[edit] e.g., if I try your suggestion (and from the gcc manual, what in principle should work) --sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include for the Zynq
build I get:arm-linux-gnueabihf-g++ -fomit-frame-pointer -pipe -fno-common -fno-builtin -Wall -std=c++14 -march=armv7-a -mfpu=neon -mfloat-abi=hard -mthumb-interwork -mtune=cortex-a9 -DEMBED -Dlinux -D__linux__ -Dunix -fPIC --sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include -std=gnu++14 -g -O2 -I/opt/xdaq/include -Iinclude -Iinclude/xhal/extern -c -MT arm/src/linux/arm/common/utils/XHALXMLParser.o -MMD -MP -MF arm/src/linux/arm/common/utils/XHALXMLParser.Td -o arm/src/linux/arm/common/utils/XHALXMLParser.o src/common/utils/XHALXMLParser.cpp In file included from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/c++config.h:430:0, from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/string:38, from include/xhal/common/utils/XHALXMLParser.h:14, from src/common/utils/XHALXMLParser.cpp:1: /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/os_defines.h:39:22: fatal error: features.h: No such file or directory #include <features.h> ^
features.h is provided by the PETA_STAGE. However, I think there is an error in the gcc options, so the directory is not picked up: -isysroot=/opt/gem-peta-stage/ctp7 should be -isysroot /opt/gem-peta-stage/ctp7. Moreover, the following options are probably useless since they are the defaults: -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include.
BTW, you probably know it, but you can always check that the include paths are those you expect with:
arm-linux-gnueabihf-g++ --sysroot=/opt/gem-peta-stage/ctp7 -E - -v < /dev/null
OK, now I understand why we linked the version of, e.g., log4cplus and xerces-c to the xdaq versions... the headers are not included in the PETA_STAGE.
Yes, these libraries have been installed manually in "our" PETA_STAGE. We will probably have to package these libraries for the PETA_STAGE (in addition to lmdb).
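For illustration, staging one of these libraries could look roughly like the following, assuming the usual autotools build (version, prefix, and stage path are placeholders):
```sh
# Sketch only: cross-compile log4cplus and install it into the stage
SYSROOT=/opt/gem-peta-stage/ctp7
cd log4cplus-x.y.z
./configure --host=arm-linux-gnueabihf --prefix=/usr
make
make DESTDIR="$SYSROOT" install   # headers and libs land under the stage
```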
For completeness, I treat "host" native compilation, and GLIB emulator compilation (and packaging) as one and the same, hence the reason there is no glib-xhal-libs package in my proposal.
Okay, that's good to know. I was working on the GLIB Docker image and it is still not clear how convenient it will be to use the same packages in the GLIB emulator (but it should be possible). However, can we guarantee that the compilation/development machine will share the same CC version as the GLIB Docker image? If not, providing a sysroot for the GLIB might be the way to go.
"It is expected that the GLIB+docker
environment is identical to the "host" PC in all respects. No support shall hereby be given to any development done on GLIB+docker
under a non-native GEM DAQ OS supported architecture."
There, done :-)
I would even go so far as to say that the docker image shouldn't contain anything other than the bare minimum to have the RPC service running, and every other library/exe should be included via a bind mount when the container is started.
[snip]
OK, I start to see better... If you already have a sketch of the GLIB/CTP7 division, can you outline it here?
Currently, the GLIB emulator reimplements the libmemsvc API. This is a fragile solution, even more so since the API is not fully documented.
I have different ideas in mind for refactoring the memhub code in order to support the GLIB emulator (and the other boards), but I haven't tried them yet. While it should be possible to use inheritance with some kind of factory, my currently preferred idea can be summarized with the following code snippet:
// CTP7
class MemHub
{
memsvc_t handle;
public:
void write(uint32_t address, uint32_t value);
};
// GLIB
class MemHub
{
ipbus_memsvc_t handle;
public:
void write(uint32_t address, uint32_t value);
};
// APx
class MemHub
{
new_memsvc_t handle;
public:
void write(uint64_t address, uint64_t value);
};
As you can see, the public interface is compatible, while the private members are different to cope with different backends.
Actually, I see that the public interface is different, since you change the type, but OK, I understand the point. Though for the, admittedly incomplete, example you've given, I'd find some #ifdef guards far simpler (whether to typedef the type, or to guard the declaration of the handle).
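To illustrate what I mean, a rough sketch (the macros, type aliases, and board-specific handle types are invented for the example, not an existing API):
```cpp
#include <cstdint>

// Sketch only: the GEM_BOARD_* macros and the *_memsvc_t handle types are
// hypothetical; the handles would come from each board's own headers.
#if defined(GEM_BOARD_CTP7)
using memhub_handle_t = memsvc_t;
using reg_word_t      = uint32_t;
#elif defined(GEM_BOARD_GLIB)
using memhub_handle_t = ipbus_memsvc_t;
using reg_word_t      = uint32_t;
#elif defined(GEM_BOARD_APX)
using memhub_handle_t = new_memsvc_t;
using reg_word_t      = uint64_t;   // 64-bit address space on the new boards
#endif

class MemHub
{
    memhub_handle_t handle;
public:
    void write(reg_word_t address, reg_word_t value);
};
```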
I also would like to stress the type difference for the address and value for the APx boards, since these boards will probably use a 64-bit address space. Sure, a real implementation should typedef the type, but this is meant to be able to use the same ctp7_modules code for both 32-bit and 64-bit platforms.
Indeed, and I think getting this done correctly is the most important thing: how do (can?) we achieve maximum compatibility, where "ctp7_modules" developed for the three different back-end types all compile from the same source code, modulo minor architecture-specific differences, and with all differences transparent to the code on the "client" side? I think we should have a plan for how to (if possible) achieve this as a part of this current framework redesign, even if we don't implement it yet for the future board type.
[snip]
~I'm still failing to see how this will be any different from the current workflow, except for adding additional, target-specific directory locations. Regardless of specifying --sysroot=$(PETA_STAGE) or not, I don't find a way (in gcc) to make -I/-L directives relative to the sysroot (obviously it can be manufactured by changing the makefile framework to add some $(SYSROOTPATH) prefix to all package external includes and link dirs, so no need to mention "but, but, but cmake!")~
I agree with you that it is not extremely useful with the Makefile framework. This is just my preferred way of cross-compiling software because it clearly separates the host and the target root filesystems.
Since you are speaking about cmake, creating a fully separated sysroot is extremely useful in this case. Indeed, one can tell cmake where the sysroot is and it will only search for headers and libraries in this directory.
Ha! I did mention it, but only because I preemptively expected a comment ;-)
[edit] e.g., if I try your suggestion (and from the gcc manual, what in principle should work) --sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include for the Zynq
build I get:arm-linux-gnueabihf-g++ -fomit-frame-pointer -pipe -fno-common -fno-builtin -Wall -std=c++14 -march=armv7-a -mfpu=neon -mfloat-abi=hard -mthumb-interwork -mtune=cortex-a9 -DEMBED -Dlinux -D__linux__ -Dunix -fPIC --sysroot=/opt/gem-peta-stage/ctp7 -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include -std=gnu++14 -g -O2 -I/opt/xdaq/include -Iinclude -Iinclude/xhal/extern -c -MT arm/src/linux/arm/common/utils/XHALXMLParser.o -MMD -MP -MF arm/src/linux/arm/common/utils/XHALXMLParser.Td -o arm/src/linux/arm/common/utils/XHALXMLParser.o src/common/utils/XHALXMLParser.cpp In file included from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/c++config.h:430:0, from /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/string:38, from include/xhal/common/utils/XHALXMLParser.h:14, from src/common/utils/XHALXMLParser.cpp:1: /data/bigdisk/sw/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/arm-linux-gnueabihf/include/c++/4.9.2/arm-linux-gnueabihf/bits/os_defines.h:39:22: fatal error: features.h: No such file or directory #include <features.h> ^
features.h is provided by the PETA_STAGE. However, I think there is an error in the gcc options, so the directory is not picked up: -isysroot=/opt/gem-peta-stage/ctp7 should be -isysroot /opt/gem-peta-stage/ctp7. Moreover, the following options are probably useless since they are the defaults: -isysroot=/opt/gem-peta-stage/ctp7 -I=/usr/include -I=/include.
OK, I removed them from the template (had been there historically) and everything seems happy, so I'll just leave them out.
BTW, you probably know it, but you can always check that the include paths are those you expect with:
arm-linux-gnueabihf-g++ --sysroot=/opt/gem-peta-stage/ctp7 -E - -v < /dev/null
OK, now I understand why we linked the version of, e.g., log4cplus and xerces-c to the xdaq versions... the headers are not included in the PETA_STAGE.
Yes, these libraries have been installed manually in "our" PETA_STAGE. We will probably have to package these libraries for the PETA_STAGE (in addition to lmdb).
Indeed, this was why I had hoped that this stage could be provided by UW, and also have it include the headers of all the extra libraries we requested, even though these won't be present on the CTP7 itself. I'll send a message to Jes to see if something like this could be arranged
"It is expected that the
GLIB+docker
environment is identical to the "host" PC in all respects. No support shall hereby be given to any development done onGLIB+docker
under a non-native GEM DAQ OS supported architecture."There, done :-)
Okay, duly noted! One more question though: which CentOS version should be used for the GLIB image? Since CentOS 8 is going to be released next week, the DAQ machine will probably progressively be upgraded.
I would even go so far as to say that the docker image shouldn't contain anything other than the bare minimum to have the RPC service running, and every other library/exe should be included via a bind mount when the container is started.
Indeed. I assume that when you say "every other library/exe", it means what is currently installed into /mnt/persistent?
As you can see, the public interface is compatible, while the private members are different to cope with different backends.
Actually, I see that the public interface is different, since you change the type, but OK, I understand the point. Though for the, admittedly incomplete, example you've given, I'd find some #ifdef guards far simpler (whether to typedef the type, or to guard the declaration of the handle).
Yes, #ifdef guards would work and be simpler. This would however require passing the right macros to the compiler for every library/executable build. Macros could also change in the long term and would require fixes in the build system of every dependent binary.
In the end, this is one possible use case, maybe not the best one, of installing the headers in the PETA_STAGE. More fundamentally, from my experience in cross-compilation, clearly separating the target system files helps with maintainability and avoids nasty bugs. An independent sysroot is a good way of achieving the separation if Debian-style packaging (which is the best I've ever seen) is not available.
Also, have we considered the development flow for a developer working in her local area? If one wants/has to develop for an XHAL/... version which is not the one currently installed, how easy would it be? I don't have a definitive answer to the question nor a complete overview of the changes planned in all repositories, but I think it is worth raising the question. It is only weakly related to the PETA_STAGE question, but it is still relevant, since it may ease or worsen this specific (but important) use case.
I also would like to stress the type difference for the address and value for the APx boards, since these boards will probably use a 64-bit address space. Sure, a real implementation should typedef the type, but this is meant to be able to use the same ctp7_modules code for both 32-bit and 64-bit platforms.
Indeed, and I think getting this done correctly is the most important thing: how do (can?) we achieve maximum compatibility, where "ctp7_modules" developed for the three different back-end types all compile from the same source code, modulo minor architecture-specific differences, and with all differences transparent to the code on the "client" side? I think we should have a plan for how to (if possible) achieve this as a part of this current framework redesign, even if we don't implement it yet for the future board type.
Without knowing how the hardware access will be handled on the new boards, I think it is rather difficult to design a proper interface. As said during the meeting, this would probably be better done in a second migration step, once the information about the new ATCA board is more widely available.
Yes, these libraries have been installed manually in "our" PETA_STAGE. We will probably have to package these libraries for the PETA_STAGE (in addition to lmdb).
Indeed, this was why I had hoped that this stage could be provided by UW, and also have it include the headers of all the extra libraries we requested, even though these won't be present on the CTP7 itself. I'll send a message to Jes to see if something like this could be arranged.
I'm not sure it is a good idea to provide the extra headers and libraries in the official stage while not providing the corresponding libraries in the CTP7 sysroot. I have the feeling it would only lead to confusion if something, library or executable, can be compiled without any additional package, but cannot run on the CTP7 without an additional package. Also, if UW agrees to provide a new stage, it should be quite easy for them to provide a new sysroot since the libraries would already be compiled.
"It is expected that the
GLIB+docker
environment is identical to the "host" PC in all respects. No support shall hereby be given to any development done onGLIB+docker
under a non-native GEM DAQ OS supported architecture." There, done :-)Okay, duly noted! One more question though: which CentOS version should be used for the GLIB image? Since CentOS 8 is going to be released next week, the DAQ machine will probably progressively be upgraded.
They will, but cc7 will be the baseline for QC8 operations until GE11 is done.
At P5 we will migrate to cc8 when the sysadmins migrate the machines, I think not until mid/late 2020 at the earliest, as it also partly depends on how the xdaq team rolls out xdaq15.
All this to say that I don't see a strong reason to push this docker image to cc8 until next year at the earliest.
I would even go so far as to say that the docker image shouldn't contain anything other than the bare minimum to have the RPC service running, and every other library/exe should be included via a bind mount when the container is started.
Indeed. I assume that when you say "every other library/exe", it means what is currently installed into /mnt/persistent?
Yeah, I think so; since you wanted to keep the docker filesystem identical to the zynq, bind mounting /opt/xhal/lib into the /mnt/persistent/gemdaq/lib (or wherever) volume should be straightforward. With this point, I think it definitely makes sense to keep the image OS version tied to the running host OS version.
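e.g., something along these lines (the image name, tag, and mount options are purely illustrative):
```sh
# Sketch only: minimal image with just the RPC service, everything else
# bind mounted read-only from the host at start-up
docker run --rm -it \
  -v /opt/xhal/lib:/mnt/persistent/gemdaq/lib:ro \
  gemdaq/glib-emulator:cc7
```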
As you can see, the public interface is compatible, while the private members are different to cope with different backends.
Actually, I see that the public interface is different, since you change the type, but OK, I understand the point. Though for the, admittedly incomplete, example you've given, I'd find some #ifdef guards far simpler (whether to typedef the type, or to guard the declaration of the handle).
Yes, #ifdef guards would work and be simpler. This would however require passing the right macros to the compiler for every library/executable build. Macros could also change in the long term and would require fixes in the build system of every dependent binary.
In the end, this is one possible use case, maybe not the best one, of installing the headers in the PETA_STAGE. More fundamentally, from my experience in cross-compilation, clearly separating the target system files helps with maintainability and avoids nasty bugs. An independent sysroot is a good way of achieving the separation if Debian-style packaging (which is the best I've ever seen) is not available.
I've actually come to the view that thinking about this problem/question as a purely "cross-compilation" issue is the wrong mindset. Our use case is not (completely) a "traditional" cross-compilation, i.e., building the same library to run independently/standalone on a different architecture. Cross-compilation is really only (the easy) half of the problem; the second, and much more important, part is the compatibility of the interface between the host PC and the various remote targets (though this second part is tightly coupled with your further point about future boards).
Also, have we considered the development flow for a developer working in her local area? If one wants/has to develop for an XHAL/... version which is not the one currently installed, how easy would it be? I don't have a definitive answer to the question nor a complete overview of the changes planned in all repositories, but I think it is worth raising the question. It is only weakly related to the PETA_STAGE question, but it is still relevant, since it may ease or worsen this specific (but important) use case.
For this, changes that are in my latest PR will allow anyone to specify an install location and execute INSTALL_PREFIX=/path/to/local/install make install for any of the packages. One can then override, e.g., the XHAL_ROOT that would point to /opt/xhal with their user-specified location $INSTALL_PREFIX/opt/xhal.
For the PETA_STAGE stuff, one would probably have to set up symlinks to the system-provided locations, but the package-installed items should wind up in the INSTALL_PREFIX tree.
This second point is part of why I am wary of putting the devel headers/libs into the PETA_STAGE, but it can be worked around as mentioned.
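Concretely, the local flow would look something like this (paths are just an example):
```sh
# Sketch only: install into a user area and point dependent builds at it
INSTALL_PREFIX=$HOME/gemdev make install
export XHAL_ROOT=$HOME/gemdev/opt/xhal
# subsequent builds then pick up headers/libs from $XHAL_ROOT instead of /opt/xhal
```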
I also would like to stress the type difference for the address and value for the APx boards, since these boards will probably use a 64-bit address space. Sure, a real implementation should typedef the type, but this is meant to be able to use the same ctp7_modules code for both 32-bit and 64-bit platforms.
Indeed, and I think getting this done correctly is the most important thing: how do (can?) we achieve maximum compatibility, where "ctp7_modules" developed for the three different back-end types all compile from the same source code, modulo minor architecture-specific differences, and with all differences transparent to the code on the "client" side? I think we should have a plan for how to (if possible) achieve this as a part of this current framework redesign, even if we don't implement it yet for the future board type.
Without knowing how the hardware access will be handled on the new boards, I think it is rather difficult to design a proper interface. As said during the meeting, this would probably be better done in a second migration step, once the information about the new ATCA board is more widely available.
Agreed :+1:
Yes, these libraries have been installed manually in "our" PETA_STAGE. We will probably have to package these libraries for the PETA_STAGE (in addition to lmdb).
Indeed, this was why I had hoped that this stage could be provided by UW, and also have it include the headers of all the extra libraries we requested, even though these won't be present on the CTP7 itself. I'll send a message to Jes to see if something like this could be arranged.
I'm not sure it is a good idea to provide the extra headers and libraries in the official stage while not providing the corresponding libraries in the CTP7 sysroot. I have the feeling it would only lead to confusion if something, library or executable, can be compiled without any additional package, but cannot run on the CTP7 without an additional package. Also, if UW agrees to provide a new stage, it should be quite easy for them to provide a new sysroot since the libraries would already be compiled.
I think we have a misunderstanding on what I was suggesting. The extra (here "extra" just means the things we use and want but aren't changing, e.g., log4cplus or xerces-c) libraries must appear in the CTP7 running image; I'm only saying that if UW doesn't want to put all the devel headers in the CTP7 running image, then the linux-stage tarball could (and in my view, should) be set up to include them; effectively, the linux-stage tarball should be the package + package-devel for the CTP7 (or future board).
Okay, duly noted! One more question though: which CentOS version should be used for the GLIB image? Since CentOS 8 is going to be released next week, the DAQ machine will probably progressively be upgraded.
They will, but cc7 will be the baseline for QC8 operations until GE11 is done. At P5 we will migrate to cc8 when the sysadmins migrate the machines, I think not until mid/late 2020 at the earliest, as it also partly depends on how the xdaq team rolls out xdaq15. All this to say that I don't see a strong reason to push this docker image to cc8 until next year at the earliest.
Okay, it will be a CC7-based container for now then. Nevertheless, the host system should not matter with a Docker image, so I would prefer to install the host xHAL packages into the Docker image instead of bind mounting them. One more thing: RHEL/CentOS 8 does not ship Docker, but the Podman suite. Since both tools support the OCI standard they should be compatible, but it will require some testing.
I would even go so far as to say that the docker image shouldn't contain anything other than the bare minimum to have the RPC service running, and every other library/exe should be included via a bind mount when the container is started.
Indeed. I assume that when you say "every other library/exe", it means what is currently installed into /mnt/persistent?
Yeah, I think so; since you wanted to keep the docker filesystem identical to the zynq, bind mounting /opt/xhal/lib into the /mnt/persistent/gemdaq/lib (or wherever) volume should be straightforward. With this point, I think it definitely makes sense to keep the image OS version tied to the running host OS version.
Indeed, I really would like the container image filesystem structure to be identical to the Zynq image because of the scripts called through SSH. Once they are removed/ported to the RPC framework, using a more regular filesystem organization should be preferred. Also, after reflection, I was planning to install the xHAL packages in their standard location (in /opt) in the container. Building a new container to support new xHAL versions is trivial (especially if there is CI/CD). And the developer wouldn't mind if her/his package installation is not persistent.
I've actually come to the view that thinking about this problem/question as a purely "cross-compilation" issue is the wrong mindset. Our use case is not (completely) a "traditional" cross-compilation, i.e., building the same library to run independently/standalone on a different architecture. Cross-compilation is really only (the easy) half of the problem; the second, and much more important, part is the compatibility of the interface between the host PC and the various remote targets (though this second part is tightly coupled with your further point about future boards).
Okay, I see now. After a few tests, and without increasing the number of packages (e.g., adding an xhal-<subpackage>-headers package), I cannot think of anything better than the current proposal. The compatibility issues will always remain at runtime, where packages might not be up-to-date on the different systems.
Also, have we considered the development flow for a developer working in her local area? If one wants/has to develop for an XHAL/... version which is not the one currently installed, how easy would it be? I don't have a definitive answer to the question nor a complete overview of the changes planned in all repositories, but I think it is worth raising the question. It is only weakly related to the PETA_STAGE question, but it is still relevant, since it may ease or worsen this specific (but important) use case.
For this, changes that are in my latest PR will allow anyone to specify an install location and execute INSTALL_PREFIX=/path/to/local/install make install for any of the packages. One can then override, e.g., the XHAL_ROOT that would point to /opt/xhal with their user-specified location $INSTALL_PREFIX/opt/xhal. For the PETA_STAGE stuff, one would probably have to set up symlinks to the system-provided locations, but the package-installed items should wind up in the INSTALL_PREFIX tree. This second point is part of why I am wary of putting the devel headers/libs into the PETA_STAGE, but it can be worked around as mentioned.
Ok, great! I guess I'll have to wait for the changes to be propagated in all repositories, but I'll definitely test (and use) the local installation. Also, do you have any reason not to use the standard variables PREFIX and DESTDIR in the Makefiles?
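(For reference, what I mean by the standard variables, assuming the Makefiles were to honour them: prefix/PREFIX is the run-time location baked into the package, while DESTDIR is a staging offset used only at install/packaging time, e.g. inside an RPM %install section.)
```sh
# Sketch only: GNU-style install convention
make PREFIX=/opt/xhal
make PREFIX=/opt/xhal DESTDIR="$RPM_BUILD_ROOT" install
# files are staged under $RPM_BUILD_ROOT/opt/xhal but reference /opt/xhal at run time
```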
I think we have a misunderstanding on what I was suggesting. The extra (here "extra" just means the things we use and want but aren't changing, e.g., log4cplus or xerces-c) libraries must appear in the CTP7 running image; I'm only saying that if UW doesn't want to put all the devel headers in the CTP7 running image, then the linux-stage tarball could (and in my view, should) be set up to include them; effectively, the linux-stage tarball should be the package + package-devel for the CTP7 (or future board).
Indeed, I misunderstood what you were suggesting. I totally agree with your suggestion (if there is support from UW).
Irrelevant with the new mono-repository structure and simplified RPM scheme.
Brief summary of issue
Starting an issue based on discussion in #131 regarding structure of the package.
Types of issue
Current Behavior
Currently, xhal is a package that provides libraries for both PC and AMC. The libraries have some differences in the symbols, and the arm libraries (and any arm-specific headers) need to be present on the PC during development for linking during the cross-compilation step. The current mechanism to do this ensures that the devel RPM package is only built after the arm package is compiled. It then puts the arm library into the RPMBUILD tree in an xhal/lib/arm folder that can be included in the linker path when compiling for arm on x86_64.
Now that there are also headers that have only been placed in the arm package, these would also need some special packaging, possibly a similar copy mechanism into include/arm/xhal.
This issue is designed to investigate possible alternate solutions and adopt a uniform policy for any other package that will also provide libraries/headers across architectures (ctp7_modules will be one that only provides the headers).