machinekit / machinekit-hal

Universal framework for machine control based on Hardware Abstraction Layer principle
https://www.machinekit.io

Build system needs replacing #200

Closed · zultron closed this issue 2 years ago

zultron commented 5 years ago

There have been a number of discussions about replacing the build system, but I can't seem to find an issue for it. (Apologies for the length; the deficiencies are probably described in excessive detail because I've encountered scepticism about whether the current build system in fact has any major problems at all.)

The problem

The build system is decrepit and needs replacing. The problems of the build system can be explained in the context of its origins.

The end result, the build system of today, is very difficult to modify. It's difficult to understand the structure of dependencies, files and variables in order to know how to introduce the change at all, much less where to add it so that it doesn't make the structure even more disorganized and hard to understand for the next person.

The only way to fix it at this point is by replacing it. Fortunately, this will be easier today than ever before. The split machinekit-hal repository reduces the size of the task. The upcoming per-flavor module merge will simplify build rules. Eliminating kernel threads also eliminates the requirement to support kbuild. And today there are two options that are already 50% complete.

Possible build system replacements

Among the alternatives, these two make the most sense: both have an enormous head start, both are widely used and well supported in general, and the Machinekit community has some degree of experience with each.

ArcEye commented 5 years ago

The time to do it would certainly be after the merge of the mk-single-modules-dir work. We have already removed the majority of the kbuild and other historical clutter and you have removed the recursive for-each-flavor build.

CMake is not perfect, but is widely adopted and understood and does come with a lot of useful macro functions.

It would also be a good time to start at the beginning and create a build from the ground up, instead of just trying to fit the existing Submakefiles into CMakeLists.txt files.

If all binaries and libs were built within their respective directories, instead of objects and deps being moved to temporary dirs and built en masse with deceptively short commands, the whole build process would be a lot easier to understand and to alter when required. (Obviously some mechanism will be needed where the output from one dir is a dependency of another, etc.)

cerna commented 4 years ago

I started looking into this issue in connection with machinekit/machinekit-hal#268 - as one deeply affects the other, and this has been such a long-standing problem that it is probably high time to solve it. (Plus, making heads or tails of the Makefile/Configure system is quite a harrowing task now.)

Currently, pull request #250 is open, which implements a new CMake build system for Machinekit-HAL. However, it has several problems:

So, all in all, I think the best course of action forward is to gut #250 for the good parts and assemble a new pull request.


On the point of specific technologies, I think there are only two viable alternatives: Modern CMake and Meson. By Modern CMake I mean something post-3.10. Debian distributes 3.16 in its backports repositories as far as I know, but given the availability of Snap- and pip-distributed CMake in (always) the latest version, getting a later version is not an issue - which is why I would suggest developing against the latest 3.18 version first and then backporting it to lower ones where viable (I think there is nothing really serious which would prevent this).

The Meson option is a possibility because it is under continuous development, the user base is growing, and recently some heavy hitters started using it - so the chances that it will stay around for a prolonged period of time are quite high. The syntax is a bit better than CMake's, I think (though there was recently a discussion in the CMake community about a new, more deterministic-looking syntax, which would make this a moot point); however, it is a newer product, and as such many issues which CMake has already had to solve are so far unsolved (like the global-only namespace and such).

CMake also supports both Make and Ninja, whereas Meson only supports Ninja.

Anything else (namely the Autotools) seems to be on the way out, with a shrinking user base (migrating primarily to Meson and CMake).


Additionally, I think it is time to do some reorganizing of the directory tree structure of the Machinekit-HAL repository. (Maybe at the same time as a general code reformatting - I am not so sure how good git is at tracking moved files, so one breaking change is better than two.) Because - at least to me - it looks (in some respects) a bit messy, and this represents another barrier for first-time developers. (Or at least it was an additional entry barrier for me when starting out.) CMake doesn't impose any specific rules on the structure of the repository in relation to the build directory structure or the installation directory structure. In other words, targets can output their artifacts to the same directory (flattening the original structure) and install wherever they want (additionally flattening the structure, for example, or deepening it arbitrarily).
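As a minimal sketch of that decoupling (target and path names here are illustrative, not the actual Machinekit-HAL ones), build artifacts from sources nested anywhere in the tree can be collected flat and installed flat:

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)   # flatten build outputs
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)

add_executable(halcmd hal/utils/halcmd_main.c)    # source nested deep in the tree
install(TARGETS halcmd RUNTIME DESTINATION bin)   # installed flat into ${prefix}/bin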

What I have in mind:

From what I have written it should be clear that I am talking only in terms of targets, which is the Modern CMake approach (or at least that is what the tutorials claim) and the reason why the 2.8 branches mentioned in the first post of this issue are very probably useless.

cerna commented 4 years ago

Looking some more into this problem, I came up with a few points which I would like to discuss here in the open (better now than later):

When first encountering Machinekit's source code (at least this was the case for me), it is quite confusing to work out what uses what and how it is all connected together - basically, how the application is structured. It doesn't seem very transparent. There is the high-level abstraction of RTAPI and HAL; however (probably because of non-existent documentation), it's not clear how you get from that to the binaries (or, if you want to make some changes, where to put them). So I have been thinking about toning down the high abstraction a bit and structuring the code more around the output. (In no way do I want to remove the HAL or RTAPI (runtime) differentiation.) More to the point, having:

src/modules
src/libraries
src/applications

Here src/modules would be for module-type libraries - what you dlopen or modprobe and so on. src/libraries would be for the normal libraries - shared, archive, object collections. src/applications would then be for runnable binaries.

So then would be:

src/modules/hal/components/and2v2
src/modules/hal/drivers/hostmod2 (maybe branch Hostmod2 somehow too)
src/modules/(kernel|hal)/shmdrv
src/modules/hal/components/examples/icomp
src/libraries/(some tree structure)/halcmd
src/libraries/(some tree structure)/rtapi_math
src/libraries/(some tree structure)/mkini
src/applications/(some tree structure)/pci_read
src/applications/(some tree structure)/inivar
src/applications/(some tree structure)/rtapi_app

And so on. In other words, I think the HAL is a lot more comprehensible when one sees that it is just a bunch of libraries dlopen()ed into some process's memory, and that Machinekit(-HAL) is just rtapi_app and rtapi_msgd running together.

The question is what to do about the so-called userspace components (I say "so-called" because, even though I know the LinuxCNC history behind that term, in terms of Modern Machinekit I consider it quite a terrible name with the potential to send some poor soul on a cyclical Google quest where nothing will make sense). These are basically executables which are fork'n'exec'ed into a new process. The problem is, you cannot execute them as is - or rather, you can, but some strange behaviour will happen. So they are not executables and not modules, per se, but I think they are nearer to modules than to executables. So the simplest thing I can come up with is to put them into src/modules and internally compile them as shared libraries (which, with relocatable code, should still behave the same).


When installing, Machinekit puts its code into usr/bin and usr/libexec/machinekit (plus others which are not that important right now). The former is in PATH, the latter is not. The FHS says about libexec:

/usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/libexec.

Applications which use /usr/libexec in this way must not also use /usr/lib to store internal binaries, though they may use /usr/lib for the other purposes documented here.

Which is fine. The problem is, Machinekit uses it for executables like rtapi_app or rtapi_msgd, which have help output and which you can pretty much run from a terminal. (And Machinekit is calling them from scripts!)

Then there are programs in usr/bin which cannot be run outside halcmd (I am talking about you, independently scheduled components, known in other circles as userspace components).

To me, this sounds like an inconsistency.

Well, more tomorrow.

zultron commented 4 years ago

I see (I think you do too) multiple issues mashed together here. First, about directory structure:

src/modules
src/libraries
src/applications

Here src/modules would be for module-type libraries - what you dlopen or modprobe and so on. src/libraries would be for the normal libraries - shared, archive, object collections. src/applications would then be for runnable binaries.

I'm all for reorganizing, and this top-level structure makes sense to me: simple and obvious.

src/modules/hal/components/and2v2
src/modules/hal/drivers/hostmod2 (maybe branch Hostmod2 somehow too)
src/modules/(kernel|hal)/shmdrv
src/modules/hal/components/examples/icomp
src/libraries/(some tree structure)/halcmd
src/libraries/(some tree structure)/rtapi_math
src/libraries/(some tree structure)/mkini
src/applications/(some tree structure)/pci_read
src/applications/(some tree structure)/inivar
src/applications/(some tree structure)/rtapi_app

Nits:

This looks like a fine idea to me, and it fits well into a build system replacement project.


About installation locations:

/usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/libexec.

Which is fine. The problem is, Machinekit uses it for executables like rtapi_app or rtapi_msgd, which have help output and which you can pretty much run from a terminal. (And Machinekit is calling them from scripts!)

Definitely fix that. This could be addressed in this issue as part of a new build system, or in a new issue, implemented in the current or a future build system.

Then there are programs in usr/bin which cannot be run outside halcmd (I am talking about you, independently scheduled components, known in other circles as userspace components).

Is this question primarily about installation locations, or is it more about the C-language userspace comps issue (below)?


About C-language userspace comps:

The question is what to do about the so-called userspace components (I say "so-called" because, even though I know the LinuxCNC history behind that term, in terms of Modern Machinekit I consider it quite a terrible name with the potential to send some poor soul on a cyclical Google quest where nothing will make sense). These are basically executables which are fork'n'exec'ed into a new process. The problem is, you cannot execute them as is - or rather, you can, but some strange behaviour will happen. So they are not executables and not modules, per se, but I think they are nearer to modules than to executables. So the simplest thing I can come up with is to put them into src/modules and internally compile them as shared libraries (which, with relocatable code, should still behave the same).

Here, you're talking about the C-language binary userspace comps, not the Python-language script userspace comps. The change you're proposing sounds like you mean to stop fork()ing off these comps, and instead dlopen() them and run the main() function in a new (non-RT, I guess) thread within rtapi_app. Is that correct?

Would we lose support for independently-executable C-language userspace comps? Does EMC build any of those (I haven't checked)? Will that prevent any exotic use cases, such as building a HAL comp into a plugin (.so module) for another application?

What is the difference that makes C-language userspace comps unable to run independently, whereas Python-language comps can?

How will C-language userspace comps be run from HAL, if not from loadusr()? (Obviously not from loadrt().)

Why is this problem of C-language userspace comps part of this issue? I can't tell why changing how those are loaded needs to be done in concert with replacing the build system. I'm absolutely not trying to shoot down the idea; in fact, I think it sounds interesting, creative, and outside the box. I'm just trying to understand exactly what it is and how it fits along with everything else.

cerna commented 4 years ago

I see (I think you do too) multiple issues mashed together here.

I do. Certainly. The problem is, to get green tests and build packages (and stay inside the C4-mandated rules) I need to solve a multitude of different things at once - as they are connected together, and this is quite a big endeavour.

Hopefully keep things as flat as possible; src/modules/ contains little other than hal/, so just remove that directory level.

I would love to minimize future occurrences of "I should have implemented it differently", so I have been thinking that a tree structure with (for now) un-branching nodes might be helpful for this.

In particular, I was theorizing about @the-snowwhite's FPGA HAL idea: to allow an AXI connection between the FPGA and the ARM core, one will need a kernel-space-originated memory block and atomic access functions. Hence, a kernel module.

I am also thinking that there is no reason why Machinekit-HAL cannot combine kernel-space modules with user-space ones. (For example, for use with parport and a co-kernel real-time system, similar to how the triple buffer components are implemented now.) It would then probably use a user-space module as the controlling one. But still, a kernel-space memory origin and the actual active kernel module/driver would be needed.

The bottom line is, it is there for the possibility of future expansion. But I can try to keep things flat.

(I would like to avoid something like the current situation in src/hal, which I consider extremely confusing.)

I think of rtapi_app and others more as "executables" than "applications".

No problem, I took it from the suffix of rtapi_app.

Where does the stuff in lib/python/ go? Maybe src/python? I've always thought it was out of place in its current location.

I look at the three main folders (executables, libraries and modules) not from a C/C++ viewpoint, but more from Machinekit's viewpoint. Maybe I wasn't completely clear, but I want every smallest self-contained piece to be in its own folder, pulled in with add_subdirectory() and thus with its own variable scope and such. In this scope will be its own CMakeLists.txt, which will implement all the logic for the given piece in a hopefully OOP fashion. So Python code will go into src/libraries if it is some kind of library (it will be just a library for other Python code - what is the correct nomenclature in Python, a module?) or into src/executables if it is an executable program. (Let's call "executable" something which has its own process.)
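As a rough sketch of that per-unit layout (the directory layout and CMake contents below are purely illustrative, not the actual repository), the parent CMakeLists.txt would only pull in the units, and each unit's own CMakeLists.txt would define its target:

# src/CMakeLists.txt - each self-contained unit gets its own variable scope
add_subdirectory(libraries/mkini)
add_subdirectory(executables/rtapi_app)
add_subdirectory(modules/managed/components/and2)

# src/libraries/mkini/CMakeLists.txt - all logic for this one unit lives here
add_library(mkini SHARED mkini.c)
target_include_directories(mkini PUBLIC include)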

I would like to avoid creating language-specific directories, as this can limit what languages can be added in the future. (And it again presents the "I should have implemented it differently" problem.)

Where does stuff in scripts go?

The scripts/ directory is now simply a melting pot for everything which is not C or C++ (or Python) code - even more so with me adding the Docker build scripts into this mess.

I think scripts which are programs (or are used as programs) should go into src/executables. I do think that, with generator expressions, suitable custom targets can be created which will generate usable shell executables for both the build-stage folder and the installation. (It's mainly just path substitution anyway.)
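A minimal sketch of that path-substitution idea - using plain configure_file() instead of custom targets with generator expressions, and with a hypothetical halrun.sh.in template that does not exist in the repository - could look like this:

# Build-tree copy: substitute paths pointing into the build folder
set(MK_EXEC_DIR ${CMAKE_BINARY_DIR}/bin)
configure_file(halrun.sh.in ${CMAKE_BINARY_DIR}/bin/halrun @ONLY)

# Install copy: substitute paths pointing into the install prefix
set(MK_EXEC_DIR ${CMAKE_INSTALL_PREFIX}/bin)
configure_file(halrun.sh.in ${CMAKE_BINARY_DIR}/to-install/halrun @ONLY)
install(PROGRAMS ${CMAKE_BINARY_DIR}/to-install/halrun DESTINATION bin)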

Shell scripts which are sourced by other scripts or which export functions for other shell scripts (I am not sure whether those even exist) are kind of like libraries, and I think they should be put into src/libraries.

Not sure where to put the Docker stuff.

I have already filtered off the Debian package build stuff into the debian/ folder - I think this approach should be applied to all the distributions Machinekit will potentially target in the future. (Like redhat/ for RHEL/Fedora/etc., and so on - if that is possible.)

I am also not sure where to put the assembler stuff (mainly because I don't read assembler, so I don't know exactly what it is doing).

Is this question primarily about installation locations, or is it more about the C-language userspace comps issue (below)?

This question was mainly about install locations and how the targets should be built. I was thinking that these programs (userspace components) should not be in $PATH - given that they cannot be run on their own and cause, for example, terminal freezes. On the other hand, rtapi_app should be in PATH. The userspace components should somehow signal that they are modules and not executables. So I was thinking of a cheating solution (I wasn't precise enough in my previous post, sorry): if you create a shared library (.so) compiled with -pie -fPIC and linked with -Wl,-E, you get a shared library you can link against but also one you can normally run and execv() - though I'm not sure whether there is some Machinekit gotcha that makes this impossible.
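In CMake terms, that cheat might look roughly like the sketch below (the component name is hypothetical, and whether the resulting binary really behaves well under both execv() and linking is exactly the open question above):

# Requires CMake >= 3.13 for target_link_options()
add_library(example_usercomp SHARED usercomp.c)      # hypothetical userspace component
set_target_properties(example_usercomp PROPERTIES
    POSITION_INDEPENDENT_CODE ON)                    # -fPIC
target_link_options(example_usercomp PRIVATE
    -pie                                             # give the .so an executable entry image
    LINKER:-E)                                       # -Wl,-E: export dynamic symbols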

That way it would be almost the same thing, but still not a program.

And nobody tries to use a shared library (a .so file) as an executable, so it will clearly signal that you should not try to run it. (Even though you can - but I don't think many people will try.)


Here, you're talking about the C-language binary userspace comps, not the Python-language script userspace comps. The change you're proposing sounds like you mean to stop fork()ing off these comps, and instead dlopen() them and run the main() function in a new (non-RT, I guess) thread within rtapi_app. Is that correct?

Even though I am afflicted with a serious case of I-would-have-implemented-it-differently-itis, I consider this out of scope for this issue. In the future, I would like to implement multiple rtapi_app processes for one instance, with different possible flavours which can run at once - one would always be Vanilla POSIX, where non-real-time components would run in threads the same way as real-time ones - basically an enhancement of the current, unused solution with SCHED_OTHER threads. But that is future talk.

Would we lose support for independently-executable C-language userspace comps? Does EMC build any of those (I haven't checked)? Will that prevent any exotic use cases, such as building a HAL comp into a plugin (.so module) for another application?

No. I hope not. I have no idea. Building a HAL component will hopefully become a task of adding some INTERFACE target_link_libraries() call, or maybe of using some CMake module function.

What is the difference that makes C-language userspace comps unable to run independently, whereas Python-language comps can?

One of my wants is to have as language-agnostic a solution as possible. With normally scheduled components, I think that is possible. But I think you misunderstood me here; I don't want to make this big a change now.

How will C-language userspace comps be run from HAL, if not from loadusr()? (Obviously not from loadrt().)

It won't be any different. The loadusr() function will still be there. But if we are talking about the multi-rtapi_app idea, the type of thread will be the differentiating factor.

Why is this problem of C-language userspace comps part of this issue?

It really isn't. The only part that is relevant to this issue is how to build the userspace component targets.

cerna commented 4 years ago

Some more ideas/questions I have about the CMake switch:

The current build system rules specify some compile flags, like DEBUG or turning off optimization for given objects. I am wondering how important these are for the HAL/RTAPI/real-time/whatever targets. Draft #250 does not implement any logic related to CMAKE_BUILD_TYPE (or at least I did not see it), but I think it is quite important.

As a precaution, I would disallow configuring without specifying CMAKE_BUILD_TYPE, or with "" specified as the build type.
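A minimal guard along those lines (just a sketch of the precaution, nothing Machinekit-specific) could be:

# Refuse to configure when no build type was chosen (single-config generators)
if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
    message(FATAL_ERROR
        "CMAKE_BUILD_TYPE is not set; configure with e.g. -DCMAKE_BUILD_TYPE=RelWithDebInfo")
endif()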

The defaults on my computer are:

CMAKE_C_FLAGS_DEBUG is -g
CMAKE_C_FLAGS_RELEASE is -O3 -DNDEBUG
CMAKE_C_FLAGS_RELWITHDEBINFO is -O2 -g -DNDEBUG
CMAKE_C_FLAGS_MINSIZEREL is -Os -DNDEBUG
CMAKE_CXX_FLAGS_DEBUG is -g
CMAKE_CXX_FLAGS_RELEASE is -O3 -DNDEBUG
CMAKE_CXX_FLAGS_RELWITHDEBINFO is -O2 -g -DNDEBUG
CMAKE_CXX_FLAGS_MINSIZEREL is -Os -DNDEBUG

There is of course the possibility of changing them all or of changing target-specific flags. However, which targets should get what? (I would probably copy the optimization levels from the current build system, at least for the real-time-related targets.)


Machinekit-HAL's libraries currently use a weird combination of naming rules - libmkini.so, libmtalk.so, libmachinekitsomething.so, etc. I like things orderly, so I would vote for renaming all libraries to the libmachinekitsomething.so pattern.


How does this work with Debian packages and CMake target scripts? Looking through the packages, I cannot see any WhateverConfig.cmake or WhateverVersion.cmake among the installed files - yet ZeroMQ, for example, has one.

CMake has the ability to automagically create these files from targets when installing. I think this functionality should also be helpful when converting Machinekit-CNC to the CMake build system.

However, how should these be installed in a .deb?
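(For reference, the generation itself is the standard install(EXPORT) mechanism - a rough sketch with illustrative package and target names follows - and on the Debian side such generated .cmake files typically ship in the corresponding -dev package, which is presumably where the ZeroMQ one comes from.)

# Export installed targets so downstream projects can find_package() them
install(TARGETS machinekit_hal EXPORT MachinekitHALTargets
        LIBRARY DESTINATION lib
        ARCHIVE DESTINATION lib
        RUNTIME DESTINATION bin
        INCLUDES DESTINATION include)
install(EXPORT MachinekitHALTargets
        NAMESPACE MachinekitHAL::
        DESTINATION lib/cmake/MachinekitHAL)
# A MachinekitHALConfig.cmake including the exported targets file (plus a version
# file from write_basic_package_version_file()) would be installed alongside.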


Draft #250 is big on compiling the C/C++ files into object libraries, which are basically collections of .o object files on which the executable and library targets then depend (and, because of the 3.0.2 minimum CMake version, it uses the older syntax with generator expressions). The question is: is this done because the current build system does it this way, or is it a design which has some (to me hidden) advantages or benefits?

I ask because I would prefer to specify (and compile) sources directly in the targets, and if some targets use the same source file (like pci_read and pci_write), to compile it into a special archive library (which would not be installed, only used during the build).
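For illustration, that preference might look like the sketch below (the file and target names are made up for the example):

# Shared sources go into a build-only static helper library instead of an OBJECT library
add_library(pci_common STATIC pci_common.c)        # never installed, only linked below
add_executable(pci_read pci_read.c)
add_executable(pci_write pci_write.c)
target_link_libraries(pci_read PRIVATE pci_common)
target_link_libraries(pci_write PRIVATE pci_common)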

zultron commented 4 years ago

In your last reply to my comment, +1 on everything. I like the thinking.

Here, you're talking about the C-language binary userspace comps, not the Python-language script userspace comps. The change you're proposing sounds like you mean to stop fork()ing off these comps, and instead dlopen() them and run the main() function in a new (non-RT, I guess) thread within rtapi_app. Is that correct?

[...] I consider this out of scope for this issue.

Good. My questions that followed this one all assumed otherwise, and are therefore moot. Sorry to make you answer them.

zultron commented 4 years ago

Draft #250 is big on compiling the C/C++ files into object libraries, which are basically collections of .o object files on which the executable and library targets then depend (and, because of the 3.0.2 minimum CMake version, it uses the older syntax with generator expressions). The question is: is this done because the current build system does it this way, or is it a design which has some (to me hidden) advantages or benefits?

I ask because I would prefer to specify (and compile) sources directly in the targets, and if some targets use the same source file (like pci_read and pci_write), to compile it into a special archive library (which would not be installed, only used during the build).

Maybe @kinsamanka could chime in here, but my understanding is the CMake implementation in #250 tried to implement the inner workings of the current build system very faithfully. I would prefer the new build system to follow the usual CMake conventions, like specifying the sources of a target and letting CMake figure out how to compile and link it; after all, that's what CMake is good at.

Related to this, #250 also faithfully reimplements the RIP build. One of the opportunities I saw in redoing the build system was getting rid of that forever. CMake should be able to build such that, after mkdir build && cd build && cmake .. && make, the project can run right out of the build/ directory. A subsequent make install will then install to $prefix with no need to reconfigure or rebuild. This is not just convenient for development; it also enables running tests as part of the package build, so CI can build packages and run tests in a single step - much simpler than the current workflow.
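For what it's worth, stock CMake RPATH handling already behaves roughly this way; a sketch of the relevant settings (nothing Machinekit-specific assumed):

# Build-tree binaries keep an RPATH into the build tree, so they run in place;
# at `make install` CMake rewrites the RPATH for the installed location.
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)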

kinsamanka commented 4 years ago

The key issue that bugged me with the CMake conversion is how to keep the git history intact. I've tried to keep the changes less intrusive so that a simple rebase is all that's needed to keep it updated.

#250 closely follows the original build, minus the redundant steps and with some unnecessary linking cleaned up. This is basically a RIP build; make install just modifies the RPATH variable before installation.

cerna commented 4 years ago

@kinsamanka, thank you for chiming in.

The key issue that bugged me with the CMake conversion is on how to keep the git history intact.

Do you mean blameability by this?

I've tried to keep the changes less intrusive so that a simple rebase is all that's needed to keep it updated.

Machinekit is now much calmer waters than it was in the past, so I am just planning to do a big-bang switch and be done with it.

cerna commented 3 years ago

So, I have been working on the whole "let's redo the source tree" project, which I think needs to be done. (I believe it is the most important task which needs to be done right now.)

And with all due respect to all parties involved, I consider the current state extremely chaotic, messy and borderline insane. (I am stating this now so anybody feeling insulted or thinking I am not right can open the discussion and tell me why I am wrong.)

Given the setup presented in previous posts, the new tree should look something like this:

machinekit-hal/
-- src/
---- executables/
------ ...
---- libraries/
------ library/
-------- include/
---------- header.h
-------- config/
---------- config_file_needed
-------- documentation/
---------- documentation_and_explaining_file_which_is_now_just_sitting_in_the_folder
---------- example
-------- source.c
-------- CMakeLists.txt
---- modules/
------ managed/
-------- components/
---------- and2v2/
------------ ...
---------- component/
------------ include/
-------------- header.h
------------ documentation/
-------------- example_hal_file
------------ source.c
------------ CMakeLists.txt
-------- drivers/
---------- driver/
------------ include/
-------------- header.h
------------ documentation/
-------------- example_hal_file
------------ source.c
------------ CMakeLists.txt
---------- driver_icomp/
------------ source.icomp
------------ CMakeLists.txt
------ unmanaged/
-------- module/
---------- include/
------------ header.h
---------- documentation/
------------ example_hal_file
---------- source.c
---------- CMakeLists.txt
-------- module_py/
---------- setup.py
---------- documentation/
------------ example_hal_file
---------- source.py
---------- CMakeLists.txt

(And so on.)

It is basically a workspace approach, flattened in comparison with the current system. The idea is that, even though there are more folders under one root, it will still help newcomers to better understand what is available and how it all comes together.

(And the workspace model already has a precedent in Machinekit-HAL: machinetalk/proto [I know it's a special case ;-)].)

Everything will be accessed through CMake targets (shared libraries for modules; shared libraries and archive libraries for the normal libraries; interface libraries for headers which for some reason need to be separate), with specified PRIVATE and PUBLIC headers. I would like to avoid object library targets, which will require separating some code into its own library (as multiple source.c files are now part of multiple ELF binaries).


There are a couple of questions I have (or rather things I would like to point out so somebody can protest/present their opinion):

  1. There are a lot of "documentation" files, tutorials and such strewn through the codebase. These can be put into the machinekit-docs repository or left in documentation/ folders. I think they should be left in the codebase as developer commentary for other developers. However, tutorials should be put into the machinekit-docs repository.

  2. V1 modules which already have a V2 version should be removed to lighten the codebase. I know the V2 API is not finished, but let's be honest, at this point it is not going to be finished in one go. So I think the better solution is to keep developing V2 in place.

  3. What to do with nanopb? Machinekit-HAL carries an old version in its codebase with some patches applied. This is quite unsustainable in the long term.

  4. There is a lot of flotsam in the codebase. Many of these files are not compiled today and there are no rules for them. I don't think anything which has no compilation rules should be in the post-CMake repository. So what to do about it: make rules for them and compile them regularly, or delete them? These files include (not a complete list):

    userpci/
    -- firmware.c
    -- device.c
    -- string.c
    shmem/
    -- common.h
    -- shmemtask.c
    -- shmemusr.c
    timer/
    -- timertask.c
    chkenv.c
    test_rtapi_vsnprint.c (<- there is even a test for this in the runtest test-suite, but no compilation rules)

cerna commented 3 years ago

So, a few updates for whoever is watching this:

One of the biggest problems with the current Machinekit-HAL build system is the differentiation between PUBLIC and PRIVATE header files. (Think of PUBLIC headers as the interface or API of a shared library or MODULE library, used by the higher-level binaries which depend on the library, whereas a PRIVATE header is used only for the build inside the unit - here a shared library or MODULE library.) The problem is that, with the way Machinekit-HAL dumps all header files into one 'include' directory during the build, there is no real separation. There is also no distinction between forward declarations, inlined implementations and standard full-scale implementations in header files. How and whether to separate those is a question for another time; what I think should be solved now is which folders inside the units these headers go into.

Let's say I put them all in an 'include' directory: then during an in-source-tree build the depending modules will have access to both the PUBLIC and PRIVATE headers, as there is no possible separation in GCC/Clang/C/C++, but an out-of-source build using the installed headers will only be able to get the PUBLIC set - and I see that as a possible point of failure. There is also the possibility of two units having identically named headers. Machinekit-HAL usually uses a prefix pattern like hal_*.h, but not always. Other software projects (ROS2, for example) use the path <package_name>/include/<package_name>/header.h, and source files then #include <package_name>/header.h, which works both in an in-source-tree build and when building against installed headers. But you cannot have PUBLIC headers in <package_name>/include/<package_name>/header.h while keeping PRIVATE headers in <package_name>/include/header.h, because you would use just the <package_name>/include folder as the header search path; one needs two distinct folders. The solution I can think of is to use <package_name>/interface/<package_name>/header.h for PUBLIC headers and <package_name>/include/header.h for PRIVATE ones. Of course, it is ugly. One could also keep the PRIVATE headers in the <package_name>/src folder, but that is pretty ugly too.
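Expressed as a CMake sketch (the unit name mkini and the paths are illustrative only), the ROS2-style layout maps onto BUILD_INTERFACE/INSTALL_INTERFACE generator expressions like this:

# PUBLIC headers live in include/mkini/, PRIVATE ones next to the sources in src/
add_library(mkini SHARED src/mkini.c)
target_include_directories(mkini
    PUBLIC
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>   # in-source-tree builds
        $<INSTALL_INTERFACE:include>                             # builds against installed headers
    PRIVATE
        ${CMAKE_CURRENT_SOURCE_DIR}/src)
# Consumers and the unit itself then write: #include <mkini/header.h>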

The second issue and ugliness is the MODULE library / shared library pattern. (Part of it was explained in issue #346.) Currently, that is mainly the Runtime (rtapi.so and libhalulapi.so) and HAL (hal_lib.so and libhal.so). Both have PUBLIC interface header files, but each has a completely different method of use. The shared libraries are linked into the higher-level binary (you can look and see it with ldd). The MODULE libraries cannot be seen in any way, as they are loaded into rtapi_app dynamically with RTLD_GLOBAL; the "linking" happens at runtime, and you only need the PUBLIC header files. Thus you cannot use target_link_libraries(<target> PRIVATE hal_module), because then you could see hal_lib.so in the ldd output. But you do want to use target_link_libraries(), as that is part of every basic CMake tutorial, and I would like to avoid the problem of the original Machinekit/LinuxCNC, where industry-standard nomenclature acquired a new, confusing meaning. (Of course you can use generator expressions and get the INCLUDE_DIRECTORIES property from hal_module directly, but this is ugly and complicated for somebody who just wants to create a HAL module. Halcompile is a crazy crutch which should be avoided at all costs, or at least be only a secondary option.)

The way to solve this (still ugly, but to a lesser degree) is to actually have three libraries: one INTERFACE library holding only the PUBLIC headers, one shared library which transitively publishes the PUBLIC headers, and one MODULE library which does not publish anything. Everybody who then wants to use symbols from the MODULE library depends on the INTERFACE library. That way, using target_link_libraries() on the MODULE library will fail on missing headers, and users will immediately know there is some issue. The best option would be to teach the MODULE library to simply never link during target_link_libraries(), but that seems impossible at the moment. People seem to want it, though.
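A sketch of that three-library pattern (the target names are illustrative, not the actual Machinekit-HAL targets):

# 1) INTERFACE target carrying only the PUBLIC headers
add_library(hal_api INTERFACE)
target_include_directories(hal_api INTERFACE
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)

# 2) Ordinary shared library, re-publishing the headers transitively
add_library(hal SHARED hal_lib.c)
target_link_libraries(hal PUBLIC hal_api)

# 3) MODULE library, dlopen()ed by rtapi_app with RTLD_GLOBAL; publishes nothing
add_library(hal_module MODULE hal_lib.c)
target_link_libraries(hal_module PRIVATE hal_api)

# A HAL component relying on symbols resolved at runtime depends only on the
# header-only INTERFACE target, so nothing extra shows up in its ldd output:
add_library(and2 MODULE and2.c)
target_link_libraries(and2 PRIVATE hal_api)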

cerna commented 2 years ago

With the merge of #349, most of the points of this issue have been solved. The single remaining glaring issue is that the hairy generation of configuration variables for running from the INSTALLED and BINARY trees is still required, but this is going to be solved in due time with the introduction of libelektra as the configuration backend.

I am sure I introduced a few atrocities in the process of the change, but these will have to be ironed out with time and use.

From the top-down viewpoint, this is done.