cvra / build-system

A custom build system exploiting CMake

Initial impressions #1

Open antoinealb opened 10 years ago

antoinealb commented 10 years ago

First of all, thanks for this system; it must have been pretty hard to write all this shell code ;)

I do, however, have some concerns:

  1. Lack of documentation. Examples are a good idea, but a general explanation of what it does would be great. I see git in the mix; does it download dependencies automagically, for example?
  2. Mixing of documentation and code (with "do not edit below" comments). It would be cleaner to have module definitions in a config file (mmm, yaml) with the build system kept separate.
  3. I would have preferred a proper template engine (Jinja, for example) over sed + bash magic.
  4. No tests ;)

I feel like this is a prototype (a good one), but as it stands I don't think it can handle production use.

pierluca commented 10 years ago

You are correct about it being prototype-ish, even though I'm relatively confident we could depend on it. I'm used to bash, and it wasn't particularly hard to write, except for the dynamically named arrays.

  1. The lack of documentation should be fixed soon... and yes, it does download dependencies automagically and recursively. Really, just test it with the pid module. it'ssseee biuuuutifuuuul
  2. It is already the case. make.sh defines one or more compilation projects; it can be in a directory on its own. The "do not edit below this line" part just retrieves the build system automatically. I prefer single-command builds, but if you don't need that, you can basically remove all of it. cvraModuleMake.sh is the equivalent of your yaml file: it defines a module, essentially (more or less equivalent, in spirit, to a CMakeLists.txt). cvraMake.sh is the build system itself.
  3. It's just sed, and there's no magic involved :-) A full template engine seemed overkill to me.
  4. Agreed. I might need to refactor for this. I have tested it pretty heavily, but as it stands, it's not repeatable.

antoinealb commented 10 years ago

The thing I don't like about using bash for "complex" software is the maintenance issue. I am just not able to debug stuff like this:

eval fillArray all_${cfg} \"\${all_${cfg}[@]}\" \"\${${cfg}[@]}\"
eval fillArray all_${cfg}_linklibs \"\${all_${cfg}_linklibs[@]} \${${cfg}_linklibs[@]}\"

Also, there is another "do not edit below" in cvraMake.sh; actually, there are two of them.

If you don't mind, I will try to implement it in Python so we have a basis for comparison.

pierluca commented 10 years ago

As you wish; nothing ever prevents you from doing extra work :-) If you don't mind, though, I'd like to refactor and clean up the current code first, so that you can evaluate on a fair basis whether you want to throw it away.

That said, I'd like to point out that those two lines (plus two others) are basically the only non-trivial bash code in there; they exist to work around the fact that bash has poor support for dynamically named arrays.
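
For the record, the trick boils down to something like this (a minimal sketch; the variable names here are made up, not the actual build-system ones):

cfg="debug"
debug=("src/main.c" "src/pid.c")

# Splice the variable name into a string and let eval expand it.
# Equivalent in spirit to: all_debug+=("${debug[@]}")
eval "all_${cfg}+=(\"\${${cfg}[@]}\")"

eval "echo \${all_${cfg}[@]}"    # prints: src/main.c src/pid.c

(For what it's worth, bash 4.3 adds declare -n namerefs, which make this kind of indirection less ugly.)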

I chose bash for 4 reasons:

Now, I understand that not all of you might be familiar with some of the more involved aspects of Bash (arrays and dynamically named variables), but I believe that with proper documentation and a modicum of refactoring, this code is super maintainable.

antoinealb commented 10 years ago

Here is my alternative proposal (I was kinda bored this morning): https://github.com/antoinealb/packager-proposal

I personally feel like this is more maintainable and extendable (at least for me). The template system used (Jinja) makes it easy to modify the system to, for example, output Makefiles for the ARM target. Also, I think more people in the club are familiar with Python than with bash, but I might be wrong.

The major downside of my solution is the added dependencies: mostly python3, python-yaml, and jinja2.

Edit: this is not able to output a build right now, mostly because the package paths are wrong.

pierluca commented 10 years ago

Beyond the bash/Python thing, it's also much less flexible. It doesn't support multiple targets as part of a single build, nor does it let you indicate different sources for different targets. It also doesn't allow modules to indicate separately, depending on the target, which link libraries are needed.

I'm not arguing about which is better; I'm just saying that it looks simpler because, well, it is simpler. It assumes changes to the build system rather than full configurability at the module level.

antoinealb commented 10 years ago

I don't understand what you mean by multiple targets. Like ARM and x86?

For the link libraries feature, what is the use case? We will most likely be linking against two different libraries:

But if we need this feature, it would be easy to implement.

It assumes changes to the build system rather than full configurability at the module level.

This is by design: I prefer central changes to a tool when the need for more features arises, rather than config-file tweaking. This reduces the opportunities for bugs and generally results in cleaner system design.

antoinealb commented 10 years ago

Updated my prototype. Now you can build any of my forked modules, with support for dependencies.

pierluca commented 10 years ago

I don't understand what you mean by multiple targets. Like ARM and x86?

"Target" is CMake vocabulary; let's say multiple outputs of the build process. It could be that we have 3 test builds, an x86 build, and an ARM build, or whatever. A static library and an executable: anything along those lines, really. Right now there's no support for static executables, but that's trivial to add.

For the link libraries feature, what is the use case? We will most likely be linking against two different libraries

This build system could just as easily be used for development on the x86 CPU. Being able to specify the link libs on a per-module basis allows each module to remain ignorant of the others. It also removes the need for the build system to provide defaults.
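
To make that concrete, here is a hypothetical module definition following the <cfg>_linklibs naming visible in the code you quoted earlier (everything here is illustrative, not from the actual repo):

# Two targets, each with its own sources and link libraries.
x86=(src/pid.c tests/pid_test.c)
x86_linklibs=(m cmocka)      # the host build links libm and a test framework

arm=(src/pid.c)
arm_linklibs=()              # the bare-metal build links nothing extra

The build system can then merge these per-target arrays without the module knowing anything about the other modules' link requirements.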

I prefer central changes to a tool when the need for more features arises, rather than config-file tweaking.

Yes and no. I agree that config-file tweaking isn't the way to go, per se. However, considering how the build system is designed, any new feature can be treated as an additional, optional parameter in the make files. That's how I integrated (in very little time) the ability to retrieve our personal repositories.

Anyway, the build system I suggested has been improved, refactored, tested, and documented. Feel free to enjoy its power here: https://github.com/pierluca/build-system-demo/ :-)

antoinealb commented 10 years ago

Well, maybe we should really define the scope of the project. My design goals when I coded my own version were:

Your implementation seems to have a much broader scope, and I would like to have the opinion of the rest of the team about what they want from the build tool.

Finally, nothing prevents us from having two build systems for a while before picking one. This would allow us to find out more about what we expect from the tools. It would probably also make the build process more stable and better documented.

pierluca commented 10 years ago

Support for both CMake (X86) and direct Makefile (ARM) output.

Naive question: why couldn't we use CMake for the ARM output? What's different about it?

Your implementation seems to have a much broader scope

Well, I dropped the "efficiency" requirement of build automation systems but aimed for flexibility. As it is, I think the overhead on modules and projects is minimal. At the same time, it should be rather straightforward to add automatic doxygen documentation and version annotations.

Currently, we're missing the ability to snapshot the combined git repos at a point in time, in order to say "commit X of pid, using commit Y of platform-abstraction, passes all tests and seems to work". This seriously worries me, because a time will come when we'll be debugging a module, checking out an old version, and we simply won't know which version of another module was working with it, due to interface changes.

That's something I'd like to fix as soon as possible, but I haven't settled on an appropriate solution yet. The basic idea is to keep all the repos as submodules in a "cvra-all" repo; the open question is the best way to trigger the update of this repo.
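
Roughly, recording and restoring such a snapshot could look like this (a sketch; the cvra-all layout and module names are assumptions):

# Inside a hypothetical "cvra-all" super-repo where each module
# (pid, platform-abstraction, ...) is a git submodule.
cd cvra-all

# Pin each submodule to the commit known to work.
git -C pid checkout <known-good-hash>

# The super-repo records submodule pointers as regular changes;
# committing them freezes one exact hash per module.
git add pid platform-abstraction
git commit -m "snapshot: pid + platform-abstraction pass all tests"

# Restoring the snapshot later:
git checkout <snapshot-commit>
git submodule update --init --recursive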

Finally, nothing prevents us from having two build systems for a while before picking one.

Not sure there's much to gain from this compared to the additional overhead.

I would like to have the opinion of the rest of the team about what they want from the build tool.

I'd like that too.

antoinealb commented 10 years ago

Naive question: why couldn't we use CMake for the ARM output? What's different about it?

Generally you are doing some arcane stuff, like putting together a binary with objcopy, using a linker script to place your data at precise locations, and having code that needs to run before main. On the other hand, you never use dynamic library discovery or anything like that, so you lose the benefits of CMake and have to really hack at it to make it work, if it's doable at all.

Currently, we're missing the ability to snapshot in time the combined git repos

Agreed that it might be a useful feature; I might also implement it on my side to try it out. I will probably dump a dictionary mapping every module to its hash into a yaml file, and add another command to restore it.

Not sure there's much to gain from this compared to the additional overhead.

Well, we could use it to make an informed decision about which system works best for us since, as I said, I feel like the two cover different needs.

pierluca commented 10 years ago

Generally you are doing some arcane stuff

OK, got the point. Thanks for taking the time to clarify.

Keeping in mind the single-command-build philosophy, I would leave the build process as-is and possibly add an additional script at the end of the make.sh file to take care of the "arcane stuff" :-)
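
Something along these lines, say (a sketch; paths and names are made up):

# Hypothetical post-build step appended to make.sh: strip the ELF
# produced by the normal build down to a flashable raw binary.
arm-none-eabi-objcopy -O binary build/firmware.elf build/firmware.bin

# Optionally also emit an Intel HEX image for flashing tools that want one.
arm-none-eabi-objcopy -O ihex build/firmware.elf build/firmware.hex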

antoinealb commented 10 years ago

For example: https://github.com/antoinealb/tivaware-template/blob/master/Makefile#L104

antoinealb commented 10 years ago

As the flexibility argument was pretty good, I also implemented some sort of flexibility in my system, using a different approach based on template inheritance. Basically, you define a "root" template which you extend by replacing parts of its content in child templates.
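
For instance, a child template only has to override the blocks it cares about; a sketch (the block name and the flags are made up, written as a heredoc so it can be pasted into a shell):

cat > templates/Makefile.custom <<'EOF'
{% extends "Makefile" %}

{# Override only this block; everything else comes from the root template. #}
{% block cflags %}
CFLAGS = -O2 -mthumb -mcpu=cortex-m4
{% endblock %}
EOF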

Since I thought it would be easier to understand with a demo, I coded one in https://github.com/antoinealb/packager-app

Observe how the custom Makefile template for that project simply states the parts of the root Makefile that it wants to change and leaves the rest "as is". As you said:

it'ssseee biuuuutifuuuul

pierluca commented 10 years ago

By the way (and this is equally interesting for both our solutions), CMake seems quite easy to configure for multiple target platforms, including embedded. One can simply specify a separate "toolchain file" that defines all the parameters, and objcopy can be added as a custom command: http://stackoverflow.com/questions/5278444/adding-a-custom-command-with-the-file-name-as-a-target

It would be necessary to have two build directories, though, as the toolchain applies to the entire cmake file. It would be called with something along these lines:

pushd build-myProject/x86/
cmake -DCMAKE_TOOLCHAIN_FILE=~/x86.cmake ..
make
popd
pushd build-myProject/ARM/
cmake -DCMAKE_TOOLCHAIN_FILE=~/arm.cmake ..
make
popd
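
And ~/arm.cmake itself can stay small; a minimal sketch (the cross-compiler name is an assumption, written here as a heredoc):

cat > ~/arm.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Generic)             # bare metal: no target OS
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)
EOF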
antoinealb commented 10 years ago

I don't know if CMake is really useful when developing on microcontrollers. On Linux, it is really the way to go, with library discovery and all that. But on an MCU, I feel like the flexibility of a Makefile is more interesting. Every embedded project I have seen used a Makefile, mostly because it is easy to add specific build steps.

But we can try.

antoinealb commented 10 years ago

I have started to work on a version freezer for dependencies: https://github.com/antoinealb/packager-proposal/blob/master/freezer.py

Loading specific versions is not implemented yet, but it should be easy to do. I separated it from the packager script so we can use it with both build systems. It works with Python 2.7 and 3.x, and doesn't rely on anything outside the standard library (except for Git, of course), so there are no dependency issues ;)
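
Conceptually, the freeze/restore cycle amounts to something like this (a shell sketch of the idea only; the actual freezer is Python, and the module names and file layout here are made up):

# Freeze: record the current hash of every module into versions.yml.
for mod in pid platform-abstraction; do
    echo "$mod: $(git -C "$mod" rev-parse HEAD)"
done > versions.yml

# Restore: check every module out at its recorded hash.
while IFS=': ' read -r mod hash; do
    git -C "$mod" checkout "$hash"
done < versions.yml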