modm-io / modm

modm: a C++23 library generator for AVR and ARM Cortex-M devices
https://modm.io
Mozilla Public License 2.0
719 stars 127 forks

I get linking error. How to organize files for multiple targets? #392

Closed mfp20 closed 4 years ago

mfp20 commented 4 years ago

Hi,

I placed project.xml and main.cpp in the project dir, so far so good: I can build modm library and then the elf example using scons.

Then I added a linux subdir and an avr subdir for the two targets, and a third common subdir for common files. Each of those directories has an xml: I include common.xml in both linux.xml and avr.xml, then I include linux.xml in project.xml to build the linux hosted target, or avr.xml to build the avr target. So far so good: the hello-world project builds for both targets, depending on which target I 'extended' in project.xml.

As soon as I add common/class.h, linux/class.cpp, and avr/class.cpp ... I get a linking error, because both the linux and avr versions of the same class get built. I can't figure out the right filesystem layout to build for multiple targets, or a way to customize the SConstruct file generation. What am I missing?

salkinium commented 4 years ago

You would have two project.xml files in separate directories, each with their own generated modm folder and file system. You're currently overwriting all the modm files whenever you change target, which isn't great. It's also difficult to share a main.cpp file between Linux and AVR, since the setup is a little different (for example, Linux does not need Board::initialize(), etc.).

This is the recommended folder structure: Put all your common code in a top-level src/ folder, and use a SConscript to manage that code via SCons:

$ tree
.
├── src
│   ├── common
│   │   ├── class.hpp             <- common header
│   │   ├── avr/class.cpp         <- AVR implementation
│   │   ├── avr/class_impl.hpp    <- AVR header implementation
│   │   ├── hosted/class.cpp      <- Hosted-Linux implementation
│   │   └── hosted/class_impl.hpp <- Hosted-Linux header implementation
│   └── SConscript                <- custom SConscript
└── app
    ├── avr
    │   ├── project.xml <- AVR specific lbuild config
    │   ├── SConstruct  <- custom SConstruct includes ../../src/SConscript
    │   └── modm        <- generated only for AVR
    └── hosted
        ├── project.xml <- Linux specific lbuild config
        ├── SConstruct  <- custom SConstruct includes ../../src/SConscript
        └── modm        <- generated only for Hosted-Linux

You need a custom SConstruct, but only with a tiny modification: generate the target-specific SConstruct once, then set the modm:build:scons:include_sconstruct option to false, so that your modifications are not overwritten.
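In the project.xml this looks roughly like the following (a sketch from memory; the exact element layout of your config may differ slightly):

```xml
<library>
  <options>
    <!-- keep lbuild from regenerating (and overwriting) your custom SConstruct -->
    <option name="modm:build:scons:include_sconstruct">False</option>
  </options>
</library>
```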

The only change you need to make is to add "../../src" to generated_paths:

generated_paths = ["modm", "../../src"] # <<<<
# used here to find your custom SConscript file
env.SConscript(dirs=generated_paths, exports="env")

The SConscript file is completely custom; you can do whatever SCons supports. You can take a look at the modm/SConscript file for the most common use-cases. I would do something like this:

Import("env")
# set the include path to <common/class.hpp>
env.Append(CPPPATH=".")

if env["CONFIG_DEVICE_NAME"].startswith("hosted"):
    files = env.FindSourceFiles("hosted")
else:
    files = env.FindSourceFiles("avr")

# build a static library and link it
library = env.StaticLibrary(target="common", source=files)
env.AppendUnique(LIBS=library)
env.AppendUnique(LIBPATH=str(library[0].get_dir()))

Return("library")

You don't need to build the static library, but otherwise you would have to return the sources to the SConstruct file and add them to the sources variable, which is a bit less flexible.

I haven't tested this, so you may need to tinker with this a little.

salkinium commented 4 years ago

Btw, if you want to use the code generation features of lbuild in your own library or just want to modularize your library a bit with some lbuild options, then you can build your own lbuild repo, like the Invensense repo does: https://github.com/modm-io/Invensense-eMD

In that case, the SCons generator automatically generates the build system for you, so you don't need the custom SCons* files. However, it really depends on how much you need it; I'd stick with the SConscript for now, and later, if you find a use-case for lbuild, you can convert your repo to use lbuild too.

mfp20 commented 4 years ago

Wow, awesome. Thanks for the quick aid. I'll give it a go and report back what I get.

A small piece of feedback: an example dedicated to showing this setup would probably be helpful; I mean, it's a multi-target tool, so having a multi-target app should be a breeze. I understand the beauty of modm is the amazing job of assembling a purpose-built C++ lib to keep it fast despite the C++ overhead. But from there, configuring the app itself is a bit hard to understand, imho. In my case, at least. This was the first time I'd heard of scons. Things like the linux kernel config or Buildroot (ex: openwrt, esp32 sdk, and plenty more projects) present the same ncurses interface for that purpose. Your setup is pretty superior thanks to python and scons ... Kconfig+Autoconf+Automake is a nightmare in comparison ... but it looks like there's one last step missing to make it perfect.

Back on topic: the mcu firmware is just half the project; the other half is a python daemon running on the host to command the mcu firmware. This daemon needs both a C helper lib to tap into the mcu firmware (common command api) and a cythonized build for performance reasons. So scons needs to manage all these different builds from the root folder; because of this, I need to write all the SCons* files in each folder. Probably a custom lbuild repo would make the whole thing more comfortable, but it looks too complicated at the moment; I'm still evaluating whether to use modm or not. Don't get me wrong, I'm investing my time to adopt modm ... being able to have an efficient C++ lib is simply wonderful. But I need to dig a bit more into the code to understand whether the performance is enough. The current C code works and is fast enough, but it's very messy and hard for third parties to tap into for development; that's why I'm rewriting everything. My hope is that modm works its magic while keeping the performance.

Off-topic: I should probably open a new issue for this, but it's not urgent and I don't want to abuse (more of) your time. Please, don't write a solution, just point to docs (if any): how to add new mcu support? At a later stage I need to add LPC176x, SAMDx, ESP32, and BCM283x support.

rleh commented 4 years ago

an example dedicated to showing this [multi target] setup would probably be helpful

I personally have never had such a use case before; a separate main.cpp with main() for each target was always necessary. Reusable code should generally be stored in a library (or similar) anyway, if possible. If you have a good example, we would be happy if you open a pull-request!

rleh commented 4 years ago

Off-topic:

This daemon needs both a C helper lib to tap into the mcu firmware (common command api)

Why is C needed here? Nearly all possible interfaces from a microcontroller to a PC (USB, Serial, Ethernet, CAN, SPI, I²C, ...) can be used directly from python...
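To illustrate the point, talking to an Ethernet-connected device needs nothing but the Python stdlib. This is only a sketch: a local loopback echo server stands in for the microcontroller end of the link, since there is no real device here.

```python
import socket
import threading

# A loopback echo server stands in for the microcontroller end of the link.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(64))  # echo back whatever arrives

threading.Thread(target=echo_once, daemon=True).start()

# The "host daemon" side: plain blocking sockets, no C helper involved.
with socket.create_connection(("127.0.0.1", port)) as link:
    link.sendall(b"ping")
    reply = link.recv(64)
print(reply)  # → b'ping'
```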

mfp20 commented 4 years ago

My current (simplified) tree. Need to add SCons* now.

$ tree
.
├── src
│   ├── common
│   │   ├── avr
│   │   │   └── storage.cpp
│   │   ├── linux
│   │   │   └── storage.cpp
│   │   └── storage.h
│   ├── host
│   │   ├── chelper
│   │   │   ├── c_helper.so
│   │   │   ├── pyhelper.c
│   │   │   ├── pyhelper.h
│   │   │   ├── serialqueue.c
│   │   │   └── serialqueue.h
│   │   └── main.py
│   ├── mcu
│   │   ├── avr
│   │   │   ├── main.cpp
│   │   │   └── project.xml
│   │   ├── common.xml
│   │   └── linux
│   │       ├── main.cpp
│   │       └── project.xml
│   └── modm (modm root)
├── start.app.sh
├── start.avrsim.sh
└── test

note: the modm root folder is the git submodule.

rleh commented 4 years ago

At a later stage I need to add LPC176x, SAMDx, ESP32, and BCM283x support.

There is some abandoned LPC support in modm, but it is currently deactivated. Currently @CrustyAuklet and @salkinium are working on Atmel SAM support, see #194.

Support for ESP32 and BCM283x (Raspberry Pi?!) is probably more difficult, since they are not Cortex-M based. I once did some experiments on the Xilinx Zynq (FPGA with dual Cortex-A9) with xpcc. If you use the Raspberry Pi with an operating system (Linux), modm works out of the box using the target hosted-linux.

mfp20 commented 4 years ago

Off-topic:

This daemon needs both a C helper lib to tap into the mcu firmware (common command api)

Why is C needed here? Nearly all possible interfaces from a microcontroller to a PC (USB, Serial, Ethernet, CAN, SPI, I²C, ...) can be used directly from python...

They can be used directly from python, but slowly; or made faster using C externs. The current python app relies on the chelper for fast communication with the mcu. This might change though, once I evaluate the new code's speed.
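As an illustration of the C-extern mechanism (a generic ctypes sketch, not the project's actual chelper; libc's abs() is just a stand-in for a real helper function):

```python
import ctypes
import ctypes.util

# Locate and load the C standard library; calling into it from Python is the
# same mechanism a custom helper .so would use, just with libc's abs() here.
libc_name = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_name)  # on Linux, CDLL(None) would also work

# Declare the C signature so ctypes converts arguments/results correctly.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # → 42
```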

rleh commented 4 years ago

Please, don't write a solution, just point to docs (if any): how to add new mcu support?

xpcc (predecessor of modm) had a porting guide. I don't know what happened to the document: https://github.com/roboterclubaachen/xpcc/blob/develop/PORTING.md

mfp20 commented 4 years ago

I'd like to add support for a wide range of boards already sold and in use. That's why I need the LPC and SAM mcus.

Support for ESP32 and BCM283x (Raspberry Pi?!) is probably more difficult, since they are not Cortex-M based. I once did some experiments on the Xilinx Zynq (FPGA with dual Cortex-A9) with xpcc. If you use the Raspberry Pi with an operating system (Linux), modm works out of the box using the target hosted-linux.

I have some ESP32s and I'm not satisfied with either the ESP32 sdk (bloated, cumbersome, convoluted) or Micropython (it takes over the whole mcu and kills the performance). That's why adding support to modm could be the way ...

Finally, I've an old idea to use an Rpi0 (or other similar boards) as a realtime microcontroller (with gpio pins) for some cpu-intensive tasks. But I don't know if one of those new little thingies (ESP32, STM32) can match the bcm mcu power on the rpi0. Further investigation is needed. In any case Linux has historical limits for realtime applications. Despite the huge improvements in the linux kernel since the old rtlinux, getting rid of linux is still the best option, AFAIK.

Please, don't write a solution, just point to docs (if any): how to add new mcu support?

xpcc (predecessor of modm) had a porting guide. I don't know what happened to the document: https://github.com/roboterclubaachen/xpcc/blob/develop/PORTING.md

Awesome, thanks! At first glance, it doesn't look impossible. Those mcus have all the needed info in the public domain already. Some stuff for the broadcom GPU core might be missing, but ... the general purpose cores could be ported. The last check I did was last year, and the 3-4 developers working on gpgpu on the rpi had just dropped the project because of broadcom's closedness (and the availability of more open companies, like Mediatek).

salkinium commented 4 years ago

Your setup is pretty superior thanks to python and scons ... Kconfig+Autoconf+Automake is a nightmare in comparison...

Thanks! I too was burned by other config tools, plus proper code generation tools are difficult to come by. However, getting to this point took me and @dergraaf about 5 years: building/improving lbuild, creating modm-devices, and porting xpcc to modm.

But from there, configuring the app itself is a bit hard to understand, imho.

I know, it's on my endless TODO list… Now that lbuild is "done" I can finally start working again on the actual C++ API. 🙈

I mean, it's a multi-target tool, so having a multi-target app should be a breeze.

I think it's technically doable, but probably not very useful. Our microcontroller environment is very lean, so large parts of the libc and libc++ are simply not implemented. Things like libc fopen or libc++ std::thread require ports, so what works on hosted will not work on avr. Similarly, you don't (typically) have GPIO or any other native peripherals on Hosted, and if you do have them (like on an RPi), you'd access them via a kernel API.

Using the same code for Hosted and a microcontroller is thus very difficult. It's easy to use the same code for AVR and Cortex-M, because that's pretty much the point of modm.

But I need to dig a bit more into the code to understand whether the performance is enough.

There's no slow-down for using C++ instead of C in modm; we don't use exceptions or RTTI, and the rest is pretty much just syntactic sugar, or no more expensive than in C.

But, to convince yourself, scons listing does a disassembly interlaced with source code. I use it a lot to judge the code size and speed of modm. You can use modm::PreciseClock as a microsecond clock for on-device benchmarking; you need to depend on the :platform:clock module (see also :architecture:clock).

I need to add LPC176x, SAMDx, ESP32, and BCM283x support.

SAMD support has progressed the furthest: it's already integrated into modm-devices, the CMSIS headers are maintained here, and the port can compile and program a simple blinky example (but it's missing most of the actual HAL API).

LPC can be added in a similar way to SAMD; all Cortex-M devices are fairly easy to add now, but you may need to manually write your own device file if you don't find a machine-readable data source. There is an old PR for the LPC11C24, see #233.

ESP32 support would require significant effort to port the functionality inside at least the :platform:cortex-m, :platform:clock and :build modules. Since there aren't many ESP32 devices, the advantage of the code generation via modm-devices will likely not offset the effort.

Baremetal Cortex-A is very difficult, mostly due to the sheer complexity of dealing with the entire SoC. It would be nice to have, but it's pretty much out of scope for modm, and using a real-time Linux distro is a better way to go.

xpcc (predecessor of modm) had a porting guide. I don't know what happened to the document

Someone™ kept changing things and thus someone™ didn't want to keep this up-to-date. Someone™ put rewriting the porting guide on their endless TODO list… 🙉 lalalalalala

salkinium commented 4 years ago

So scons needs to manage all these different builds from the root folder; because of this, I need to write all the SCons* files in each folder. Probably a custom lbuild repo would make the whole thing more comfortable.

Yes, that's where you need to add your own SCons code. Only Google can help you now, muhoha!

lbuild won't help either, since it only manages the whole modular code generation stuff; it doesn't know what it's generating, it just passes all of this data along to the build script generator, which actually converts this data to SCons and/or CMake.

mfp20 commented 4 years ago

@salkinium and @rleh, thank you both, a lot. Very kind. I think you both gave me enough information to get started; basically, using modm lets me do all I need.

From my perspective the issue is solved. Thanks.

TL;DR

The app I'm porting to modm is just a very simple 'realtime actuator' for robotics. All it does is receive pre-scheduled events to be run in the future, and report back whatever needs to be reported back (ie: time sync beats, adc values, digital pin states, alarms and error conditions). On boot it has to report the hw model and its available resources (ie: hw timers, pins, busses, and so on), then receive the configuration from the host, then wait for timers and tasks to be registered and run At The Right Time. I don't need nanosecond precision, so I'm trying to do it in software, instead of adding an external temperature-compensated clock source, shielded cabling and so on ... I can't afford that added hw complexity. The only extra job could be to manage some error events locally for disaster avoidance. I hope to be able to develop some good protothreads able to keep the events buffer full while executing the current event; on AVR that is scary stuff ...
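The "run pre-scheduled events at the right time" idea could be sketched as a time-ordered queue. This is purely hypothetical illustration, not the actual firmware design (and on an AVR you'd use a static array, not Python):

```python
import heapq

class EventQueue:
    """Min-heap of (scheduled_time, seq, action); earliest event pops first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so events with equal times stay FIFO

    def schedule(self, when, action):
        heapq.heappush(self._heap, (when, self._seq, action))
        self._seq += 1

    def run_due(self, now):
        """Execute and collect every event whose scheduled time has arrived."""
        results = []
        while self._heap and self._heap[0][0] <= now:
            _, _, action = heapq.heappop(self._heap)
            results.append(action())
        return results

q = EventQueue()
q.schedule(2.0, lambda: "pin-high")
q.schedule(1.0, lambda: "adc-read")
print(q.run_due(1.5))  # → ['adc-read']; the 2.0 event stays queued
```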

The only thing left bugging me is ROS. I'd like to use it to send point-to-point messages over I2C and SPI too; currently it supports UARTs only. This forces me into many-to-host messaging only, ie: to send a message from mcu1 to mcu2 I need to proxy through the host (ie: added latency).

I've thousands of doubts and questions in my mind at the moment, but writing and reading comments would probably be more time-consuming than just ... doing, trying, re-trying. Currently I need to write some code and experiment, to get a better understanding of this new tool.

salkinium commented 4 years ago

From my perspective the issue is solved. Thanks.

Ok, feel free to add more comments to this thread, we'll get notified for closed issues too.

mfp20 commented 4 years ago

The gcc issue (ie: lack of avr-gcc v7+ in mainstream distros) makes distribution of modm-based source code pretty useless, as users can't easily build it themselves. I'm using the v9.2 linked in your install guide, but it's just a workaround: it isn't integrated into the distro package manager, it's forced into a fixed location to find the right binutils/libc, you need to mangle the path env var (or similar) ... and instructing users to replicate the same setup is a losing game. It's a potential barrier that cuts out 90% of the userbase.

Probably I should build the modm static library myself, distribute the binary libmodm.a, and then maintain updates over time. Is there a quick command to include all the options and drivers in a static library and build 1 file for each of the available targets?

salkinium commented 4 years ago

It's a potential barrier that cuts out 90% of the userbase.

I know, this is such a major pain for AVR targets that I'm not recommending AVR for new designs. The OSS GCC is also lacking support for a lot of AVR targets; Atmel never upstreamed theirs, and now it's too late. GCC 10 is deprecating the AVR backend and removing it in ~2021, and ARM is favoring LLVM over GCC for new Cortex-M features.

So for C++20 we may need to switch to LLVM for both AVR and Cortex-M, but I'm still waiting on avr-llvm to become more stable, and for LLD to support all the linkerscript syntax we need.

Is there a quick command to include all the options and drivers in a static library and build 1 file for each of the available targets?

You can do that by adding an SCons alias to depend only on the library (scons libmodm):

env.Alias("libmodm", library)

I would assume that inside the SConstruct the call to env.SConscript(paths) returns a list of the individual SConscript return values (they should all return the libraries).

But the header files all use C++17, so you still need an up-to-date avr-gcc to use libmodm. I'm also not sure if the static library format is stable across GCC versions.

You can only really distribute the final binary.

salkinium commented 4 years ago

The Microchip takeover of Atmel and the new cross-over AVRs they are making also made me write off AVRs completely, and AVR support will be dropped in modm as soon as it becomes too much work to maintain.

salkinium commented 4 years ago

forced into a fixed location to find the right binutils/libc

Yeah, this is a known issue I haven't found a solution for. The GCC build system is just horrible.

mfp20 commented 4 years ago

It's a potential barrier that cuts out 90% of the userbase.

GCC 10 is deprecating the AVR backend and removing it in ~2021

WTF!!! I've spent 2 months of my life digging into gcc/llvm/qemu to learn about their semantic engines (ie: the code translators from frontend to IR and from IR to assembly). GCC was the worst code I've ever seen. A real developer's nightmare. I lost 50% of my hair and 50% of my sight in those 2 months. So ... I understand why they are dropping old archs to make it easier to maintain. But AVR isn't old. I mean ... they've been keeping EVERYTHING IN for 20+ years, then within 5 years they dropped 30-50% of the supported archs. That's a shame. AVR is not a dead arch, it's in use; attiny (called the avr2.5 arch in gcc, if I remember correctly) is the cheapest mcu currently on the market ... in some cases it costs less than an NE555! There are BILLIONS of AVRs alive around the world. They can't be dropped like this ... I wonder if Alpha is still supported, instead ... WTF

So for C++20 we may need to switch to LLVM for both AVR and Cortex-M, but I'm still waiting on avr-llvm to become more stable, and for LLD to support all the linkerscript syntax we need.

I'm sad every time I must think of LLVM. But it works. Do me a favor: don't go to C++20, stick to C++17 until at least one compiler supports all the archs you support. I mean: do you really need such advanced syntax? As I said, I'm not a good developer, so I might not understand the need for the advanced syntax introduced by later standards, but ... considering the compiler turmoil and the C standard committee's pushing ... things are getting pretty tricky to get up and running.

Is there a quick command to include all the options and drivers in a static library and build 1 file for each and all the available targets?

You can only really distribute the final binary.

Aye aye, sir. :(

salkinium commented 4 years ago

don't go to C++20, stick to C++17 until at least one compiler supports all the archs you support.

Don't worry, a lot of modm users know where I live ;-P

do you really need such advanced syntax?

C++20 coroutines are what I'm really after, as a replacement for Protothreads and Resumable functions, which both have massive issues with local variables. That requires actual compiler tech, so it's not just better syntax.
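The local-variable problem can be illustrated outside C++ too: a stackless coroutine needs its locals kept in a persistent frame rather than on the caller's stack, which is exactly what Python generators (used here only as an analogy) do automatically:

```python
def blink(times):
    # `count` survives across every suspension point because the generator
    # frame owns the local, not the caller's stack — the guarantee that
    # C-macro protothreads cannot give for their local variables.
    count = 0
    while count < times:
        yield ("toggle", count)
        count += 1

steps = list(blink(3))
print(steps)  # → [('toggle', 0), ('toggle', 1), ('toggle', 2)]
```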

things are getting pretty tricky to make them up and running.

I feel you. It's also not just the language, but also the C++ standard library, which is getting bigger and bigger. I wish it was more modularized. The standards committee is aware of it, and the "freestanding" and "exceptions light" proposals are what I need, but it's all moving so, so, so slowly.

Rust is getting pretty attractive, not just for the language but also for the environment (ie. cargo). The momentum is there too, with major companies choosing Rust over C++. See blog.japaric.io and Embedded Rust. There's a lot of similarity with modm regarding modularity, features, etc.

mfp20 commented 4 years ago

The Microchip takeover of Atmel and the new cross-over AVRs they are making also made me write off AVRs completely, and AVR support will be dropped in modm as soon as it becomes too much work to maintain.

I've just bought a dozen new AVR-based, damned cheap boards for this project. But I can't blame you.

It looks like the whole world is dropping AVR, Atmel/Microchip first of all. An atmega2560 costs $10, a SAM3X8E costs $5. An attiny85 costs $1, a STM8S001J3 $0.20. And software is even worse: the internet is full of obsolete docs; simulavr is a ghost ... and now gcc dropping avr is the killing blow. The only drawback is having different families, different archs, instead of a single instruction set. Royalties kill the chance of having a reference arch, similar to what x86 has been for bigger machines. The Arduino thing itself, retrospectively, looks like some guys squeezing the avr lemon. And it's a pity, because Arduino (as well as PIC) has been a great learning tool. It brought young people closer to electronics.

As it gets harder and harder to find docs and tools, I'll have to revisit my personal hw stock ruleset and find another go-to arch for small mcus. It looks like Cortex-M is the new frontier.

mfp20 commented 4 years ago

do you really need so advanced syntax?

C++20 coroutines are what I'm really after, as a replacement for Protothreads and Resumable functions, which both have massive issues with local variables. That requires actual compiler tech, so it's not just better syntax.

Right, I forgot about the modern tendency to include in the stdlib things that used to live in boost or somewhere else ... Standardized coroutines are a good thing. Protothreads, greenlets, tasks ... every fricking developer on earth has had to develop something similar and give it a new buzz-name. It's always the same stuff, but it takes us some time, every time, depending on the env we are working in: vxworks, freertos, java, C++, python ... whole lives simply wasted learning the same thing 1000 times ...

Rust is getting pretty attractive,

Eheh, I'd rather spend my time developing my Tintin++ client further.

I usually send all the "new languages" news to /dev/null. After the 4th or 5th call, I go have a look, "just in case (it is really real, not just a falling star)". I just did it for Rust and I smell something good, but it's too early for me to adopt (it) and adapt (myself to it). I have seen developers crying (literally) over the time they invested in learning new stuff, only to see that stuff dropped early by its vendor (ex: altivec instructions in Gx processors, adobe flash, and so on). I got screwed by Perl and PHP; I keep those books on my shelves right under my eyes, next to the monitor, so I remember. After that, I skipped a lot of technologies, and most of them are defunct now. Others instead, like Python, are finally worth using (despite the GIL). So, no, Rust isn't really attractive (yet); but I'd be happy to adopt and adapt in 2030.

In the meantime, c++ protothreads are enough.

mfp20 commented 4 years ago

BTW, I've found this build script for gcc-avr 9.2. It worked on my machine. It might be useful for completing the docs: a tip to let users easily build their own toolchain.

salkinium commented 4 years ago

I've found this build script for gcc-avr 9.2.

Oh, please tell me it makes avr-gcc relocatable?

mfp20 commented 4 years ago

You're asking too much: shame on you! :D

But you can choose where to locate it and leave it there. /work/gcc-avr is good for me, but could be annoying for some picky sysadmins.

mfp20 commented 4 years ago

I'm having problems nesting SCons* files in my project:

ImportError: cannot import name 'avrdude':
...
  File "/home/user/project/src/mcu/atmega1284p/modm/scons/site_tools/avrdude.py", line 15:
    from modm_tools import avrdude

It looks like python's path is screwed up. My SConstruct in project/:

#!/usr/bin/env python3

import os
from os.path import join, abspath

# SCons environment with all tools
env = DefaultEnvironment(tools=[], ENV=os.environ)
env.project_name = "project"
env.build_path = abspath("build/")

# Building libraries: modm 
libs = ['src/mcu/']
library = SConscript(dirs=libs, exports='env')

print(library)

And current (simplified) tree at project/src/mcu/

$ tree
.
├── atmega1284p
│   ├── main.cpp
│   ├── modm
│   │   ├── modm_tools
│   │   │   └── avrdude.py
│   │   ├── scons
│   │   │   └── site_tools
│   │   │       └── avrdude.py
│   │   ├── SConscript
│   │   └── src
│   ├── project.xml
│   └── project.xml.log
├── atmega2560
├── atmega328p
├── linux
└── SConscript

The inner SConscript (inside modm/ of each target) is pristine from lbuild. The outer one instead is the first part of the SConstruct from lbuild, modified to become an SConscript that builds all targets (note: currently just modm for each target):

#!/usr/bin/env python3

import os,sys
from os.path import join, abspath

Import('env')
localenv = env.Clone()

localenv["CONFIG_BUILD_BASE"] = abspath(localenv.build_path)
localenv["CONFIG_ARTIFACT_PATH"] = join(localenv["CONFIG_BUILD_BASE"], "artifact")
localenv["CONFIG_PROJECT_NAME"] = localenv.project_name
localenv["CONFIG_PROFILE"] = ARGUMENTS.get("profile", "release")

# Building all libraries
libs = ['linux/modm']
for d in next(os.walk('.'))[1]:
    libs.append(d+'/modm')
library = SConscript(dirs=libs, exports={'env':localenv})

Return('library')

I clone the env to keep a pristine one for later use (ie: after I have collected the modm static library, one for each target), then pass localenv to the pristine lbuilt SConscript in the target folder in order to build modm. And the toolpath is screwed up: avrdude.py in site_tools (ie: the scons default path) can't import avrdude.py from modm_tools. I saw the __init__.py, and I played a bit with paths, but I can't understand what's going on inside scons. Why drop your own scripts in modm_tools instead of using the site_tools folder?

Any clue?

salkinium commented 4 years ago

Why drop your own scripts in modm_tools instead of using the site_tools folder?

The goal was to have stand-alone Python3 tools that are then only wrapped by the build system, either SCons or (C)Make. That way adding a new build system to modm becomes easier, or just integrating these tools into something more custom is possible without having to deal with the weirdness of SCons. See #370.

The syspath only gets set in the modm/SConscript (at the very top), but env.Dir("#") is the local working directory, which would be the top SConscript; this should probably just say sys.path.append("modm"). I don't know what I was thinking when I did it like that …

salkinium commented 4 years ago

Actually it should be sys.path.append(".") since the modm/SConscript is already in modm/. 🤦
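A more defensive variant (an untested sketch, not modm's actual code) would append the script's own absolute directory instead of a relative path, so the import resolves regardless of which working directory SCons was started from:

```python
import os
import sys

# Resolve this script's own directory; fall back to the current working
# directory when __file__ is not defined (e.g. in an embedded exec namespace).
if "__file__" in globals():
    script_dir = os.path.dirname(os.path.abspath(__file__))
else:
    script_dir = os.getcwd()

# Appending an absolute path avoids depending on SCons's working directory.
if script_dir not in sys.path:
    sys.path.append(script_dir)
```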

mfp20 commented 4 years ago

sys.path doesn't help. I had already tried that before writing.

edit: tried changing it to "." but that doesn't help either ... avrdude.py can't be imported anyway.

edit2:

    env.Tool("avrdude")
  File "/usr/lib/scons/SCons/Environment.py", line 1788:
    tool = SCons.Tool.Tool(tool, toolpath, **kw)
  File "/usr/lib/scons/SCons/Tool/__init__.py", line 118:
    module = self._tool_module()
  File "/usr/lib/scons/SCons/Tool/__init__.py", line 234:
    module = spec.loader.load_module(spec.name)
  File "<frozen importlib._bootstrap_external>", line 399:

  File "<frozen importlib._bootstrap_external>", line 823:

  File "<frozen importlib._bootstrap_external>", line 682:

  File "<frozen importlib._bootstrap>", line 265:

  File "<frozen importlib._bootstrap>", line 684:

  File "<frozen importlib._bootstrap>", line 665:

  File "<frozen importlib._bootstrap_external>", line 678:

  File "<frozen importlib._bootstrap>", line 219:

  File "/home/user/project/src/mcu/atmega1284p/modm/scons/site_tools/avrdude.py", line 15:
    from modm_tools import avrdude

I tried to print spec.name from /usr/lib/scons/SCons/Tool/__init__.py, both running my modified SConstruct and the pristine lbuilt one, and the output is the same; it's just the string 'avrdude'. But the pristine one works, mine doesn't. It's nasty.

edit3: this is 'spec' (in both cases)

ModuleSpec(name='avrdude', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7082c24cc588>, origin='/home/user/quattro/src/mcu/atmega1284p/modm/scons/site_tools/avrdude.py')

So, it's something inside the __init__.py file, I suppose.

salkinium commented 4 years ago

Doesn't libs.append(d+'/modm') add conflicting paths to sys.path? Maybe that's what's confusing it.

mfp20 commented 4 years ago

You nailed it down. Not a bug. I mean, I'm the bug.

The outer SConscript adds src/mcu/atmega1284p/modm at sys.path[0], then the inner one adds src/mcu/linux/modm/ext/dlr/scons/site_tools and 3 more. So the outer one is adding paths for the 'atmega1284p' target, while the inner one is adding paths for the 'linux' target. Weird.

salkinium commented 4 years ago

Still, I found a few path problems while moving the SConstruct around, but nothing related to modm_tools. Quite the opposite: I removed sys.path.append, and it kept working. It seems that SCons automatically adds the directory of every SConscript to the path.

Btw, the modm/SConscripts will definitely overwrite global env variables, so adding all of them will do something weird …

salkinium commented 4 years ago

See #398. I also fixed the delay template issue, since you didn't open a PR yet.

mfp20 commented 4 years ago

Dude, my PR was sent hours ago.

salkinium commented 4 years ago

Oh, maybe GitHub is broken? It's not listed here: https://github.com/modm-io/modm/pulls

mfp20 commented 4 years ago

Crap, my fork is even with the modm.io develop branch. The only explanation I have in mind: it didn't create my PR because I didn't fill in any commit message, and I didn't wait for the ack after pressing the commit button. I suppose.

mfp20 commented 4 years ago

I corrected my bug and removed the stray path.append, but the bug persists. It's not a path problem, as I said.

Current outer SConscript

#!/usr/bin/env python3

import os,sys
from os.path import join, abspath

Import('env')

# Building all libraries
library = []
for d in next(os.walk('.'))[1]:
    l = d+'/modm'
    localenv = env.Clone()
    localenv["CONFIG_BUILD_BASE"] = abspath(localenv.build_path)
    localenv["CONFIG_ARTIFACT_PATH"] = join(localenv["CONFIG_BUILD_BASE"], "artifact")
    localenv["CONFIG_PROJECT_NAME"] = localenv.project_name
    localenv["CONFIG_PROFILE"] = ARGUMENTS.get("profile", "release")
    library = library + SConscript(dirs=[l], exports={'env':localenv})

Return('library')

mfp20 commented 4 years ago

I printed the path both in the inner SConscript (path[0] only) and in site_tools/avrdude.py:

$ scons
scons: Reading SConscript files ...
INNER: /home/user/quattro/src/mcu/atmega1284p/modm
SITE_TOOLS ['/home/user/quattro/src/mcu/atmega1284p/modm/ext/dlr/scons/site_tools', '/home/user/quattro/src/mcu/atmega1284p/modm/scons/site_tools', '/home/user/quattro/src/mcu/atmega1284p/modm', '/home/user/quattro/src/mcu', '/home/user/quattro', '/usr/bin/scons-local-3.0.1', '/usr/bin/scons-local', '/usr/lib/scons-3.0.1',

In site_tools/avrdude.py

print("SITE_TOOLS "+str(sys.path))
from modm_tools import avrdude

modm_tools should be found thanks to sys.path[2]: /home/user/quattro/src/mcu/atmega1284p/modm

and it isn't. It looks like site_tools/avrdude.py is ignoring modm_tools, or the __init__.py in modm_tools is bugged.

mfp20 commented 4 years ago

This is the working one, using the pristine lbuilt SConstruct:

$ scons -f SConstruct.atmega1284p.lbuilded 
scons: Reading SConscript files ...
INNER: /home/user/quattro/src/mcu/atmega1284p/modm
SITE_TOOLS ['/home/user/quattro/src/mcu/atmega1284p/modm/ext/dlr/scons/site_tools', '/home/user/quattro/src/mcu/atmega1284p/modm/scons/site_tools', '/home/user/quattro/src/mcu/atmega1284p/modm', '/home/user/quattro/src/mcu/atmega1284p', '/usr/bin/scons-local-3.0.1', '/usr/bin/scons-local', '/usr/lib/scons-3.0.1',

They are identical. I also printed the executable to be sure I was using the same python env (since I don't use the system one), and they are the same.

mfp20 commented 4 years ago

It's really weird. The path doesn't get updated. The following, in site_tools/avrdude.py:

p = sys.path[2]
sys.path.append(p)
print("SITE_TOOLS "+str(sys.path))
from modm_tools import avrdude

doesn't update sys.path. I tried the same in modm/SConscript and it doesn't update sys.path either. It looks like scons is freezing sys.path; is it multithreading?!?

mfp20 commented 4 years ago

The __init__.py in modm_tools isn't guilty. It doesn't get called. It does its job when using the working SConstruct from lbuild instead.

mfp20 commented 4 years ago

It's just wrong to manipulate sys.path, because the path persists across iterations on different targets, as long as it's the same python instance. No matter what I do to switch from one target to another, the sys.path var in site_tools/avrdude.py is always the same (same memory location, so its value gets new appends for every target). You must find a different way to get modm_tools from site_tools.
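A minimal demonstration of what I mean, in plain Python with no SCons needed (the paths are just illustrative strings):

```python
import sys

def add_tool_path(path):
    # Each target's SConscript appends its own site_tools directory. Since
    # sys.path is global to the whole Python process, entries appended for
    # earlier targets are never removed when the next target is processed.
    if path not in sys.path:
        sys.path.append(path)

add_tool_path("atmega1284p/modm/scons/site_tools")
add_tool_path("linux/modm/scons/site_tools")

# Both entries now coexist, so an import may resolve against the wrong target.
assert "atmega1284p/modm/scons/site_tools" in sys.path
assert "linux/modm/scons/site_tools" in sys.path
```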

SCons is made to get such results without going outside its env. If you go mangling the sys env (which contains the SCons envs) ... you screw things up. And I'm screwed as a consequence: did you tell me your address? :D