Closed: bradjc closed this pull request 1 year ago.
Look, I can run multiple libtock-rs apps:

```
$ tockloader install console.tab leds.tab
tock$ Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAF0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
Hello world!
tock$ list
PID  ShortID  Name     Quanta  Syscalls  Restarts  Grants  State
0    Unique   leds     0       336       0         1/15    Yielded
1    Unique   console  0       10        0         0/15    Terminated
tock$
```
I like this design, in that it much more closely mirrors how we build apps in libtock-c and allows for a similar experience of building apps and flashing them on a board. Obviously, as-is this PR breaks all of the other make rules as they exist today, since they do not pass linker addresses. If Johnathan etc. like this design, is your plan to adapt the other rules to pass the right addresses based on the board name, but require using `make tab` if users want to flash multiple apps? Or would the plan be to remove the other rules and only endorse flashing tabs using tockloader directly?
> If Johnathan etc. like this design, is your plan to adapt the other rules to pass the right addresses based on the board name, but require using `make tab` if users want to flash multiple apps? Or would the plan be to remove the other rules and only endorse flashing tabs using tockloader directly?
I think it would be to basically undo the changes to build.rs and allow for either the current version or this version.
How does this not race if you try to build apps at different locations simultaneously? I.e. #366
> How does this not race if you try to build apps at different locations simultaneously? I.e. #366

I don't see anything happening simultaneously. My `make tab EXAMPLE=console` just runs cargo several times in a row.
> > How does this not race if you try to build apps at different locations simultaneously? I.e. #366
>
> I don't see anything happening simultaneously. My `make tab EXAMPLE=console` just runs cargo several times in a row.
An external build system (i.e. that of a libtock-rs user) may make multiple `cargo` invocations concurrently, such as to build the app at multiple locations in parallel. That shouldn't race. libtock-rs currently has a bug when you try to build an app for multiple boards simultaneously, and I suspect this PR makes it race if you build an app at multiple locations in parallel.
This might be a way to use different linker scripts for each flash/ram location: https://doc.rust-lang.org/cargo/reference/build-scripts.html#cargorustc-link-argflag
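For concreteness, a minimal sketch of a build.rs using that instruction (the script name `layout.ld` follows the rest of this thread; whether this approach is sufficient here is exactly what is being discussed):

```rust
// Hypothetical build.rs sketch: ask rustc to link with a specific
// linker script via cargo's rustc-link-arg instruction.
// Kept as a pure helper so the emitted string is easy to check.
fn link_arg_for(script: &str) -> String {
    format!("cargo:rustc-link-arg=-T{}", script)
}

fn main() {
    // Cargo parses instructions like this from build-script stdout.
    println!("{}", link_arg_for("layout.ld"));
    // Re-run the build script if the linker script changes.
    println!("cargo:rerun-if-changed=layout.ld");
}
```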
Hah, I can verify that flag works as expected, as I was just using it to debug linker issues over in the kernel :)
Agree it's likely a better way to go. We have a standing TODO in the kernel makefile to move linker options out into cargo/rust land once it becomes reasonable/sane; looks like that time has come.
I added a new version in make. Maybe it is better? It seems easier to add new flash/ram addresses.
I did try it with `make elfs EXAMPLE=leds -j4` and it worked fine, although it doesn't save time because cargo serializes anyway.
Same makefile, this time a new build.rs. It turns out my previous change didn't actually work: `cargo:rustc-link-arg=` only works for the crate the build.rs file is in. So the first change is to move build.rs to the root so it applies to all crates.
Then we no longer need to make new copies of layout.ld and can instead just point the linker to which script to use and where it is. So we do that.
> Same makefile, this time a new build.rs. It turns out my previous change didn't actually work: `cargo:rustc-link-arg=` only works for the crate the build.rs file is in. So the first change is to move build.rs to the root so it applies to all crates.
This breaks any app that depends on `libtock_runtime` but not `libtock`, such as Ti50's apps. This is exactly why I didn't use `cargo:rustc-link-arg=` to specify the linker script before. If we want to do this, we'll need to create a new crate such as `libtock_runtime_build` to provide this functionality, so that apps do not need to copy `build.rs` from `libtock_runtime` in order to function.

And wait... so if you build a library's examples, `cargo` will listen to `cargo:rustc-link-arg=`, but if you build a `bin` crate that depends on the library, `cargo` will ignore `cargo:rustc-link-arg=`? So examples can do things that external crates using the library cannot? That seems unreasonable.
If so, we should move the apps out of `examples/` and into some other location (`bin/`?) where `cargo` treats them equally with external crates.
> This breaks any app that depends on `libtock_runtime` but not `libtock`, such as Ti50's apps. This is exactly why I didn't use `cargo:rustc-link-arg=` to specify the linker script before. If we want to do this, we'll need to create a new crate such as `libtock_runtime_build` to provide this functionality, so that apps do not need to copy `build.rs` from `libtock_runtime` in order to function.
How do these apps get a linker file? Do they set `no_auto_layout` and have a strategically placed layout.ld file?
> ...so if you build a library's examples, `cargo` will listen to `cargo:rustc-link-arg=`, but if you build a `bin` crate that depends on the library, `cargo` will ignore `cargo:rustc-link-arg=`?
Yes, I'm pretty sure. If I understand correctly, this is intended behavior that the Rust team voted on: https://github.com/rust-lang/cargo/issues/9554#issuecomment-886947409
> How do these apps get a linker file? Do they set `no_auto_layout` and have a strategically placed layout.ld file?
In retrospect, Ti50 was a poor choice of example, because they do specify `no_auto_layout`.

However, the way an app would normally get a linker script is:

1. Specify `-C link-arg=-Tlayout.ld` in their RUSTFLAGS (via `.cargo/config` or another mechanism).
2. `libtock_runtime`'s `build.rs` copies `layout.ld` into cargo's `OUT_DIR`.

I deleted a comment of mine and GitHub closed the PR?!
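The copy-into-`OUT_DIR` step can be sketched as a conventional build.rs; this is an assumption about the shape, not `libtock_runtime`'s exact code. The key point is that adding `OUT_DIR` to the link search path lets the app's `-Tlayout.ld` flag resolve:

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// Pure helper: the search-path instruction cargo expects on stdout.
fn link_search_for(dir: &str) -> String {
    format!("cargo:rustc-link-search={}", dir)
}

fn main() {
    // In a real build.rs, cargo always sets OUT_DIR; fall back to a
    // temp dir so this sketch also runs standalone.
    let out_dir = env::var("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|_| env::temp_dir());
    // Stand-in for copying the crate's layout.ld into OUT_DIR.
    fs::write(out_dir.join("layout.ld"), "/* memory layout */\n")
        .expect("failed to write linker script");
    println!("{}", link_search_for(&out_dir.display().to_string()));
}
```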
While I do like the elegance of this approach and how it works with cargo and build.rs, do you actually want apps to do this? The linker files are named based on platform names, which could be confusing if I just want the settings but am not, say, a microbit. But more importantly, changes to the kernel can mean the upstream linker scripts don't work, and then what do we do? We can't send a PR, because others may depend on that specific linker script.
We ask all Tock kernel boards to include a build.rs, so why not libtock-rs apps?
Ok, but you might ask: well, brad, if downstream apps want to build for all platforms, as this PR proposes, how are they going to do that?!? And that... is a good question.
After looking through https://github.com/rust-lang/cargo/issues/9554, I think the correct answer is:
1. Create a new crate, `libtock_runtime_build`, that exports the `auto_layout` function.
2. Have apps depend on `libtock_runtime_build` via `[build-dependencies]`, and have a `build.rs` script that calls `libtock_runtime_build::auto_layout`.
3. Remove the `no_auto_layout` feature flag from `libtock_runtime`, as making its invocation manual makes the feature flag unnecessary.

I'm still trying to figure out whether we still have a race condition with this implementation -- possibly not? If so, this probably gives us a way to solve it, as we can change the name of the linker script per-location.
Would the ld files move to that crate too?
> Would the ld files move to that crate too?

I think so.
Ok, are we getting closer? Further away? Who knows!
Anyway, there is now a build crate which provides a helper function that crates building libtock apps (such as the libtock crate itself) can use. Its lib.rs provides a helper function that chooses which linker script to use, or creates a new one and uses that. Its build.rs file copies all of the linker scripts to its out dir, and then shares the out dir path via environment variables with the lib.rs file, which can then use it in the app's root main.rs. Is this ok? I'm not really sure.
This means we no longer need any build.rs in the runtime crate.
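One way the build.rs-to-lib.rs handoff described above can work is `cargo:rustc-env`, which bakes a value in when the crate itself is compiled; the variable name here is an illustrative assumption, not necessarily the crate's actual one:

```rust
// In the build crate's build.rs (sketch):
//   let out = std::env::var("OUT_DIR").unwrap();
//   // ...copy the bundled .ld files into `out`...
//   println!("cargo:rustc-env=LINKER_SCRIPTS_DIR={}", out);

// In its lib.rs, the path is then available at compile time. option_env!
// lets this sketch compile standalone, where the variable is unset.
pub fn linker_scripts_dir() -> &'static str {
    option_env!("LINKER_SCRIPTS_DIR").unwrap_or("<not built via cargo>")
}

fn main() {
    println!("linker scripts live in: {}", linker_scripts_dir());
}
```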
I did some make hacking and updated the makefile. Adding a new flash/ram target is now just adding one line like:

```
$(call fixed-target, F=0x40440000 R=0x3fcaa000 T=riscv32imc-unknown-none-elf A=riscv32imc)
```
and it will automatically get built and included in the tab. I think this is something reasonable that external libtock-rs apps could do.
We may still want to split out the makefile to make it easier to include/copy, but that seems like something to do when we actually have an out-of-tree libtock-rs app to experiment with.
I want to update the top level readme once we decide what we want to do.
The switch to the new build crate also speeds up builds from where I started because we no longer have to re-build the runtime crate for each ram/flash/arch tuple. Only the libtock crate has to be built every time.
Should we call the new build crate `libtock_build`? It's useful when building any sort of libtock app. What should the folder name be in this repo?
> Should we call the new build crate `libtock_build`? It's useful when building any sort of libtock app. What should the folder name be in this repo?
Good question; I've been keeping the naming conventions in my head. I wrote them down and sent them as PR #496.

At the moment, all its functionality is specific to `libtock_runtime` (which is why I originally suggested `libtock_runtime_build`), but I don't see a good reason why that should remain the case. Therefore, I think `libtock_build` is a fine name for the Cargo package.

The natural directory name to go with `libtock_build` is just `build`... but that conflicts with the common convention of naming build directories `build`. Perhaps `build_scripts` would be a better directory name? No strong opinion from me; I'm just listing a couple of okay options.
I like `build_scripts` (and `libtock_build_scripts`). It makes it clearer that these are helpers for building apps.
Ok I just went through the changes again and this is back to a place where I'm happy with it.
It seems that if cargo re-runs build.rs, then it also unconditionally compiles/links the crate. I was thinking that if build.rs didn't actually change any files, then maybe cargo would run the build script, see that no files changed, and skip rebuilding the crate. However, I couldn't get that to work.
Not a huge deal, I guess; it's just too bad that running `make tab EXAMPLE=X` twice in a row rebuilds the libtock crate for each flash/ram/target tuple when it doesn't need to.
Rather than have build.rs use an existing linker file, this uses environment variables in the build.rs file to generate a linker script for building the app.
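The generation step might look like the following sketch; the variable names `LIBTOCK_LINKER_FLASH`/`LIBTOCK_LINKER_RAM` and the region lengths are illustrative assumptions, not necessarily what this PR uses:

```rust
use std::env;

// Build a minimal MEMORY block for one flash/ram placement; a real
// build.rs would write this to OUT_DIR and point the linker at it.
fn generate_layout(flash: &str, ram: &str) -> String {
    format!(
        "MEMORY\n{{\n  FLASH (X) : ORIGIN = {}, LENGTH = 0x10000\n  RAM (W) : ORIGIN = {}, LENGTH = 0x10000\n}}\n",
        flash, ram
    )
}

fn main() {
    // Defaults let the sketch run even without the variables set.
    let flash = env::var("LIBTOCK_LINKER_FLASH").unwrap_or_else(|_| "0x00030000".into());
    let ram = env::var("LIBTOCK_LINKER_RAM").unwrap_or_else(|_| "0x20008000".into());
    print!("{}", generate_layout(&flash, &ram));
}
```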
Building for a particular flash/ram address looks like:
Then I have a quick rule in `make tab` to generate an ELF at a couple of locations and then create a TAB with elf2tab.

RFC. Thoughts?
This is useful for creating libtock-rs apps at multiple addresses so multiple apps can be loaded.