emk opened this issue 8 years ago
> [Option 4] Compile `libcore` with floats and then try to remove them again with LTO. This is hackish, and it requires the developer to leave SSE2 enabled at compilation time, which may allow SSE2-based optimizations to slip in even where `f32` and `f64` are never mentioned, which will subtly corrupt memory during syscalls and interrupts.

Since LLVM will implement small-ish `memcpy`s by going through XMM registers, this is bound to happen. For example: `[u64; 2]` copies in release mode. So this option is right out.
I think there's actually a bit of a matrix here which can be helpful when thinking about this. We've got the two axes of "libcore explicitly uses floating point" and "LLVM codegens using floating point". Looking at the possibilities here:
With this in mind, I think it may be better to frame this around "disabling floating point support in generated code" rather than specifically omitting it from libcore itself. For example, if we look at the above matrix, if LLVM is allowed to use floating point registers, then there's no reason to omit the support from libcore anyway (modulo the `fmod` issue, which I think is somewhat orthogonal to the usability in kernels).
As a result, this may lend itself quite nicely to a non-invasive implementation. For example, on Intel processors there may be something like `#[cfg(target_feature = "sse2")]` which we could use to gate the emission of f32/f64 trait implementations in libcore. To me this makes more sense than "pass a semi-arbitrary cfg flag to libcore and also disable some codegen". I would personally be more amenable to a patch like this to libcore, and note that this also naturally extends itself well, I believe, to "libcore supports floats if the target does", so weird architectures may be covered by this as well.
@rkruppe I agree completely. If we don't want SSE2 instructions, we should tell LLVM not to generate them. Generating them and then trying to remove them will obviously fail.
@alexcrichton Thank you for clarifying the issues! If I understand it, you're proposing two things here:

1. `f32` and `f64` should be included in `libcore` if the target supports them, and excluded if it doesn't.
2. This could be implemented using `#[cfg(target_feature = "sse2")]` in `libcore`.

Am I understanding you correctly? If so, I agree that (1) sounds like a perfectly plausible way to address these issues. But if you intended (2) as a literal proposal (and not just an abstract sketch of an implementation), then I'm not convinced it's the right way to go.
The problem with writing something like `#[cfg(target_feature = "sse2")]` is that `libcore` would need to know about every possible platform, and you'd quickly wind up with something like:

```rust
#[cfg(any(target_feature = "sse2", target_feature = "x87", target_feature = "neon",
          target_feature = "vfp", target_feature = "vfp4", target_feature = "soft_float"))]
```

...just to cover the Intel and ARM architectures. And depending on how you implemented it, that conditional might have to appear multiple times in `libcore`. This seems like it would be both ugly and fragile.
Some possible alternatives might be:
```rust
#[cfg(target_feature = "float")]
```

…or:

```rust
#[cfg(target_float)]
```

The advantage of these approaches is that `libcore` wouldn't need to contain long lists of target-specific features, and the decision-making process could be moved closer to `librustc_back/target`, which is in charge of other target-specific properties.
Logically, this information feels like it would be either:

1. a `TargetOption`, or
2. something that `librustc_back/target` could infer from `Target` and `TargetOption`, using code that knows how to interpret features like `sse2` and `neon`.

I'd guess that (1) is fairly easy to implement, and it would work well with the target `*.json` files. (2) would require adding new Rust code for each architecture, to interpret `features` correctly. Either would probably work.
But in the end, I'd be happy to implement just about any of these approaches—whatever works best for you. Like I said, my goal here is to provide a long-term roadmap for safely writing things like kernel modules using `libcore`, and I'm happy with anything that gets us there. :-)
@emk your understanding is spot on, that's precisely what I was thinking. I'd be fine extending the compiler to have a higher-level notion of "floating point support", where disabling it means something different on every platform, and adding a particular `#[cfg]` for that seems fine to me!
Some prior art here could be the recent addition of the `target_vendor` cfg gate. It's feature gated (i.e. not available on stable by default) but defined in JSON files as well.
Great, thank you for the pointer to `target_vendor`.
Let me sketch out a design to see if I'm in the right ballpark.
Right now, `TargetOptions` has a bunch of fields that control specific kinds of code generation features, including `disable_redzone`, `eliminate_frame_pointer`, `is_like_osx`, `no_compiler_rt`, `no_default_libraries` and `allow_asm`.
We could add a `has_floating_point` field to `TargetOptions`, and a `#[cfg(target_has_floating_point)]` option behind a feature gate. We could also use better names if anybody wants to propose them. :-) This `#[cfg]` could be used to conditionalize `f32` and `f64` in `core`.
This way, we could define a kernel-safe x86 target using something like:

```json
"features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
"has-floating-point": false,
"disable-redzone": true,
```
I think that this would actually be a fairly small, clean patch. (Alternatively, we could try something more ambitious, where `has_floating_point` automatically implied the corresponding `features` list, but that would probably require adding another field named something like `features_to_disable_floating_point` to `TargetOptions`.)
Would this design work as is? If not, how could I improve it? If we can come up with a basically simple and satisfactory design, I'd be happy to try to implement it. Thank you for your feedback!
@emk yeah that all sounds good to me, I'd be fine if disabling floats just implied all the necessary features to pass down to LLVM so they didn't have to be repeated as well.
@alexcrichton Thank you for the feedback!
For a first pass, I'll try to implement `has_floating_point` in `TargetOptions`. If we want that to automatically disable the corresponding `features`, though, we'd probably still need to specify what that means in the target file, at least in the general case:
```json
"features": "...",
"disable-floating-point-features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
"has-floating-point": false,
"disable-redzone": true,
```
Above, we disable floating point, and then we need to explain what that means, so that the compiler can do it for us.
I'm not sure that's really an improvement over:
```json
"features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
"has-floating-point": false,
"disable-redzone": true,
```
Can anybody think of a better design? I'm definitely open to suggestions here, and I'm sure I don't see all the use cases.
And thank you again for your help refining this design!
Hm, yeah, that's a good point, I guess; the features being passed down probably do need to be generic. It's a little unfortunate that you can still construct an invalid target specification by disabling floating point and not disabling the features, but I guess that's not necessarily the end of the world.
OK, I've planned out a block of time this week to work on this (hopefully by midweek-ish, if all goes well).
Ran into this again today :) @emk did you get a chance to work on it at all?
Not yet! I'm bouncing between two different (free-time) Rust projects right now, and this affects the other one. You're welcome to steal this out from under me if you wish, or you can prod me to go ahead and finish it up as soon as possible. :-)
On Sat, Dec 12, 2015 at 12:02, Steve Klabnik notifications@github.com wrote:

> Ran into this again today :) @emk https://github.com/emk did you get a chance to work on it at all?
Okay :) It's not mega urgent for me, either, so I might or might not :)
As a workaround, I created https://github.com/phil-opp/nightly-libcore. It includes thepowersgang's libcore patch.
Great idea guys. Though is getting rid of floating point only part of the picture?
Excuse me if I don't have the full perspective, but what I need is to disable (ARM) neon/vfp instructions in bootstrap/exception handler code so that I know that it won't require the fpu to be enabled or the fpu register file to be saved. (llvm-arm seems to like vldr and vstr for multi-word moves).
I would want to link with a core containing the FPU routines, but know that certain sections don't access the FPU registers. If I understand things, the features are defined at the root compilation unit, making it hard to set compiler features for a sub-crate or sub-unit?
How would such an option affect the `f32` and `f64` types in the language? Would any use of these types become a compile-time error?
> How would such an option affect the f32 and f64 types in the language? Would any use of these types become a compile-time error?
Nope. They'll just not be included in libcore.
You cannot “not” include primitive types. They are a part of the language.
@nagisa Of course not. They're primitive. But you can stop providing an API for them, which is what this RFC suggests.
@Ticki I guess quoting the original question is the best here, since there seems to be some misunderstanding:
> How would such an option affect the f32 and f64 types in the language? Would any use of these types become a compile-time error?
@Amanieu The answers are “in no way” and “no”. You would still be able to write floating point literals using the notations (`0.0`, `0.0f32`, `0.0f64`, `0E1`, etc.) you use today, use the types `f32` and `f64` (but not necessarily the values of these types) anywhere the use is allowed today, and use your own (possibly software) implementations of operations on floating point values to do calculations.
Just chiming in to say that this is an issue I'm hitting too.
Whatever we decide as the solution, I think that it should be user-friendly enough that low-level crate developers can also selectively omit floating-point code from their crates. It should also be documented in the Rust Book so that people know about it.
By that, I mean that instead of a large list of targets to omit, we should definitely have a `#[cfg(float)]` or similar that people can remember and use easily. I see tons of potential errors and maintenance bugs with having to copy-paste a large attribute every time.
Will it be possible to do things that llvm disallows?
https://llvm.org/bugs/show_bug.cgi?id=25823 http://reviews.llvm.org/rL260828
@petevine Generally, no. LLVM's assertions are there for a reason; ignoring them will almost certainly lead to strange bugs.
Is there any progress on this? I think a `has_floating_point` flag in the target JSON, as proposed by @emk, would be the easiest solution. Then we wouldn't need to maintain custom no-float patches anymore.
It would also allow using cargo-sysroot for cross compiling, which makes development for custom targets much easier.
I'll see if I can free up some time again soon to take another look at this.
I had some free time today and started hacking on this. The results are in rust-lang/rust#32651.
Sorry for stealing this, @emk. I hope it's okay with you…
@phil-opp Never any problem at all. :-)
Just chiming in from the sidelines: there exist common CPUs which support single-precision floating point in hardware, but not double precision; for example, the ARM Cortex-M4 with FPU enabled. Maybe it makes sense to have separate feature flags for f32 and f64…
On the other hand, probably any such platform that LLVM supports will have functional soft-float emulation for doubles, so perhaps distinguishing f64 and f32 support is more trouble than it's worth...
soft-float should work on all targets; you just need the `+soft-float` target feature to make sure no floating point code (using SSE registers on x86) is generated, in addition to using the soft-float ABI, which you get with `-C soft-float`. Then there's no real need to make the floats in `core` optional, is there?
(Side note: I think the `rustc -C help` text incorrectly says it generates software floating point calls, when actually it just means a soft-float ABI is used; it could still generate code using the FPU.)
@parched Not all targets support soft-float. In particular, x86_64 doesn't, which results in an LLVM assert if you try to do floating-point operations when the `sse` feature is disabled.
@Amanieu I'm not getting any LLVM assertions with x86-64 and `-C soft-float -C target-feature=-fxsr,-mmx,-sse,-sse2,+soft-float`; where are you seeing them?
@parched see https://github.com/rust-lang/rust/issues/26449
@Amanieu, yes, I saw that; see my comment at the bottom. If you add `+soft-float` you don't get the assertion (I tested it).
I think what we should actually do concerning this general issue is:

1. Get rid of `-C soft-float`. It is useless, because you need a new sysroot too when you change the ABI.
2. Add a `float-abi` option to the target configuration instead.

I've just tried the `-C soft-float` / `+soft-float` combination and it really works! The `libcore` library and the rest of my kernel compile without problems. The generated code contains no SSE instructions [1] and still runs without crashing, even if I leave SSE disabled in the hardware.
Thanks so much for the hint, @parched! I'm strongly in favor of a `float-abi` target option.
[1] `objdump` still shows a few `movaps` instructions in implementations of `core::num::GenericRadix::fmt_int`. But I'm not sure if they're actually used.
@phil-opp Did you disable SSE as well (the `-sse` target feature)? Keep in mind that even with floating point disabled, the compiler can still vectorize integer operations using SSE instructions.
I use the features proposed by @parched above:

```json
"features": "-mmx,-fxsr,-sse,-sse2,+soft-float",
```
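For context, a complete kernel-style target spec built around that features line might look like this (all fields other than `features` and `disable-redzone` are illustrative boilerplate, not taken from this thread):

```json
{
    "llvm-target": "x86_64-unknown-none",
    "arch": "x86_64",
    "os": "none",
    "target-endian": "little",
    "target-pointer-width": "64",
    "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
    "features": "-mmx,-fxsr,-sse,-sse2,+soft-float",
    "disable-redzone": true
}
```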
Ok, I can't reproduce the `movaps` instructions. They're gone now…
> Ok, I can't reproduce the movaps instructions. They're gone now…
Yes, I was surprised, because I'm sure when I did `objdump -d libcore*.rlib | grep xmm` I got nothing.
Note also, the `-sse2` isn't needed, because it depends on `sse`.
> Note also, the -sse2 isn't needed because it depends on sse
Thanks! What about `-fxsr`? What do we need it for?
From a quick grep of LLVM, it seems all that flag does is enable/disable the FXSAVE and FXRSTOR instructions in the assembler.
Well, I don't think it actually needs to be disabled; it's for storing and restoring FP registers, so it's never going to be used if MMX and SSE are disabled.
In any case the compiler won't be generating that instruction on its own without explicitly calling the intrinsic for it.
Yes, so actually it's probably better not to disable this for kernel development, as it could be a useful intrinsic when switching user threads.
Disclaimer: I'm not an x86 kernel/OS dev :-)
Can someone confirm that using `features: -sse,+soft-float,etc` does everything that x86 kernel devs want? I.e., fast context switches (LLVM doesn't use/emit SSE/MMX instructions/registers), `core` can be compiled without modification (no need to `cfg` away chunks of it), and floating point operations (e.g. `x: f32 + y: f32`) lower to software routines (e.g. `__addsf3`)? If yes, then it sounds like this issue is solved and there's no need to modify the `core` crate or do some other language-level change to support this use case. Is my conclusion correct?
I can confirm they lower to software routines as expected and only use general-purpose registers. I can't confirm this is enough for the OP, but I believe it should be.
I just tested this on AArch64 and it works fine with `"features": "-fp-armv8,-neon"`. Note that you will need to recompile compiler-rt, since these options change the ABI (in particular, floating-point values are passed in integer registers).
@parched @Amanieu Thanks for checking!
> Note that you will need to recompile compiler-rt since these options change the ABI (in particular, floating-point values are passed in integer registers).
Interesting! Given that we would ultimately like to have Cargo build the compiler-rt intrinsics when you build core/std, perhaps we'll have to add a `target_float_abi` field to target specifications; that way Cargo can check that field to build compiler-rt with the right float ABI. Or perhaps we'll port the compiler-rt intrinsics to Rust before that becomes necessary.
In practice you can probably get away with a standard compiler-rt as long as you don't use any floating point values in your code.
(I was talking to @huonw about embedded Rust the other day, and he suggested I write this up as an RFC issue. I hope this is in the correct place!)
I'm having a ton of fun hacking on kernels in Rust. Rust is a wonderful fit for the problem domain, and the combination of `libcore` and custom JSON `--target` specs makes the whole process very ergonomic. But there's one issue that keeps coming up on `#rust-osdev`: `libcore` requires floating point, but many otherwise reasonable environments place restrictions on floating point use.

Existing discussions of this issue can be found here:
- If you disable SSE in `rustc`, you break the parts of `libcore` that deal with floats.
- A patch for building `libcore` without floats, closed without merge. A version of this patch is provided by rust-barebones-kernel, and this patch is frequently recommended on `#rust-osdev`.
- A report that `libcore` depends on floating point.
- A suggestion involving `#[cfg(float_is_broken)]`. Not sure how relevant this is.
. Not sure how relevant this is.Datum 1: Some otherwise reasonable processors do not support floating point
There's always been a market for embedded processors without an FPU. For the most part, these aren't pathologically weird processors. The standard ARM toolchain supports
--fpu=none
. Many of the older and/or lower-end ARM chips lack FPUs. For example, the FPU is optional on the Cortex-M4.Now, I concur (enthusiastically) that not all embedded processors are suitable for Rust. In particular, there are processors where the smallest integer types are
u32
andi32
, makingsizeof(char) == sizeof(uint32_t) == 1
in C, and whereuint8_t
literally does not exist. There were once quite a few CPUs with 36-bit words. I agree that all these CPUs are all fundamentally unsuitable for Rust, because Rust makes the simplifying decision that the basic integer types are 8, 16, 32 and 64 bits wide, to the immense relief of everybody who programs in Rust.But CPUs without floating point are a lot more common than CPUs with weird-sized bytes. And the combination of
rustc
andlibcore
is an otherwise terrific toolchain for writing low-level code for this family of architecture.Datum 2: Linux (and many other kernels) forbid floating point to speed up syscalls and interrupts
Another pattern comes up very often: how fast can you handle `write` or another common syscall?
or another common syscall?These constraints point towards an obvious optimization: If you forbid the use of floating point registers in kernel space, you can handle syscalls and interrupts without having to save the floating point state. This allows you to avoid calling epic instructions like
FXSAVE
every time you enter kernel space. Yup,FXSAVE
stores 512 bytes of data.Because of these considerations, Linux normally avoids floating point in kernel space. But ARM developers trying to speed up task switching may also do something similar. And this is a very practical issue for people who want to write Linux kernel modules in Rust.
(Note that this also means that LLVM can't use SSE2 instructions for optimizing copies, either! So it's not just a matter of avoiding `f32` and `f64`; you also need to configure your compiler correctly. This has consequences for how we solve this problem, below.)

**Possible solutions**

Given this background, I'd argue that "`libcore` without floats" is a fairly well-defined and principled concept, and not just, for example, a rare pathological configuration to support one broken vendor.

There are several different ways that this might be implemented:
- [Option 1] Disable `f32` and `f64` when building `libcore`. This avoids tripping over places where the ABI mandates the use of SSE2 registers for floating point, as in https://github.com/rust-lang/rust/issues/26449. The rust-barebones-kernel `libcore_nofp.patch` shows that this is trivially easy to do.
- [Option 2] Move `f32` and `f64` support out of `libcore` and into a higher-level crate. I don't have a good feel for the tradeoffs here—perhaps it would be good to avoid crate proliferation—but this is one possible workaround.
- [Option 3] … `x86_64` (https://github.com/rust-lang/rust/issues/26449 again), so it seems like this approach is susceptible to bit rot.
- [Option 4] Compile `libcore` with floats and then try to remove them again with LTO. This is hackish, and it requires the developer to leave SSE2 enabled at compilation time, which may allow SSE2-based optimizations to slip in even where `f32` and `f64` are never mentioned, which will subtly corrupt memory during syscalls and interrupts.

What I'd like to see is a situation where people can build things like Linux kernel modules, pure-Rust kernels and (hypothetically) Cortex-M4 (etc.) code without needing to patch `libcore`. These all seem like great Rust use cases, and easily disabling floating point is (in several cases) the only missing piece.