Centril opened this issue 5 years ago (status: open)
To the extent that `dbg!` is "just" sugar for `eprintln!`, should the latter also be supported in const contexts?

`eprintln!` would be surprising, I think, since it's not primarily meant as a debugging tool. Changing its behaviour in such a big way (not printing at runtime anymore) seems like it would warrant a lot more discussion.
So, does this proposal involve making `dbg!` not print at runtime anymore?
(Depending on the answer to @SimonSapin's question; if the answer is "yes" then the following is moot.)

cc @eternaleye -- do you think there's any problem wrt. soundness of `[u8; dyn bar(n)]` being the same type as `[u8; dyn bar(n)]` for two separate invocations of `bar(X)` given `const fn bar`, if we let `dbg!(x)` work in `const fn`? My worry is that `dbg!(x)` would always work at CTFE but that it may non-deterministically panic at run-time based on whether STDERR exists or not...
> So, does this proposal involve making `dbg!` not print at runtime anymore?
Depends on how you're using it. Consider

```rust
const fn foo(u: u32) -> u32 { dbg!(u * 2) }
```

If you call that function in runtime code

```rust
fn main() {
    foo(42);
}
```

you get a runtime print of `84`,

but if you call that function in a const context

```rust
type Foo = [u8; foo(42)];
```

then you get a compile-time print of the sort

```
note: `dbg!` print encountered
type Foo = [u8; foo(42)];
                ^^^^^^^ `dbg!` print: `84`
```
@centril Yeah, the usual framing of debug logging to make it pure is that it's a no-op or identity function that happens to be a convenient thing for a debugger or runtime to instrument, along the lines of `fn debug<T>(x: T, msg: &'static str) -> T { x }`.

Here, `dbg!()` is explicitly side-effecting: it writes to stderr, advancing the write cursor, etc. It both acts on the environment (an effect) and relies on the environment (a coeffect).
@eternaleye Bummer, so I believe we have 3 options then?

1. Allow `dbg!(x)` inside a `const fn` such that it cannot output anything at run-time.
2. Introduce `bikeshed!(x)` such that it behaves as in 1., but `dbg!(x)` cannot be used in `const fn`s.
3. Let `dbg!(x)` in a `const fn` also print during run-time. (but this is a bad idea..)

My preference would be towards (2), where `bikeshed!(x)` prints a compile-time message.
EDIT: To expand, the idea that a `const fn` has any effect at all at runtime is weird and confusing to me. Especially in `no_std` situations, it's not clear when any runtime effects should happen: at initialization? at any place where a `const` produced from the `const fn` would be used (what about constant propagation?)?

Even supposing that there was a theoretically correct way to have a `const fn` with runtime effects, that behavior doesn't seem obvious to me in any way. And my guess is that it could be a stumbling block to others too.
I don't know what 3. is, but I think we should have `dbg!` as I described above: unchanged runtime behavior, and at compile-time you get a compile-time diagnostic instead.
The documentation of `dbg!` explicitly says

> The exact output printed by this macro should not be relied upon and is subject to future changes.

and

> The macro works by using the Debug implementation of the type of the given expression to print the value to stderr along with the source location of the macro invocation as well as the source code of the expression.

It does not talk about how it actually ends up getting that information to stderr. The `dbg!` macro can be seen as the "warning" variant of `panic!` at compile-time, which also moves a runtime side effect to a compile-time side effect.

I'm imagining the following table as the four things we want to do, and they would be perfectly covered by just `panic!` and `dbg!`, without the need for mutually exclusive debugging macros (one for runtime and one for compile-time).
|              | `dbg!`          | `panic!`     |
|--------------|-----------------|--------------|
| run-time     | print to stderr | abort/unwind |
| compile-time | warn            | error        |
> Even supposing that there was a theoretically correct way to have a const fn with runtime effects, that behavior doesn't seem obvious to me in any way. And my guess is that it could be a stumbling block to others too.

As stated above, we already have such an effect, and an accepted RFC for it. I don't see how debugging is any different. I understand that there's a purity-theoretical point speaking against it, but purity of `const fn` is already not happening at runtime the moment you allow generic `const fn` with trait bounds. At run-time those trait bounds will not require `const` trait impls, so the actual trait impls can do anything they can at runtime, without restrictions.

> the idea that a const fn has any effect at all at runtime is weird and confusing to me. Especially in no_std situations, it's not clear when any runtime effects should happen: at initialization? at any place where a const produced from the const fn would be used (what about constant propagation?)?

The printing will either happen at compile-time (if you used the `const fn` in a constant), or at runtime, e.g. if you just called it in your main function.

For `no_std` situations it's not an issue, because `dbg!` is in `std`, and not in `core`.
> The documentation of `dbg!` explicitly says

I don't think the issue is changing `dbg!`; I think the issue is permitting a monomorphic (and using only `impl const Trait`s where substitution is needed) `const fn` to not behave in a deterministic fashion at run-time.
> As stated above, we already have such an effect, and an accepted RFC for it.

There's no problem with panics from `const fn`s at run-time; they occur in a deterministic fashion.
> I understand that there's a purity-theoretical point speaking against it, but purity of const fn is already not happening at runtime the moment you allow generic const fn with trait bounds. At run-time those trait bounds will not require const trait impls, so the actual trait impls can do anything they can at runtime without restrictions.

Yes, but you would require that when `[T; dyn f(n)]` is applied twice, you can only assume `f(n) == f(n)` if and only if `f` is a `const fn` where all type variables are substituted for types whose traits satisfy the `impl const` variant. However, if all such `const fn`s may sneak non-determinism in via `dbg!`, then you can no longer soundly assume this.
> There's no problem with panics from const fns at run-time; they occur in a deterministic fashion.

Can you elaborate on the non-determinism that you refer to? Is it about the `Debug` implementation potentially being nondeterministic? Because `const fn` will still require `const Debug` inside that function. Of course, if you pass in a `&dyn Debug` or `impl Debug`, those aren't required to have `const Debug` impls at runtime, but that's exactly what the `impl const Trait` RFC allows.
> Of course, if you pass in a `&dyn Debug` or `impl Debug`, those aren't required to have `const Debug` impls at runtime, but that's exactly what the `impl const Trait` RFC allows.

Wait what? The RFC says nothing about allowing that...
> Is it about the `Debug` implementation potentially being nondeterministic? Because `const fn` will still require `const Debug` inside that function.

That's not the issue, for the reason you say.
> Can you elaborate on the non-determinism that you refer to?
When you say

```rust
eprintln!("[{}:{}] {} = {:#?}", file!(), line!(), stringify!($val), &tmp);
```

you are printing to the standard error. This will eventually end up in:

```rust
#[unstable(feature = "print_internals",
           reason = "implementation detail which may disappear or be replaced at any time",
           issue = "0")]
#[doc(hidden)]
#[cfg(not(test))]
pub fn _eprint(args: fmt::Arguments) {
    print_to(args, &LOCAL_STDERR, stderr, "stderr");
}

/// Write `args` to output stream `local_s` if possible, `global_s`
/// otherwise. `label` identifies the stream in a panic message.
///
/// This function is used to print error messages, so it takes extra
/// care to avoid causing a panic when `local_stream` is unusable.
/// For instance, if the TLS key for the local stream is
/// already destroyed, or if the local stream is locked by another
/// thread, it will just fall back to the global stream.
///
/// However, if the actual I/O causes an error, this function does panic.
fn print_to<T>(
    args: fmt::Arguments,
    local_s: &'static LocalKey<RefCell<Option<Box<dyn Write + Send>>>>,
    global_s: fn() -> T,
    label: &str,
)
where
    T: Write,
{
    let result = local_s.try_with(|s| {
        if let Ok(mut borrowed) = s.try_borrow_mut() {
            if let Some(w) = borrowed.as_mut() {
                return w.write_fmt(args);
            }
        }
        global_s().write_fmt(args)
    }).unwrap_or_else(|_| {
        global_s().write_fmt(args)
    });
    if let Err(e) = result {
        panic!("failed printing to {}: {}", label, e);
    }
}
```
Now someone may either overwrite `LOCAL_STDERR` (not on stable tho), or there may simply not exist a `global_s ==> stderr`. In that case `print_to` will panic. Thus we may non-deterministically have panics. However, at compile-time there will be no panic.
> Wait what? The RFC says nothing about allowing that...

https://github.com/rust-lang/rfcs/pull/2632/files#diff-93d98aeffe5c71a78e51f4c4dbdd1a4bR198
As an example:

```rust
trait Foo {
    fn foo(&self);
}

struct A;
struct B;

impl const Foo for A {
    fn foo(&self) {}
}

impl Foo for B {
    fn foo(&self) {
        // Open file handles and network handles and compute some random numbers
    }
}

const fn bar<T: Foo>(t: &T) {
    t.foo();
}

const X: () = bar(&B); // not legal
const Y: () = bar(&A); // legal

fn main() {
    bar(&B); // legal
    bar(&A); // legal
}
```

You can extend this to `&dyn Foo` arguments without generics, and the same rules and logic apply.
> Now someone may either overwrite LOCAL_STDERR (not on stable tho), or there may simply not exist a global_s ==> stderr. In that case print_to will panic. Thus we may non-deterministically have panics. However, at compile-time there will be no panic.

So the nondeterminism that can happen is that `stderr` is broken or user-defined? I personally don't see why that would be a problem (see also my argument for `const fn` at runtime being able to call arbitrary non-const-fn code via their arguments).
> As an example:

Oh, that's fine... that's just the essence of effect polymorphism...

However:

```rust
const fn bar() { B.foo(); } // Nope.
```

or with extensions:

```rust
const fn bar<T: const Foo>(t: &T) { t.foo(); }
fn main() { bar(&B) } // Nope.
```
> So the nondeterminism that can happen is that `stderr` is broken or user-defined?

Yep; especially without involving any `unsafe { .. }`. I think if you could get it to abort rather than panic it might not affect soundness, but all of this needs more careful thought and I think I'd need to consult with some type theorists (e.g. at my university).
I hit this today! I wanted to debug a run-time call of a function that also happened to be called at compile-time, and got an error message about function pointers not being allowed in `const fn`. I assume this refers to some detail of the `fmt` machinery.

In this case, having `dbg!` be allowed in a `const fn` would have been useful even if it's (for now) a no-op at compile-time.
Implementing a no-op `dbg!` would require https://github.com/rust-lang/rust/pull/64683/files#r336732380 to be implemented first.
I'd love a `dbg!` that worked in `no_std` contexts as well (but perhaps that should have a different tracking issue...)
Where would the output go? By definition, a `no_std` context does not have the standard library's global standard output and standard error streams.
`dbg!` calls `eprintln!`, but outside of that it's pretty simple. If you have something like `serial_port_println!` you can copy-paste `dbg!` and modify it to call that instead.
I get that, but panic prints out to the screen in a `no_std` context, so there's room for some quality-of-life special cases. If there's really no stdout and I'm running inside a keyboard, then I totally get that `dbg!()` can be a no-op (or could go to a hook that people provide, like the global allocator?), but if I'm running `no_std` tests, for example, it's kinda annoying that I have to import the `log` crate to print some debug output. I'd like my `no_std` cake and to be able to see lines of `dbg!` too (where appropriate). `no_std` devs like nice things too.
> I get that, but panic prints out to the screen in a no_std context

Does it really? To what screen?

Or do you mean compile-time panics in `const` contexts? (Since that's what this issue is about.) Because for run-time panics, a `no_std` program must define a `#[panic_handler]` function. If there's a screen that you want panic info printed to, this handler is where you write code to make it happen. There might or might not be a screen or serial port or debugger, but the `core` crate can make no assumptions about how to talk to them.
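For reference, the handler being described is the standard `#[panic_handler]` skeleton; in a `no_std` crate it looks roughly like this (a non-runnable crate fragment; the loop body is where platform-specific output code would go):

```rust
#![no_std]

use core::panic::PanicInfo;

// core calls this on panic in a no_std program. Writing the panic info
// to a screen, serial port, or debugger is entirely up to this function.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```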
If you do mean run-time panics, yes, discussing that somewhere else would be more appropriate.

We can't add compile-time-only `no_std` support for `dbg!`, since a `const fn` can also be called with non-constant parameters at run-time.
I guess Rust's test runner must add its own panic handler, and that's how it writes to stdout. If we had a `dbg_handler` fn then libtest could implement that. But you're right, I can't see how that would be relevant to a const-eval `dbg!()`.
The `test` crate depends on `std`:

So the test runner is not a `no_std` application, even if some of the crates involved (like the one being tested) are `no_std`. If `std` is anywhere in an application's dependency graph, then its `#[panic_handler]` is used and you don't need to (and can't) define another one.
Firstly, thank you for your patience!

I think what I should be arguing for (assuming this isn't already the case) is for there to be a `#[cfg(not(feature = "no_std"))]` in the definition of `dbg!`, so that it doesn't do anything if compiled in a `no_std` context, but if the code is compiled in a `std` context then it does the standard print.

But this relies on there being a `std` or `no_std` feature defined, which alas are not defined. (I would honestly love for `std` or `no_std` to be defined as a feature; it would make things simpler.)
So, I thought of a way this could work and started writing up an RFC, but then realised that most of this is kind of out of my league, since I don't know nearly as much about compiler internals.

Essentially, the idea is we'd add separate `dbgprint!` and `dbgprintln!` macros which expand to:

- `eprint!` or `eprintln!` in std-enabled, runtime contexts
- a `const_dbgprint` intrinsic in const contexts, which is then consumed by miri to collect debug output per constant being evaluated.

The definition of `dbg!` can be modified to use `dbgprintln!` instead, but we'd also offer these macros to the user directly.
One big benefit of these is that it'd help with tooling, letting people search for explicit cases of debug printing without having to use the extra stuff printed by `dbg!`. But it could also, longer-term, serve as a replacement for `trace_macros` as well, since you could potentially surround the entire macro expansion inside a constant to allow `dbg!` printing there too.
Could also help replace existing proposals like `debug_warn!` and `compile_warning!` too, if you just output a constant with your message:

```rust
const FYI: () = {
    if !cfg!(debug_assertions) {
        dbgprintln!("This code is terrible and probably shouldn't be run in production.");
    }
};
```
Like I said, I have nowhere near enough compiler-internals knowledge to actually write an RFC for this, but I would be willing to elaborate more on the specifics for anyone who does want to write one. This feels like the only appropriate way to make `dbg!` work in const contexts, tbqh.
We can't change macro expansion depending on whether we are in a const context or not. I'd also prefer to generate the same MIR in const contexts and outside of them, as otherwise it gets very hard to do stability checking for const things (we may accidentally stabilize things or allow things that are semver hazards).
Isn't that how panics work today, though? Or am I misunderstanding how panics are done?
Panics in CTFE use the same machinery as panics at runtime. CTFE just sees the panic functions and runs its own logic at that point. The panic functions are basically intrinsics.
(Tagging as "design concerns" only because it sounds like this still needs some design work, not just implementation work.)
> panics in CTFE use the same machinery as panics at runtime. CTFE just sees the panic functions and runs its own logic at that point. The panic functions are basically intrinsics
There's no reason that the internal `_eprint` function in the std library couldn't be treated in a similar way, where there is a const and a non-const version of the same function.

Maybe we wouldn't want to hook `_eprint` itself, but rather a more specialized `_dbgprint` function or something; my point is that there is no technical reason this can't be done.
> We should make `dbg!(expr)` work in `const fn` by using some `lang_item` that is on the body of a function that contains the contents of `dbg!(...)`. We also need to make it work with inlining.
>
> This sorta needs `impl const Debug` to work, but in the meantime we can just dump the miri state something something, @oli-obk can fill in...