Open orf opened 2 weeks ago
cc @gaogaotiantian
I don't believe this is a pdb issue; it's more about how Python should behave in a certain scenario. I don't think this is trivial (minor), and I think there should be a discussion. We should not change the default behavior without many people supporting it. I don't see an obvious advantage for the alternative behavior, even though I can understand that some people might prefer it.
The problem is, when the env var is not set correctly and we fall back to pdb, users who thought they'd set the env var might wonder: why didn't my env var work? Having the exact same behavior for an empty env var and a wrong (or unimportable) one may or may not be the better choice. However, it definitely breaks the current behavior, so I'm leaning towards not adding it.
A new env var is better on the compatibility side, but a new env var is also not nothing - I think we are trying not to explode the env var space.
Therefore, all of this comes back to: how many people need this? And that's why I would encourage a discussion on d.p.o.
It's also easy to unset it for any particular run, e.g. PYTHONBREAKPOINT= runmystuff
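For example (runmystuff.py here is just a stand-in for whatever you are running; only the last line is self-contained and runnable as-is):

```shell
# Hypothetical per-run overrides for a script named runmystuff.py:
#   PYTHONBREAKPOINT= python3 runmystuff.py    # empty value: use the default pdb hook
#   PYTHONBREAKPOINT=0 python3 runmystuff.py   # the sentinel "0": skip breakpoints entirely
# Self-contained demonstration of the "0" form:
PYTHONBREAKPOINT=0 python3 -c 'breakpoint(); print("breakpoint skipped, program continues")'
```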
That said, I'm not sure the current behavior is very useful. At the very least, we should make this change for 3.14.
> The problem is, when the env var is not set correctly and we fall back to pdb, users who thought they'd set the env var might wonder: why didn't my env var work?
I understand your arguments for keeping the current behaviour, but I find the current inverse question of "why did my breakpoint not work?" to be more unexpected, confusing, and limiting.
If a user, like myself, is in a position to be customising their environment variables, then I would expect them to be able to understand the warning and make the change.
My primary motivation for this is just to have a “set and forget” default in my shell for all my projects, and it’s not a major issue if this is decided against.
However, if a user isn't that familiar with env vars, and/or has one implicitly set somewhere (via a forgotten shell config, dotenv file, Docker container, etc.), then the current behaviour of breakpoints just not working is much more inscrutable and annoying, even with the warning.
I do think it's a sensible default to add and a simple thing to change: a breakage would require code that implicitly relies on the debugging interface doing nothing when given an invalid path. Is that behaviour something that should be relied upon, or made an invariant?
I think it would be acceptable to emit a warning and fall back to pdb - we should at least let the user know that we tried to import whatever they asked for, but failed. Silently falling back to pdb feels a bit weird to me.
A warning should be emitted, for sure. We could even emit it when the breakpoint is hit and the fallback to pdb occurs, rather than at the start of the process, which should be much more visible.
We are emitting the warning when the breakpoint is hit now I think.
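A quick way to check the current behaviour for yourself (the module name below is deliberately nonexistent, so the hook import is guaranteed to fail):

```python
import os
import warnings

# Point PYTHONBREAKPOINT at a module that certainly cannot be imported.
# The default sys.breakpointhook re-reads the env var on every breakpoint()
# call, so the warning fires at the call site, not at interpreter start-up.
os.environ["PYTHONBREAKPOINT"] = "no_such_debugger.set_trace"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    breakpoint()  # import fails: a warning is emitted and execution continues

print(caught[0].category.__name__)  # → RuntimeWarning
```

Note that execution simply carries on past the `breakpoint()` call - which is exactly the "breakpoints are disabled" behaviour being discussed.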
Feature or enhancement
Proposal:
I'd like to use ipdb rather than the built-in pdb by default for my breakpoints. However, if I set PYTHONBREAKPOINT="ipdb.set_trace" as a global shell environment variable, and the project does not have ipdb available, then all breakpoints are disabled.

IMO this isn't very user-friendly and can be quite annoying: I need to remember to export it on a per-project or per-shell basis. It would be great to fall back to the standard pdb breakpoint handler if the import fails.

If this is a compatibility issue, perhaps a new PYTHONDEFAULTBREAKPOINT env variable could be added (one that can be set globally), which is used if available and falls back to pdb if not?

Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
Links to previous discussion of this feature:
No response
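For what it's worth, the fallback described in the proposal can be prototyped today as a user-installed hook. A minimal sketch (the name fallback_breakpointhook is mine; this imitates, rather than reuses, CPython's default hook resolution):

```python
import importlib
import os
import sys
import warnings


def fallback_breakpointhook(*args, **kwargs):
    # Hypothetical replacement hook: resolve $PYTHONBREAKPOINT like the
    # default sys.breakpointhook does, but fall back to pdb (with a
    # warning) when the import fails, instead of skipping the breakpoint.
    import pdb  # imported lazily, as the default hook does

    hookname = os.environ.get("PYTHONBREAKPOINT")
    if not hookname:  # unset or empty: the normal pdb behaviour
        return pdb.set_trace(*args, **kwargs)
    if hookname == "0":  # explicit opt-out sentinel
        return None
    modname, _, funcname = hookname.rpartition(".")
    try:
        module = importlib.import_module(modname or "builtins")
        hook = getattr(module, funcname)
    except (ImportError, AttributeError):
        warnings.warn(
            f"Unimportable $PYTHONBREAKPOINT: {hookname!r}; falling back to pdb",
            RuntimeWarning,
            stacklevel=2,
        )
        hook = pdb.set_trace
    return hook(*args, **kwargs)


sys.breakpointhook = fallback_breakpointhook
```

With this installed (e.g. from sitecustomize), breakpoint() honours a valid PYTHONBREAKPOINT, while an unimportable one warns and drops into pdb instead of doing nothing.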