Closed: vstinner closed this issue 2 months ago
I removed the check since it never failed
My concern is that this ties the API to current platforms, with currently common settings. For example, I expect that sandboxing will be more common on Wasm.
As I said elsewhere, if an API could fail in the future, it should have a way to report failure. Error-checking is unpleasant, but it's part of using a C API.
Clock functions like Windows GetTickCount64() (used by time.monotonic()) and gettimeofday() (used by time.time()) simply cannot fail. The strange thing is that time() can fail according to its manual page, whereas gettimeofday() cannot :-) Nowadays, Python tries first to call clock_gettime(), but then has multiple fallbacks if it fails.
Error-checking is unpleasant, but it's part of using a C API.
My point is more that if there is a corner case where reading clock can fail, returning 0 and ignoring error is an acceptable trade-off. The program is unlikely to run normally if all clocks are blocked by a sandbox, other things will fail anyway. And for me, it's more the responsibility of the user to fix their setup, than Python to report errors in such case.
For example, I expect that sandboxing will be more common on Wasm.
Do you have a concrete example? Or is it a theoretical concern?
Well, I guess we disagree. I think that API functions should be able to report failure. Exceptions to that should be very, very rare (and if they do happen I'd prefer adding an additional variant without error-checking.)
Bothering all users to have to check for errors just for that would be overkill.
As a code reviewer, seeing a Py* API call without an error check is a red flag; for functions that don't need this I need to check the docs (unless it's very common, like an incref). This bothers me more than having to write a few extra lines of error-checking.
The C API is designed to be convenient to use, not to be "perfect" (reporting unlikely errors).
If we're OK with an imperfect API, let's bring back _PyTime_t & co.
If we -- as a group -- believe that its biggest problem is the leading underscore, then I vote for not removing it.
Python has around 34 calls to the _PyTime_GetMonotonicClock(), _PyTime_GetPerfCounter() and _PyTime_GetSystemClock() functions, which cannot report errors. It also has around 35 calls to the _PyDeadline_Init() and _PyDeadline_Get() functions, which cannot report errors: wrappers on top of _PyTime_GetMonotonicClock() that implement a "deadline".
I would prefer not to modify this code just for a theoretical, or at least "unlikely", issue.
GetTickCount64() cannot fail, but the conversion from milliseconds to nanoseconds (x 1_000_000) can overflow as we get closer to _PyTime_MAX (in the year 2262: so in 238 years).
macOS: mach_absolute_time(), mach_timebase_info() cannot fail. Same here, Python can report an error if mach_timebase_info() returns (0, 0), but it would be a macOS bug and most programs would be affected by that.
HP-UX: gethrtime() can fail according to its manual page, but the manual page example doesn't check for errors. I'm not sure why an error case is documented. Can it really happen in practice?
Solaris: clock_gettime(CLOCK_HIGHRES) can fail. I have never seen such a failure.
Unix: clock_gettime(CLOCK_MONOTONIC) can fail. I have never seen such a failure.
Windows: QueryPerformanceFrequency(), QueryPerformanceCounter(): cannot fail. Python can report an error if QueryPerformanceFrequency() returns 0, but that would be a big bug in Windows affecting most programs!
Otherwise: call _PyTime_GetMonotonicClock().
@gvanrossum @iritkatriel @zooba: So, what do you think about these APIs?
This bothers me more than having to write a few extra lines of error-checking.
See how these functions are used. There are cases where errors cannot be reported, such as PyThread_acquire_lock_timed().
I think that API functions should be able to report failure. Exceptions to that should be very, very rare (and if they do happen I'd prefer adding an additional variant without error-checking.)
Python has been using these APIs for around 10 years without any issue. It seems like Cython is also using them. We never got any issue reported about them. So I'm confused about why they should now be changed.
Python has around 34 calls to the _PyTime_GetMonotonicClock(), _PyTime_GetPerfCounter() and _PyTime_GetSystemClock() functions, which cannot report errors. It also has around 35 calls to the _PyDeadline_Init() and _PyDeadline_Get() functions, which cannot report errors: wrappers on top of _PyTime_GetMonotonicClock() that implement a "deadline".
But there's no need for these to use the public API. Is there? We can adjust these calls any time we want, if we need to port to a platform without a good clock. We can't quite do that for public API.
Other programming languages
So I'm confused about why they should now be changed.
We are now exposing them as supported public API, rather than just-good-enough internal helpers. That is what removing the underscore means. We need to do API design now.
And to return to a question I omitted:
Do you have a concrete example? Or is it a theoretical concern?
You've provided an example yourself:
Over 10 years, I saw a single failure in a custom sandbox which blocked syscalls to read time. It was a single user on a very specific issue, and it was an issue in the sandbox config, not in Python.
Python has a mechanism for reporting rare issues like a misconfigured environment: raising an exception.
But there's no need for these to use the public API. Is there?
See the rationale of my first message: Cython uses this API.
We are now exposing them as supported public API, rather than just-good-enough internal helpers. That is what removing the underscore means. We need to do API design now.
Cython uses the current private API.
We can adjust these calls any time we want, if we need to port to a platform without a good clock. We can't quite do that for public API.
Last time I checked, Hurd has no support for a monotonic clock. Python doesn't support Hurd. If someone wants to run Python on Hurd and there is no CLOCK_MONOTONIC support, just use CLOCK_REALTIME and cross fingers :-)
C gettimeofday() can fail. When it does, it returns -1 and should set errno.
I used some shortcuts: py_get_system_clock() passes NULL as the second argument. The function cannot fail with EINVAL. Other errors are about settimeofday().
C time() can fail. When it does, it returns -1 and should set errno.
Oh sorry, I read the wrong manual page. Python doesn't call time() anymore.
Rust std::time::*::now() cannot return an error, but it may panic.
The documentation says "Note: mathematical operations like add may panic if the underlying structure cannot represent the new point in time." But I didn't see anything about now() triggering a panic. I suppose that yes, it can trigger a panic. I just didn't see any clear mention in the doc.
Also, one alternative is to call Py_FatalError() if a clock fails. I hate this function, since it kills the process. It's really bad when Python is embedded. But it's a tradeoff for things which "must not happen".
I think the discussion is starting to go in circles.
I fail to see how Cython is related to _PyTime_GetMonotonicClock(), _PyTime_GetPerfCounter(), _PyTime_GetSystemClock(), _PyDeadline_Init(), and _PyDeadline_Get(). Cython's use of the time API looks trivially portable to fallible functions. Any projects that use this more heavily can use a wrapper that catches the exception and returns 0, kills the process, or does any other kind of error handling.
Again, if we're concerned that porting to a better API is too hard, then I vote for keeping the existing imperfect API, with a leading underscore, while (and after) we add API without known issues.
Cython's use of the time API looks trivially portable to fallible functions
What do you mean by "trivially portable to fallible functions"? Cython calls _PyTime_GetSystemClock() and _PyTime_AsSecondsDouble() in Cython/Includes/cpython/time.pxd, in the function:
cdef inline double time() nogil:
This function doesn't hold the GIL and always returns a C double.
Again, if we're concerned that porting to a better API is too hard, then I vote for keeping the existing imperfect API, with a leading underscore, while (and after) we add API without known issues.
Why do you say "too hard"?
I'm not sure that there is an advantage of having a C API which behaves exactly like time.monotonic_ns(): need to hold the GIL and can raise an exception. If you hold the GIL and can report errors, well, just call time.monotonic_ns() directly. The only advantage would be to avoid having to import the time module and get the time.monotonic_ns attribute.
The proposed API is different: it has fewer constraints, it can be used without the GIL, and it doesn't require reporting errors. So it's more efficient and can be used in functions which cannot, or don't want to, report errors.
PHP has exceptions; all its functions can fail.
PHP cannot raise exceptions when reading time. Example with gettimeofday() and microtime(). That's the code which is running Wikipedia :-)
if (gettimeofday(&tp, NULL)) {
ZEND_ASSERT(0 && "gettimeofday() can't fail");
}
So PHP users don't have to bother with errors on reading time.
Rust std::time::*::now() cannot return an error, but it may panic.
Right. In terms of API, it means that the user doesn't have to bother with this case, which makes the API more convenient to use. Apparently, errors on reading time are considered too unlikely in Rust to be reported as a regular std::io::Result; instead they cause a panic, which exits the process.
Now we're definitely running in circles, repeating what's already been said. So I'll be brief:
[Cython's time] doesn't hold the GIL and always returns a C double.
Right; its nogil needs to be removed.
The same is true with the proposed API: in general, you need to hold the GIL before calling the C API.
Or did you want to propose a guarantee that the GIL doesn't need to be held?
The proposed API is different: it has fewer constraints, it can be used without the GIL, and it doesn't require reporting errors. So it's more efficient and can be used in functions which cannot, or don't want to, report errors.
There is no demand for such functions to be public API, as far as I can see.
I'm not sure that there is an advantage of having a C API which behaves exactly like time.monotonic_ns()
Good point. Let's not add it, then.
Message of @da-woods on the issue: https://github.com/python/cpython/issues/110850#issuecomment-1921913629
The exceptions don't represent a big change - when calling cdef inline double time() nogil, Cython assumes that a return value of -1 represents a possible exception. Practically that never happens right now, but in principle the signature is of a function that might throw.
Having the GIL is a slightly bigger change. We could make it work and keep the same external interface with a with gil: block inside time() though (although it wouldn't run well in parallel). I think this is something we could live with.
In short, Cython would be fine with an API which requires to hold the GIL and can fail.
But @da-woods has a concern about parallelism if the API changes to require the GIL: "although it wouldn't run well in parallel". The current API doesn't require holding the GIL, and so can scale better with multiple threads.
Pros/cons of the proposed API which doesn't require holding the GIL and cannot fail.
Advantages:
- no need to hold the GIL (nogil).
Disadvantages:
Reporting failures with Py_FatalError() was discussed, but not proposed seriously. Using Py_FatalError() when Python is embedded is really bad.
Apparently, PyTime_Monotonic can fail (return 0) on a Tier 3 platform. The zero leaks to the Python API:
IMO, this is bad: the Python API should definitely raise an exception. If we do add fallible functions, and switch Python wrappers to them, there will need to be a deprecation period in which the Python API ignores exceptions.
Ok, I think that the API has now been discussed at length, and we got the feedback from a Cython developer. It's now time to vote! I propose 3 options:
While I prefer an API which cannot fail, I would be fine with an API which can fail.
I plan to work on a PR for the fallible API today. @zooba, please stop me if you disagree in general.
I plan to work on a PR for the fallible API today.
Hum, I already have a PR https://github.com/python/cpython/pull/112135 as written in the first message. I plan to update it to report errors once the vote completes (waiting for @zooba). Obviously, review and help are welcome ;-)
Working on this, I noticed PyTime_* is already used. Should we perhaps switch to something like:
- PyClock_ns_t, PyClock_ns_MIN, PyClock_ns_MAX
- PyClock_ns_Monotonic, etc.
- PyClock_ns_AsSecondsDouble
(Nanoseconds are abbreviated ns, in lowercase, which of course conflicts with our naming conventions. Out of many possible capitalization variants, the above seems clearest to me.)
Working on this, I noticed PyTime_* is already used.
I chose the PyTime_ prefix since it's the C API of the Python time module. Is it really a blocker issue to reuse the same prefix as the datetime C API? I would prefer to stick to the PyTime_ prefix if possible.
The datetime C API has no prefix, but uses the Py<type>_<method>() convention, and yeah, there are a few PyTime_ functions in this datetime C API:
PyTime_Check()
PyTime_CheckExact()
PyTime_FromTime()
PyTime_FromTimeAndFold()
Is it really a blocker issue to reuse the same prefix as the datetime C API?
No, it's not. It's just something we might want to do.
PyClock_ns_t, PyClock_ns_MIN, PyClock_ns_MAX
The PyTime_t type doesn't stick strictly to nanoseconds; it's designed to support different resolutions. But in practice, yeah, it's a number of nanoseconds.
I designed the API so that tomorrow it can change to a 128-bit integer with a better resolution such as picoseconds (10^−12), or stick to 64-bit but use a resolution of 100 nanoseconds on Windows, for example.
Right, the design proposed here doesn't match the docs in the PR :(
I suppose we need an int PyTime_AsNanoseconds(PyTime_t t, int64_t *result), then, to suggest that users shouldn't depend on this implementation detail. Tomorrow we can guarantee a resolution & range of at least nanoseconds & int64_t, but it can overflow.
I suppose we need an int PyTime_AsNanoseconds(PyTime_t t, int64_t *result)
I decided to only propose a minimal API for now:
The API can be extended later to add "from nanoseconds" and "to nanoseconds" functions if needed.
I want more nuance than those voting categories, so round these up/down to +/-1 as whoever is doing the work feels appropriate:
I don't have particular strong opinions on this case though, so I'll let Petr be my voting proxy.
(i.e. if you don't need a running interpreter or the GIL to use it, it's probably not our job to provide it)
We disagree in general. But let's take that elsewhere :)
The API can be extended later to add "from nanoseconds" and "to nanoseconds" functions if needed.
I disagree. If we don't add the function now, people will simply use PyTime_t as nanoseconds. It'll work, and there'll be no better way. If we do add the function, people will probably also use PyTime_t as nanoseconds -- we can't stop them -- but it'll at least be very clear what they should do instead.
So, my PR (which builds on Victor's) adds a PyTime_AsNanoseconds: https://github.com/python/cpython/pull/115215
OK, I'm fine with keeping PyTime_t as an int64 of nanoseconds.
If Python switches to a 128-bit number of picoseconds, we will need to add a new API, with names like PyTime_ps128_t or something.
The functions added here will start failing in a couple of centuries (or when system clocks are set to such a time), which they can do since they're fallible.
This is a change from the original proposal, and I don't want to side-step anyone, so let's vote:
PyTime_t is int64 of nanoseconds:
- @zooba (by proxy)
(I hope Steve's comment applies to this vote too.)
(Bringing @zooba back in, I guess:)
IMO, this API should be, in the long term, part of the Platform adaptation layer from Steve's 2019 proposal. CPython implements this API for POSIX and Windows; if you port to another platform you'll need to provide it yourself. Anyway, in general this layer should be exposed to users.
Currently we don't have layers, so it'll “just” be part of the public API.
I've merged https://github.com/python/cpython/pull/115215 Thank you for the discussion! Hopefully we don't need to revisit this for a few centuries :)
Thanks for helping me to design the last bits of the public PyTime C API :-) While it's not strictly "needed" by Python to expose such an API to get "portable clocks", apparently some users like it enough to use it even when it's private.
I hesitated a lot over the last years about whether to stick to nanoseconds and 64-bit or not. I don't think that the int128_t type is widely available. Maybe that's going to change with C23, but Python 3.13 targets (a subset of) C99.
With PyTime_t described as a 64-bit signed integer storing nanoseconds, the API is way clearer!
So thanks everybody, especially @encukou for finishing my PR to expose the API.
Heads-up: Cython cannot use the new API: https://github.com/python/cpython/issues/110850#issuecomment-1958968352
Note that this new implementation does not help Cython because we expose the functions as not requiring the GIL. The new interface requires the GIL because it allows exceptions to be raised.
This leaves us with _PyTime_TimeUnchecked(), which is still not a public function.
In short, Cython would be fine with an API which requires to hold the GIL and can fail.
I'm happy with the "can fail" bit, but why require the "get current time" function to hold the GIL? As it stands with Py3.13a4, users who are not interested in propagating exceptions now have to acquire the GIL, call the function, clear exceptions, release the GIL, handle errors. Even worse, they may have to acquire the GIL, store away the exception state, call the function, restore exceptions, release the GIL, handle errors. That's a lot of work for something as simple as "give me the current time".
As far as I can tell, there are only two error cases: "overflow", and "error is in errno". Both are easy to handle on the user side. Why does the py_get_system_clock function have to raise an exception at all? Users can easily do that themselves.
I propose the following interface for PyTime_Time():
- return the time in a PyTime_t result argument.
- on overflow, return -1; let users raise OverflowError if they want to.
- on OS errors, return -2; let users call PyErr_SetFromErrno() if they want to.
That way, users can easily check for "< 0" if they are not interested in the kind of error, and handle -1 and -2 independently if they feel like handling or distinguishing the two and/or raising an exception.
If Cython cannot use the API, I see two options:
- add an "Unchecked" flavor which silently ignores errors and doesn't report them to the caller.
I don't want to add an API which reports errors with errno, since we may read time with functions which don't report errors as errno. As you wrote, there are other cases than "reading time failed", such as "overflow error". I also dislike "enum-like" error codes to report errors to the caller :-(
IMO either you care about errors and an exception is the way to go in the C API, or you don't care about the errors at all.
on overflow, return -1, let users raise OverflowError if they want to
If the user wants OverflowError, just use the API which raises exceptions, no? Or is it to avoid holding the GIL in the "fast path" and having a "slow path" which can raise exceptions?
Example in the Python main branch of code which doesn't want to bother with errors or really cannot handle errors:
vstinner@mona:~/python/main$ grep 'PyTime_.*Unchecked' $(find -name "*.c"|grep -v pytime.c)
./Modules/_testinternalcapi/test_lock.c:#include "pycore_time.h" // _PyTime_MonotonicUnchecked()
./Modules/_testinternalcapi/test_lock.c: PyTime_t start = _PyTime_MonotonicUnchecked();
./Modules/_testinternalcapi/test_lock.c: PyTime_t end = _PyTime_MonotonicUnchecked();
./Modules/_lsprof.c: return _PyTime_PerfCounterUnchecked();
./Python/import.c:#include "pycore_time.h" // _PyTime_PerfCounterUnchecked()
./Python/import.c: t1 = _PyTime_PerfCounterUnchecked();
./Python/import.c: PyTime_t cum = _PyTime_PerfCounterUnchecked() - t1;
./Python/lock.c:#include "pycore_time.h" // _PyTime_MonotonicUnchecked()
./Python/lock.c: PyTime_t now = _PyTime_MonotonicUnchecked();
./Python/lock.c: PyTime_t now = _PyTime_MonotonicUnchecked();
./Python/parking_lot.c:#include "pycore_time.h" //_PyTime_MonotonicUnchecked()
./Python/parking_lot.c: PyTime_t deadline = _PyTime_Add(_PyTime_MonotonicUnchecked(), timeout);
./Python/parking_lot.c: PyTime_t deadline = _PyTime_Add(_PyTime_TimeUnchecked(), timeout);
./Python/parking_lot.c: PyTime_t deadline = _PyTime_Add(_PyTime_TimeUnchecked(), timeout);
./Python/gc.c:#include "pycore_time.h" // _PyTime_PerfCounterUnchecked()
./Python/gc.c: t1 = _PyTime_PerfCounterUnchecked();
./Python/gc.c: double d = PyTime_AsSecondsDouble(_PyTime_PerfCounterUnchecked() - t1);
./Python/gc_free_threading.c: t1 = _PyTime_PerfCounterUnchecked();
./Python/gc_free_threading.c: double d = PyTime_AsSecondsDouble(_PyTime_PerfCounterUnchecked() - t1);
Note: the "Unchecked" flavor already exists, it's just an internal C API for now.
I agree that no error handling is probably acceptable for a function reading the current time. You wouldn't expect that to fail arbitrarily. Thus, if you search the latest CPython master for checked and unchecked function usage, both come out pretty much equal. I can live with unchecked access myself. That's what we previously had as well, after all, with the "non-officially public" _PyTime API.
What exactly are we promising if we say you don't need to hold the GIL? Can you, for example, call these functions before Python is initialized?
When ignoring errors, what is the advantage over standard C functions from <time.h>? (Perhaps better resolution? But I don't think Cython uses that...)
When ignoring errors, what is the advantage over standard C functions from <time.h>?
PyTime API works on all platforms supported by Python, it's portable. Also, it provides 3 clocks which are not trivial to access in a portable way.
PyTime API works on all platforms supported by Python, it's portable
How? Because you checked every platform that we currently support and are assuming that new ones will have equivalent OS functionality? Or because there's something specific about the implementation that makes them independent from the OS they're running on?
If we later add a platform that can't support one of these functions, what happens? Or would we just say "sorry, you don't have a high precision counter, you can't run Python"?
Since the discussion restarted, I reopen the issue.
Because you checked every platform that we currently support and are assuming that new ones will have equivalent OS functionality? If we later add a platform that can't support one of these functions, what happens? Or would we just say "sorry, you don't have a high precision counter, you can't run Python"?
Python needs 2 clocks: a system clock and a monotonic clock. I once saw that Hurd doesn't support monotonic clocks, but Python doesn't support Hurd. IMO it's a reasonable requirement to provide these two clocks.
In the worst case, a platform can make the choice of using a non-monotonic clock for time.monotonic(). Python cannot make up new clocks. It's too much work.
These APIs are not about precision. If a platform only supports a resolution of 1 second, that's fine. For example, time.perf_counter() only exposes the available clock with the best resolution. It doesn't have to be 1 nanosecond. Windows has a resolution of 100 nanoseconds for example. It can be worse on other platforms.
Or because there's something specific about the implementation that makes them independent from the OS they're running on?
The PyTime_t format makes it easier to have an API with nanosecond resolution. From my point of view, it's more convenient than having to directly use QueryPerformanceCounter() / QueryPerformanceFrequency(), mach_absolute_time() / mach_timebase_info(), clock_gettime(CLOCK_MONOTONIC), etc. Especially when you want other operations such as time+timeout, or t2-t1. For example, it's non-trivial to compute the difference of two timespec values; you have to handle a negative difference of the timespec.tv_nsec member.
Perhaps I'm misreading the reply, but your answer looks like "yes, I checked, and we just don't support the platforms that don't have this API".
If that's policy going forward, then sure, wrap them up with a convenient conversion. But I want to be sure that's our platform support policy, and we're okay with just not supporting platforms that don't support monotonic clocks.
I'll repeat my question so it doesn't get lost:
What exactly are we promising if we say you don't need to hold the GIL? Can you, for example, call these functions before Python is initialized?
If we expose such GIL-less functions without spelling out what that actually means, we're setting users up for breakage. IMO the exact constraints are important here.
In addition to providing good API, we also want to avoid breaking Cython. Hence my second question: why not use <time.h>? Here I'm asking specifically about Cython's use of the API, not about monotonic/nanosecond clocks.
On WASI (a supported platform), as far as I understand it, clocks and all the various kinds thereof are:
So:
If we later add a platform that can't support one of these functions
That's not a theoretical concern. Right now -- in the 3.13-3.15 or so timeframe -- we're either doing that or deciding that we won't do it.
IMO, CPython should not promise to expose infallible clocks to users.
If Cython needs the API (and can't use <time.h>), let's expose it as unstable (promising that we'll support it as long as it's feasible, but no longer), or bring back the private/underscored name (hinting that we don't really like what Cython is doing).
In the past, Python checked once at startup that reading system clock and monotonic clock doesn't fail. So later, other calls don't have to check for failure. Maybe we can add that again.
Python doesn't work if these clocks are missing anyway. It's just about how we provide feedback to users to help them fix their issue.
In the past, Python checked once at startup that reading system clock and monotonic clock doesn't fail. So later, other calls don't have to check for failure. Maybe we can add that again.
That sounds like a reasonable level of safety to me, and would simplify the API considerably.
I wrote PR https://github.com/python/cpython/pull/115973 for a concrete implementation of my proposed API.
Issue #15 discusses adding PyTime APIs without the GIL.
This issue added PyTime APIs with the GIL. I close the issue.
API:
- PyTime_t
- PyTime_MIN (PyTime_t type)
- PyTime_MAX (PyTime_t type)
- double PyTime_AsSecondsDouble(PyTime_t t): convert a timestamp to a number of seconds.
- PyTime_t PyTime_Monotonic(void): similar to time.monotonic_ns()
- PyTime_t PyTime_PerfCounter(void): similar to time.perf_counter_ns()
- PyTime_t PyTime_Time(void): similar to time.time_ns()
PyTime_Monotonic(), PyTime_PerfCounter() and PyTime_Time() return 0 on error (and silently ignore the error), and clamp the clock to the [PyTime_MIN; PyTime_MAX] range on integer overflow.
These functions have been used internally in Python since around Python 3.5 to avoid rounding issues (floating point <=> integer) at nanosecond resolution. PyTime_t is just a 64-bit signed integer. The "nanosecond" unit is not explicit in the API; the unit is "arbitrary" even if it's nanosecond in practice. The internal C API has functions to create PyTime_t values from seconds and from nanoseconds. I didn't add them in this initial public C API.
Cython started to use this API when the private API was removed in Python 3.13 alpha 1. Cython needs:
- the PyTime_t type
- PyTime_Time()
- PyTime_AsSecondsDouble()
The Python API reports errors as regular exceptions, whereas the C API silently ignores errors. When I designed and implemented PEP 418: Add monotonic time, performance counter, and process time functions, I was worried about errors while reading time. I added code at Python startup to read the 3 clocks (time, perf_counter, monotonic) and fail with a fatal error if any failed. Many years later, I removed the check since it never failed.
The C API is designed to be convenient to use, not to be "perfect" (reporting unlikely errors). Over 10 years, I saw a single failure in a custom sandbox which blocked syscalls to read time. It was a single user on a very specific issue, and it was an issue in the sandbox config, not in Python. IMO returning 0 in the C API is the sane behavior in this case. Bothering all users to have to check for errors just for that would be overkill.
Example of usage:
The PyTime internal C API is way more complete, but I chose to start with the bare minimum for the public C API.
Pull request: https://github.com/python/cpython/pull/112135