Hello,
I am a contributor to python-libfaketime (though not a maintainer). It is a project that wraps libfaketime in Python so it can be used in unit test suites.
Asserting on nanosecond precision is not uncommon in unit tests, so python-libfaketime patches libfaketime to add support for it:
diff --git a/src/libfaketime.c b/src/libfaketime.c
index e632395..09d9019 100644
--- a/src/libfaketime.c
+++ b/src/libfaketime.c
@@ -2384,10 +2384,16 @@ static void parse_ft_string(const char *user_faked_time)
user_faked_time_tm.tm_isdst = -1;
nstime_str = strptime(user_faked_time, user_faked_time_fmt, &user_faked_time_tm);
+ /* the actual format has a " %f" appended. Parse out the microseconds. */
+ char nanosecond_str[7];
+ memcpy(&nanosecond_str, user_faked_time + 20, 6);
+ nanosecond_str[6] = '\0';
+ int nanoseconds = atoi(nanosecond_str) * 1000;
+
if (NULL != nstime_str)
{
user_faked_time_timespec.tv_sec = mktime(&user_faked_time_tm);
- user_faked_time_timespec.tv_nsec = 0;
+ user_faked_time_timespec.tv_nsec = nanoseconds;
if (nstime_str[0] == '.')
{
--
2.45.0
python-libfaketime then passes the time to be mocked to libfaketime in the expected format.
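To make the approach concrete, here is a minimal standalone sketch of the same parsing logic, assuming a "YYYY-MM-DD HH:MM:SS.ffffff" time string as implied by the fixed offset 20 in the patch (the example string and main() are mine for illustration, not code from either project):

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    /* "YYYY-MM-DD HH:MM:SS" is 19 characters, the dot sits at index 19,
     * so the six microsecond digits start at offset 20. */
    const char *user_faked_time = "2024-01-02 03:04:05.123456";

    struct tm tm = { 0 };
    tm.tm_isdst = -1;
    if (strptime(user_faked_time, "%Y-%m-%d %H:%M:%S", &tm) == NULL)
    {
        fprintf(stderr, "failed to parse date/time part\n");
        return 1;
    }

    /* Copy the six fractional digits and convert microseconds to
     * nanoseconds, as the downstream patch does. */
    char microsecond_str[7];
    memcpy(microsecond_str, user_faked_time + 20, 6);
    microsecond_str[6] = '\0';
    long nanoseconds = atol(microsecond_str) * 1000L;

    struct timespec ts;
    ts.tv_sec = mktime(&tm);
    ts.tv_nsec = nanoseconds;

    printf("tv_sec=%ld tv_nsec=%ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
    return 0;
}

A proper upstream implementation could locate the fractional part via the pointer strptime() returns instead of hard-coding offset 20, and validate the digit count before converting.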
I think it would be better if nanosecond resolution were implemented properly here rather than carried as a downstream patch.
What do you think?