Thanks for the suggestions. Things are a little busy today with the many bug reports, but I will take a closer look when they've calmed down. I think you're right that the time handling could be improved.
Labels changed: added package-change.
Owner changed to r...@golang.org.
Status changed to Thinking.
You might also consider the work djb has done, TAI64 and brethren. http://cr.yp.to/libtai/tai64.html http://cr.yp.to/proto/utctai.html
Comment 3 by gchk@david.stafford-ilie.us:
If you're going to represent time, my suggestion is 16 bits for the year (ranging from -32768 to +32767) and 48 bits of time since the beginning of the year, measured in units of 1E-7 seconds (0.0000001 s; a tenth of a microsecond; 100 nanoseconds), 64 bits in total. It's pretty easy to calculate with, and it handles leap seconds gracefully: you still need to know which years get them, how many, and when, but even if you ignore the issue your times are never off by more than 2 seconds, since the clock resets at the start of each year. 16 bits for the year covers a reasonable range of historical and future dates, and by the time we're looking at the Y32K issue, 128 bits should be commonly available. In contrast, 64 bits of nanoseconds is useless for historical dates (it reaches back only to 1678), and its limit for future dates (2262) is uncomfortably close as well.

I'd say that 100 nanoseconds is plenty of granularity for a time type. If a higher-precision timing type is needed, use 64 bits of picoseconds. That gives a range of over 200 days at picosecond resolution, and it converts easily to the above time type by dividing by 100,000. This would be a separate type used for sleeps and timeouts.
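Not an API from the thread, just a minimal Go sketch of the packing described above (the type and function names are made up). One note, which is plain arithmetic: 2^48 ticks of 100 ns cover only about 326 days, so a full year would actually need one more bit or a coarser unit.

```go
package main

import "fmt"

// yearTime is a hypothetical packed representation following the
// suggestion above: a signed 16-bit year in the top bits and 100 ns
// ticks since the start of that year in the low 48 bits.
// (Caveat: 2^48 ticks of 100 ns span roughly 326 days, slightly less
// than a full year.)
type yearTime uint64

const tickMask = uint64(1)<<48 - 1

func makeYearTime(year int16, ticks uint64) yearTime {
	return yearTime(uint64(uint16(year))<<48 | ticks&tickMask)
}

func (t yearTime) year() int16   { return int16(uint16(uint64(t) >> 48)) }
func (t yearTime) ticks() uint64 { return uint64(t) & tickMask }

func main() {
	t := makeYearTime(-44, 12345678) // 44 BC, about 1.23 s into the year
	fmt.Println(t.year(), t.ticks()) // -44 12345678
}
```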
Comment 4 by gchk@david.stafford-ilie.us:
One further note: starting the year on March 1st adds a trivial amount of complexity to a Gregorian date formatting routine, and greatly simplifies leap year handling in that formatter. Leap years just change the wrap value for the 100 nanosecond time part of the DateTime type.
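A minimal Go sketch of the March-1st trick (the function name is made up, it is not part of any proposal here): with March as month zero, month lengths follow a regular 153-days-per-5-months pattern and February, the only month whose length varies, falls at the end, so leap years only change the total day count of the shifted year, not the formatter's month/day logic.

```go
package main

import "fmt"

// monthDayFromMarch converts a zero-based day offset from March 1st
// into a calendar month (1-12) and day (1-31). Because February is the
// last month of a March-based year, the same formula works for leap and
// non-leap years; only the valid range of doy changes (0..364 or 0..365).
func monthDayFromMarch(doy int) (month, day int) {
	m := (5*doy + 2) / 153      // zero-based month counted from March
	day = doy - (153*m+2)/5 + 1 // day within that month
	if m < 10 {
		month = m + 3 // March through December
	} else {
		month = m - 9 // January or February of the next calendar year
	}
	return month, day
}

func main() {
	fmt.Println(monthDayFromMarch(0))   // 3 1   (March 1st)
	fmt.Println(monthDayFromMarch(337)) // 2 1   (February 1st)
	fmt.Println(monthDayFromMarch(365)) // 2 29  (only valid in a leap year)
}
```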
64 bits of TAI at a granularity of 0.1µs spans more than 50 thousand years, and 100 nanoseconds is fine enough for any timeout. TAI has no problems with leap seconds, and computing UTC from TAI is manageable for any processor that can handle 64-bit integers. Representing the year in the upper 16 bits and using 48 bits for 0.1µs within the year causes subtle bugs around midnight on New Year's Eve, when I want to party without IT calling me about a problem.
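Not from the thread, just a rough Go sketch of what the TAI-to-UTC conversion mentioned above involves: a table of TAI-UTC offsets and one subtraction. The table below is truncated to two illustrative, well-known entries (10 s in 1972, 37 s since January 2017); the names and structure are made up.

```go
package main

import (
	"fmt"
	"time"
)

// taiUTCOffsets is a (truncated, illustrative) table of TAI-UTC offsets
// in seconds, keyed by the instant at which each offset took effect.
var taiUTCOffsets = []struct {
	since  time.Time
	offset int64 // seconds, TAI minus UTC
}{
	{time.Date(1972, 1, 1, 0, 0, 0, 0, time.UTC), 10},
	// ... one entry per leap second since 1972 would go here ...
	{time.Date(2017, 1, 1, 0, 0, 0, 0, time.UTC), 37},
}

// taiToUTC converts 100 ns TAI ticks since the 1970 epoch into a UTC
// time.Time by subtracting the applicable TAI-UTC offset.
func taiToUTC(taiTicks int64) time.Time {
	t := time.Unix(0, taiTicks*100).UTC() // still on the TAI scale here
	off := taiUTCOffsets[0].offset
	for _, e := range taiUTCOffsets {
		// Note: comparing on the TAI scale is off by the offset itself
		// right around a leap second; a real implementation has to be
		// more careful at the table boundaries.
		if !t.Before(e.since) {
			off = e.offset
		}
	}
	return t.Add(-time.Duration(off) * time.Second)
}

func main() {
	fmt.Println(taiToUTC(1500000000 * 10000000)) // an instant in 2017, 37 s behind TAI
}
```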
Comment 7 by jonas@pfenniger.name:
For those trying to decide how many bits to allocate for time: time is relative to many things, and there is no single "optimal" representation. Your computer has many clocks with different guarantees, more or less synchronized with an NTP server... or not. Time as an interval and time since an epoch are also very different things: for intervals you probably need more precision, while absolute times are more often used for user representation, where seconds or minutes are enough. The birth of Jesus is not given to 0.1µs precision :-p

To give an example, a common mistake is to use gettimeofday() to measure a time interval. Something like "gtod = gettimeofday(); while (gettimeofday() < gtod + 10000) { sleep(1); }" can hang for a long time if the user sets the system clock backwards, since this clock source provides no monotonicity guarantee.

So, as the start of a proposal:
- time.Sleep() should accept a time interval with high precision (millis or so) and use a monotonic clock, if available, to measure that interval.
- calendar: a new package that does calendar calculations on a DateTime interface for user representation, so that application-specific time can also be handled by it.
- os.Time() should return a DateTime, possibly with a precision attribute.

This is only the start of a proposal, but I think it's better than having a single unique representation.
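A small Go illustration of the pitfall described above (not from the thread; in today's package time.Now already carries a monotonic clock reading and time.Since uses it, so the monotonic measurement is immune to clock steps while arithmetic on wall-clock timestamps is not):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock arithmetic: the moral equivalent of the gettimeofday()
	// loop above. If the system clock is stepped backwards in between,
	// this "elapsed" value can go negative or become huge.
	startWall := time.Now().UnixNano()

	// Monotonic measurement: time.Now() also records a monotonic clock
	// reading, and time.Since uses it, so the result is unaffected by
	// changes to the system clock.
	startMono := time.Now()

	time.Sleep(100 * time.Millisecond)

	fmt.Println("wall-clock elapsed:", time.Duration(time.Now().UnixNano()-startWall))
	fmt.Println("monotonic elapsed: ", time.Since(startMono))
}
```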
I added issue #512, which was later merged into this one. It raises another problem that needs to be addressed: in what format should we hold nanoseconds? time.Nanoseconds() returns an int64 while os.Dir.Mtime is a uint64. That is another example of time data being handled inconsistently, but it is also a different problem from this issue report.
There are two formats for times in Go: int64 nanoseconds since 1970, good for comparing and such, and time.Time, which breaks it down. The two are sufficiently different that I don't think it's important to merge them. Obviously more complex interfaces are possible but they're not clearly necessary. We've recently fixed the issue #512 complaint about uint64 vs int64.
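For reference, both forms described above still exist in today's time package and convert freely between each other; this is current API, not something proposed in the thread:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()

	// Flat form: int64 nanoseconds since the Unix epoch, convenient for
	// comparison and arithmetic.
	ns := t.UnixNano()

	// Broken-down form: time.Time, which knows about calendar fields.
	y, mo, d := t.Date()

	// Converting the flat form back into a time.Time.
	back := time.Unix(0, ns)

	fmt.Println(ns, y, mo, d, back.Equal(t))
}
```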
Status changed to WorkingAsIntended.
CL https://golang.org/cl/35073 mentions this issue.