The main consideration here is whether xts would be the class of choice for ultra-low-latency analysis in finance or other disciplines. If that is the case, then, as you said earlier, support for more precise time indexing would be the solution. Since I don't know the details of that solution, I will leave it to you to weigh the implementation costs and potential backwards-compatibility issues.
I would like nanosecond resolution in xts, but that will take a bit of work. This problem could theoretically exist even with higher-resolution index timestamps, but it would be less likely.
The current solution in this branch works around the issue by ensuring that newindex[i] is always greater than both index[i-1] and newindex[i-1]. The downside to this solution is that non-duplicate index values may change. I plan to add a warning whenever that happens before merging this branch and closing the issue. I can't think of a better general solution, but I'm open to suggestions.
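For concreteness, here is a rough R sketch of that check (my own illustration, with a hypothetical fix_unique name; the actual implementation lives in the package's C code):

```r
# Hypothetical sketch of the branch's check: each adjusted value must
# exceed both the previous original value and the previous adjusted value.
fix_unique <- function(index, eps = 1e-6) {
  newindex <- index
  for (i in seq_along(index)[-1]) {
    lower <- max(index[i - 1], newindex[i - 1])
    if (newindex[i] <= lower) {
      newindex[i] <- lower + eps
    }
  }
  newindex
}

fix_unique(c(1, 1, 1, 1.0000015))
# The 4th input is not a duplicate, but it is overtaken by the adjusted
# 3rd value (1 + 2e-6), so it gets bumped too -- hence the warning.
```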
This sounds correct to me.
It has always been possible that make.index.unique could push things past the next observed index, and that problem only got worse as reported data moved beyond millisecond precision.
Until xts supports nano-scale indexes, checking and warning the user that they may be doing something unintended with make.index.unique seems the best solution.
@joshuaulrich:
> I would like nanosecond resolution in xts, but that will take a bit of work.
I showed a possible solution to this problem (https://github.com/joshuaulrich/xts/issues/190#issuecomment-306261054) and, yes, it is a lot of work, but in my opinion it is worth the effort. The key is not to break the xts R API for external packages.
If you decide on the proposed solution, I will of course be committed to helping as much as possible (especially with the C code).
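As background on why this is structural work: the POSIXct index is stored as a double, and a double's 53-bit significand cannot exactly represent present-day epoch times at nanosecond resolution. A quick check (my own illustration, not part of the linked proposal):

```r
# Integers beyond 2^53 (~9.0e15) are not exactly representable in a
# double. Nanoseconds since the epoch are already ~1.5e18, so a
# double-based index cannot distinguish adjacent nanosecond timestamps.
ns <- 1.5e18     # roughly "now" in nanoseconds since 1970-01-01
(ns + 1) == ns   # TRUE: the 1-nanosecond difference is lost to rounding
```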
make.index.unique() should add a small eps to duplicate index values. This often works, but it can fail when there are consecutive observations with the same timestamp and the gap to the first observation after the block of duplicates is smaller than the cumulative eps. It would be nice if this could be fixed, but floating-point rounding error is likely to be a problem in the cases where this can occur. A warning should be raised, at minimum.
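A minimal reproduction of this failure mode, assuming the default eps of 1e-6 seconds (illustrative data; the exact output depends on the xts version):

```r
library(xts)

# Three observations share a timestamp; the next arrives 1.5 microseconds
# later -- less than the cumulative eps (2 * 1e-6) added to the duplicates.
idx <- as.POSIXct("2018-01-01 09:30:00") + c(0, 0, 0, 1.5e-6)
x <- xts(1:4, order.by = idx)

# With eps = 1e-6, the third adjusted timestamp (base + 2e-6) overtakes
# the fourth original one (base + 1.5e-6). Depending on the version,
# the result is either out of order or the fourth timestamp is shifted
# forward as well (with a warning, once this branch is merged).
y <- make.index.unique(x)
.index(y) - .index(y)[1]   # inspect the adjusted offsets in seconds
```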