Closed: mapellil closed this issue 2 years ago
For a few reasons:

1) There is no way for the higher-level API code to really know the low-level timer interrupt frequency, since this is configured on a port-by-port basis and can be modified by the developer. Sure, you can use #defines, but even then it is misleading to allow, say, a 1 ms interval when the timer interrupt is configured for 5 ms or 10 ms. The only thing the API can guarantee is the requested number of ticks, i.e. timer interrupts, that will occur.

2) If I want an absolute time value, I can define it in the application headers based on how I have configured the timer interrupt. If the timer interrupt is 10 ms, I can #define ONE_SECOND 100, or FIVE_SECONDS 500, etc. If I change my hardware tick rate later, it is easy to change these #defines to match (see the sketch below).

3) Other OSes that allow settings like 1 ms are sort of lying. Even if you had your timer interrupt configured for a 1 ms interval (unlikely), a requested 1 ms delay would actually expire somewhere between 0 ms and 1 ms after the call. That's a huge error margin. If you truly need a timer with that accuracy, the suggestion would be to configure your own hardware timer (assuming one is available).
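A minimal sketch of the #define approach from point 2, assuming a 10 ms timer interrupt (100 ticks per second); MS_PER_TICK, ONE_SECOND, FIVE_SECONDS, and example_thread_entry are illustrative names, not part of the ThreadX API:

```c
#include "tx_api.h"

/* Assumed: the port's timer interrupt is configured for a 10 ms period. */
#define MS_PER_TICK    10
#define ONE_SECOND     (1000 / MS_PER_TICK)   /* 100 ticks */
#define FIVE_SECONDS   (5000 / MS_PER_TICK)   /* 500 ticks */

void example_thread_entry(ULONG input)
{
    (void)input;

    for (;;)
    {
        /* The API guarantees the tick count; the wall-clock duration
           depends on how the port configured the timer interrupt. */
        tx_thread_sleep(ONE_SECOND);
    }
}
```

If the hardware tick rate changes later, only MS_PER_TICK needs to be updated.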
Thank you for the excellent explanation, @jdeere5220. I will close this ticket now. Feel free to ask any more questions.
Hello, why not use an absolute unit of measure for timers? E.g. milliseconds (absolute) instead of ticks (relative) in APIs like tx_thread_sleep(ULONG timer_ticks); or tx_timer_change(TX_TIMER *timer_ptr, ULONG initial_ticks, ULONG reschedule_ticks);? Thanks
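For what it's worth, a millisecond-style call like the one asked about here can be layered on top of the tick-based API by the application itself. A minimal sketch, again assuming a 10 ms tick period; sleep_ms and MS_PER_TICK are hypothetical names, not ThreadX services:

```c
#include "tx_api.h"

#define MS_PER_TICK  10   /* assumed: 10 ms timer interrupt period */

/* Hypothetical wrapper: convert a millisecond request into ticks
   (rounding up) and delegate to the tick-based ThreadX service. */
static UINT sleep_ms(ULONG milliseconds)
{
    ULONG ticks = (milliseconds + MS_PER_TICK - 1UL) / MS_PER_TICK;
    return tx_thread_sleep(ticks);
}
```

As point 3 above notes, such a wrapper only rounds to the tick granularity; it cannot add precision that the underlying timer interrupt does not have.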