ralphBellofattoSTX (issue closed 4 years ago)
test program:
#include <iostream>
#include <unistd.h>
#include <pigpio.h>

using namespace std;

/**
 * Simple test to check for jitter at different usleep() intervals.
 */
int main() {
#define INTERVAL 20                        // sleep interval in microseconds
#define GPIO_TEST_PIN 4

    if (gpioInitialise() < 0) {            // pigpio must initialise before any GPIO call
        cerr << "pigpio initialisation failed" << endl;
        return 1;
    }
    gpioSetMode(GPIO_TEST_PIN, PI_OUTPUT); // set GPIO4 as output

    cout << "testing " << INTERVAL << "us usleep" << endl;

    unsigned ioValue = 0;
    while (true) {
        usleep(INTERVAL);
        ioValue = !ioValue;
        gpioWrite(GPIO_TEST_PIN, ioValue); // toggle GPIO4
    }
}
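For reference, building and running on the Pi looks something like the following (the file name is illustrative; pigpio needs root to access the GPIO hardware):

    g++ -Wall -pthread -o jitter_test jitter_test.cpp -lpigpio -lrt
    sudo ./jitter_test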
We set INTERVAL to 1, 10, 20, 50, etc., and used a scope to observe the output pin.
[scope screenshots for each INTERVAL setting omitted]
INTERVAL = 1 us
This suggests that the minimal usable interval is about 100 us.
INTERVAL = 100 us
INTERVAL = 200 us
INTERVAL = 500 us
INTERVAL = 1000 us (1 ms)
INTERVAL = 5000 us (5 ms)
INTERVAL = 10000 us (10 ms): probably our minimum period anyway, given the time needed to read the A/D converters
INTERVAL = 12000 us (12 ms)
INTERVAL = 14000 us (14 ms)
INTERVAL = 20000 us (20 ms)
The lesson here is that the actual time usleep() sleeps seems to be at least the requested value, but it may overshoot. Even at 10 ms it may overshoot by as much as 4 ms, and at 20 ms it may overshoot by 2-3 ms.
This is livable, but the design must NOT count on the interval being consistent.
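To quantify the overshoot without a scope, a loop like the following can timestamp each usleep() call with CLOCK_MONOTONIC and print the difference. This is a minimal sketch assuming a POSIX system; the 10 ms interval and iteration count are arbitrary.

    #include <cstdio>
    #include <ctime>
    #include <unistd.h>

    // Microseconds elapsed between two CLOCK_MONOTONIC timestamps.
    static long elapsed_us(const timespec &a, const timespec &b) {
        return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_nsec - a.tv_nsec) / 1000L;
    }

    int main() {
        const useconds_t interval = 10000;         // requested sleep: 10 ms
        for (int i = 0; i < 20; ++i) {
            timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);
            usleep(interval);
            clock_gettime(CLOCK_MONOTONIC, &end);
            long actual = elapsed_us(start, end);
            printf("requested %u us, actual %ld us, overshoot %ld us\n",
                   (unsigned)interval, actual, actual - (long)interval);
        }
        return 0;
    }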
@ajo27 this might be of interest to you...
It shows us what we can rely on with usleep and what we can't.
Great work. Yes, this is how it seems to be on the Pi under Linux in userspace, unfortunately. From the reading I've done, it seems we can request delays/sleeps at the intended microsecond values and then check the actual elapsed time on return to decide if and how to correct.
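One way to apply that correction, sketched below under the assumption that a fixed toggle period is what the design needs: sleep to an absolute CLOCK_MONOTONIC deadline with clock_nanosleep(TIMER_ABSTIME), so a late wakeup in one cycle shortens the next wait instead of accumulating. The pin and period are illustrative.

    #include <ctime>
    #include <pigpio.h>

    int main() {
        const unsigned pin = 4;                    // illustrative test pin
        const long period_ns = 10 * 1000 * 1000;   // 10 ms toggle period

        if (gpioInitialise() < 0) return 1;
        gpioSetMode(pin, PI_OUTPUT);

        timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        unsigned value = 0;
        while (true) {
            // Advance the deadline by one period, carrying nanoseconds into seconds.
            next.tv_nsec += period_ns;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            // Sleep until the absolute deadline; oversleeping one cycle
            // automatically shortens the wait for the next one.
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            value = !value;
            gpioWrite(pin, value);
        }
    }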
One potential avenue, since we're using pigpio: gpioDelay() uses clock_nanosleep under the hood and returns the actual delay time, but if the delays are short enough (< 100 us) it busy-waits instead.
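For example (a sketch, not tested; the correction policy here is an assumption, not pigpio's), the return value can be used to trim the next request:

    #include <cstdint>
    #include <pigpio.h>

    int main() {
        const unsigned pin = 4;                     // illustrative test pin
        const uint32_t period_us = 10000;           // nominal 10 ms period

        if (gpioInitialise() < 0) return 1;
        gpioSetMode(pin, PI_OUTPUT);

        uint32_t request = period_us;
        unsigned value = 0;
        while (true) {
            uint32_t actual = gpioDelay(request);   // returns the actual delay in us
            value = !value;
            gpioWrite(pin, value);
            // Subtract this cycle's overshoot from the next request,
            // clamping so we always ask for at least 1 us.
            uint32_t overshoot = (actual > request) ? actual - request : 0;
            request = (overshoot < period_us) ? period_us - overshoot : 1;
        }
    }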
Collect notes on jitter measurements.