If we look at any clock plot that starts off sufficiently close to the convergence frequency, we can observe a certain "blockiness" or "discreteness" in the plots:
The observed step size corresponds to roughly 0.5 ppm, while the clock's adjustment step size is 0.01 ppm (10 ppb). I don't think this is fundamental to our measurement method (see the sanity check after the list below):
- We sample every 5 ms, or every 5 × 125_000_000 = 625_000_000 (ideal) clock cycles.
- 0.5 ppm therefore corresponds to 312.5 clock cycles per sample window.
- 10 ppb therefore corresponds to 6.25 clock cycles per sample window.
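As a quick sanity check of these numbers, here is a minimal standalone sketch (not code from the repo; `cyclesPerSample` and `ppmToCycles` are hypothetical names) that converts a frequency offset in ppm to the expected cycle-count difference accumulated over one sample window:

```haskell
{-# LANGUAGE NumericUnderscores #-}

-- Ideal number of clock cycles per sample window, as used in the list above.
cyclesPerSample :: Double
cyclesPerSample = 5 * 125_000_000  -- 625_000_000

-- Expected cycle-count difference per sample window for a given offset in ppm.
ppmToCycles :: Double -> Double
ppmToCycles ppm = ppm * 1e-6 * cyclesPerSample

main :: IO ()
main = do
  print (ppmToCycles 0.5)   -- 312.5 cycles: the observed ~0.5 ppm step
  print (ppmToCycles 0.01)  -- 6.25 cycles: the 10 ppb clock step
```

So even the smallest clock step of 10 ppb should show up as a difference of several cycles per window, well above a single-cycle measurement resolution.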
While I can see that some synchronization logic might be messing with the precision around ~10 ppb, I don't think 0.5 ppm should be a cut-off point. Perhaps a good first step would be to remove all the type juggling in https://github.com/bittide/bittide-hardware/issues/609 to get a good view of where all the conversions happen.