Closed duribee closed 2 years ago
When the base clock that generates the HW timestamps is not controllable in either phase or frequency, the adjustment can happen in software. In the implementation, 'gptpclock_set_thisClock' is called at domain initialization; at that time, if the clock device is read-only, it selects PTPCLOCK_SLAVE_SUB mode for the clock. At the top of gptpclock.c there is a comment block that explains PTPCLOCK_SLAVE_SUB. The comment is not very easy to follow, but hopefully it gives you some ideas. The current open source version assumes the neighbor clock rate ratio is 1.0, so if the free-running clock has a large frequency difference, the level of inaccuracy will increase.
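To illustrate the idea behind PTPCLOCK_SLAVE_SUB, here is a minimal sketch (the struct and function names are hypothetical, not the stack's actual API): since the HW clock is read-only, the software keeps a base point and a rate against the raw free-running time, and every raw timestamp is converted to synchronized time instead of touching the device.

```c
#include <stdint.h>

/* Hypothetical sketch of a software-adjusted ("SLAVE_SUB"-style) clock:
 * the HW clock is never written; instead a (base, rate) pair maps raw
 * free-running time to synchronized time. */
typedef struct {
    int64_t base_raw_ns;    /* raw HW time captured at the last sync update */
    int64_t base_synced_ns; /* synchronized time at that same instant */
    double  rate;           /* d(synced)/d(raw); ~1.0 when clocks agree */
} sw_clock_t;

/* Convert a raw free-running HW timestamp to synchronized time. */
static int64_t sw_clock_convert(const sw_clock_t *c, int64_t raw_ns)
{
    return c->base_synced_ns
         + (int64_t)((double)(raw_ns - c->base_raw_ns) * c->rate);
}
```

With rate fixed at 1.0 (as in the current open source version), only the phase base is corrected, which is why a large frequency offset on the free-running clock degrades accuracy between updates.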
In my case I have two clocks: one free-running clock that is used only for timestamping event messages and is never adjusted, and another that I would like to synchronize; the latter is controllable in both phase and frequency and appears as a PHC (/dev/ptp0). By default, both clocks run at the same frequency and phase until the controlled clock is adjusted. To be honest, I do not completely understand the clock description in gptpclock.c. Currently I am running the stack with CONF_SINGLE_CLOCK_MODE set to 1 because I have only one PTP clock in my system; later I will try the implementation as a time-aware bridge with several ports and just one /dev/ptp0.
Do you have any intention of adding the neighborRateRatio calculation to the stack?
I was also wondering about the synchronization precision you have obtained with the stack. I did some quick tests, but using the same controlled clock both for event message timestamping and for phase and frequency adjustment (which, according to IEEE 802.1AS-2020, 10.1.2.1 "LocalClock entity", should not be the case), and the results were not very good: the peak-to-peak offset against a master was somewhere around 2 microseconds.
In that case, the timestamp clock must be read-only, and a virtual clock (clock ID=0) is maintained inside gptpd in PTPCLOCK_SLAVE_SUB mode. To adjust a controllable clock to this virtual clock, a simple way is to write a program that synchronizes the clock by reading gptpmasterclock_getts64. Patching gptpclock.c to update the controllable clock whenever clockID=0 is updated is another way, and it should yield a more accurate adjustment.
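A rough sketch of such a helper program, under stated assumptions: the servo gain, clamp value, and the read_phc_ns/phc_clkid names are illustrative, and the loop comments assume the library's gptpmasterclock_* shared-memory reader API mentioned above. Only the frequency-correction arithmetic is shown as compilable code.

```c
#include <stdint.h>

/* Simple proportional servo (illustrative gain): 1 ppb of frequency
 * correction per ns of measured offset, clamped to +/-100 ppm so the
 * value stays within a range clock_adjtime() typically accepts. */
static long servo_ppb(int64_t offset_ns)
{
    int64_t ppb = offset_ns;          /* proportional term */
    if (ppb >  100000) ppb =  100000; /* clamp to +100 ppm */
    if (ppb < -100000) ppb = -100000; /* clamp to -100 ppm */
    return (long)ppb;
}

/* Main loop (not compiled here; needs libgptpmasterclock and a PHC):
 *
 *   gptpmasterclock_init(NULL);                  // attach to gptpd shm
 *   for (;;) {
 *       int64_t gm  = gptpmasterclock_getts64(); // virtual clock ID=0
 *       int64_t phc = read_phc_ns(phc_clkid);    // hypothetical PHC read
 *       struct timex tx = {0};
 *       tx.modes = ADJ_FREQUENCY;
 *       tx.freq  = servo_ppb(gm - phc) * 65536 / 1000; // ppb -> scaled ppm
 *       clock_adjtime(phc_clkid, &tx);
 *       sleep(1);
 *   }
 */
```

Note that clock_adjtime() takes the frequency offset in "scaled ppm" (ppm shifted left by 16 bits), hence the 65536/1000 conversion from ppb.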
We already have the neighborRateRatio calculation, but it has not yet been merged into the open source version. We will do that sometime in the future.
The accuracy level depends on the hardware. Normally it comes to around the 1 usec range. Optimization can improve it, but that is not easy. HW with PHY-level timestamping improves it a lot.
Hello,
I'm trying to run the exelforce-gptp stack as a time-aware slave with hardware timestamps based on a free-running clock, as defined in IEEE 802.1AS. Is this feasible, or do I need to base the timestamps on the controlled clock? Also, it looks like the slave clock adjustment is done in the computeGmRateRatio() function in clock_master_sync_receive_sm.c. Shouldn't this be located in the updateSlaveTime() function in clock_slave_sync_sm.c?
Another question regarding the implementation concerns the neighborRateRatio calculation. According to the standard, this value should be calculated in computePdelayRateRatio() in md_pdelay_req_sm.c, but in exelforce-gptp it always returns 1.0. Is this missing, or is it done somewhere else?
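For reference, the computation the standard describes for computePdelayRateRatio() is a ratio of elapsed times over a window of Pdelay exchanges: the responder's elapsed time (successive Pdelay_Resp egress timestamps, t3) divided by the requester's elapsed time (the corresponding Pdelay_Resp ingress timestamps, t4). A minimal sketch (the function name and guard behavior are illustrative, not the stack's code):

```c
#include <stdint.h>

/* Illustrative neighborRateRatio over one measurement window:
 *   t3 = Pdelay_Resp egress timestamps on the neighbor's clock
 *   t4 = Pdelay_Resp ingress timestamps on the local clock
 * The ratio is neighbor elapsed time / local elapsed time. */
static double compute_neighbor_rate_ratio(int64_t t3_old_ns, int64_t t4_old_ns,
                                          int64_t t3_new_ns, int64_t t4_new_ns)
{
    int64_t local_elapsed = t4_new_ns - t4_old_ns;
    if (local_elapsed == 0)
        return 1.0; /* guard: no valid window yet, fall back to 1.0 */
    return (double)(t3_new_ns - t3_old_ns) / (double)local_elapsed;
}
```

Averaging over a window of many exchanges (rather than a single pair) reduces the impact of timestamp granularity on the computed ratio.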
Thanks a lot