How Precision Time Protocol is being deployed at Meta

Implementing Precision Time Protocol (PTP) at Meta allows us to synchronize the systems that drive our services down to nanosecond precision. PTP's predecessor, Network Time Protocol (NTP), provided us with millisecond precision, but as we scale to more advanced systems on our way to building the next computing platform, the metaverse and AI, we need to make sure that our servers are keeping time as accurately and precisely as possible. With PTP in place, we'll be able to enhance Meta's technologies and programs — from communications and productivity to entertainment, privacy, and security — for everyone, across time zones and around the world.

The journey to PTP has been years long, as we've had to rethink how both the timekeeping hardware and software operate within our servers and data centers.

We're sharing a deep technical dive into our PTP migration and the innovations that have made it possible.

The case for PTP

Before we dive into the PTP architecture, let's explore a simple use case for extremely accurate timing, for the sake of illustration.

Imagine a situation in which a client writes data and immediately tries to read it. In large distributed systems, chances are high that the write and the read will land on different back-end nodes.

If the read is hitting a remote replica that doesn't yet have the latest update, there is a chance the user will not see their own write:

Schematic illustration of a read returning stale information.

This is annoying at the very least, but more important is that it violates the linearizability guarantee that allows for interaction with a distributed system in the same way as with a single server.

The typical way to solve this is to issue several reads to different replicas and wait for a quorum decision. This not only consumes extra resources but also significantly delays the read because of the long network round-trip delay.

Adding precise and reliable timestamps on a back end and replicas allows us to simply wait until the replica catches up with the read timestamp:

Schematic illustration of commit-wait ensuring the consistency guarantee (linearizability).

This not only speeds up the read but also saves tons of compute power.

An important condition for this design to work is that all clocks are in sync, or that the offset between a clock and the source of time is known. The offset, however, changes because of constant correction, drift, or simple temperature variations. For that purpose, we use the notion of a Window of Uncertainty (WOU), where we can say with a high probability where the offset is. In this particular example, the read should be blocked until the read timestamp plus the WOU.
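For illustration, a commit-wait read can be sketched in a few lines of Go. This is a toy sketch, not our production code, and the WOU-aware clock API returning an earliest/latest pair is hypothetical:

package sketch

import "time"

// Window is a hypothetical result of a WOU-aware clock API: the true time lies
// between Earliest and Latest with a very high probability.
type Window struct {
	Earliest time.Time
	Latest   time.Time
}

// commitWaitRead blocks until even the earliest possible "true" time is past
// the write timestamp, so the subsequent read cannot return stale data.
func commitWaitRead(writeTS time.Time, now func() Window, read func() ([]byte, error)) ([]byte, error) {
	for {
		w := now()
		if w.Earliest.After(writeTS) {
			return read()
		}
		// Wait out (at most) the remaining uncertainty before trying again.
		time.Sleep(w.Latest.Sub(w.Earliest))
	}
}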

One might argue that we don't really need PTP for that. NTP will do just fine. Well, we thought that too. But experiments we ran comparing our state-of-the-art NTP implementation and an early version of PTP showed a roughly 100x performance difference:

Commit-wait reads issued against PTP- and NTP-backed clusters.

There are several additional use cases, including event tracing, cache invalidation, privacy violation detection improvements, latency compensation in the metaverse, and simultaneous execution in AI, many of which will greatly reduce hardware capacity requirements. This will keep us busy for years ahead.

Now that we're on the same page, let's see how we deployed PTP at Meta scale.

The PTP architecture

Regional PTP architecture.

After several reliability and operational reviews, we landed on a design that can be split into three main components: the PTP rack, the network, and the client.

Buckle up — we’re going for a deep dive.

The PTP rack

This houses the hardware and software that serve time to clients; the rack consists of several critical components, each of which has been carefully selected and tested.

The antenna

The GNSS antenna is easily one of the least appreciated components. But this is where time originates, at least on Earth.

We're striving for nanosecond accuracy. And if the GNSS receiver cannot accurately determine its position, it will not be able to calculate time. We have to strongly consider the signal-to-noise ratio (SNR). A low-quality antenna or obstruction to the open sky can result in a high 3D location standard deviation error. For time to be determined extremely accurately, GNSS receivers should enter a so-called time mode, which typically requires a <10m 3D error.

It's absolutely essential to ensure an open sky and install a solid stationary antenna. We also get to enjoy some beautiful views:

GNSS antenna at a Meta data center location.

While we were testing different antenna solutions, a relatively new GNSS-over-fiber technology got our attention. It's free from almost all disadvantages — it doesn't conduct electricity because it's powered by a laser over optical fiber, and the signal can travel several kilometers without amplifiers.

Inside the building, it can use pre-existing structured fiber and LC patch panels, which significantly simplifies the distribution of the signal. In addition, the signal delays for optical fiber are well defined at roughly 4.9ns per meter. The only thing left is the delay introduced by the direct RF-to-laser modulation and the optical splitters, which is around 45ns per box.
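As a hypothetical illustration of how this adds up (the run length and splitter count here are made up, not a real installation): a 60-meter fiber run through a single splitter box would contribute roughly 60 × 4.9ns + 45ns ≈ 339ns, a fixed, deterministic delay that can be compensated for.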

Huber-Suhner GNSS-over-fiber technology tested in Meta's Dublin office.

By conducting tests, we confirmed that the end-to-end antenna delay is deterministic (typically about a few hundred nanoseconds) and can easily be compensated for on the Time Appliance side.

Time Appliance

The Time Appliance is the heart of the timing infrastructure. This is where time originates, from the data center infrastructure point of view. In 2021, we published an article explaining why we developed a new Time Appliance and why existing solutions wouldn't cut it.

But this was mostly in the context of NTP. PTP, on the other hand, brings even higher requirements and tighter constraints. Most importantly, we made a commitment to reliably support up to 1 million clients per appliance without hurting accuracy and precision. To achieve this, we took a critical look at many of the conventional components of the Time Appliance and thought really hard about their reliability and diversity.

The Time Card

Time Card.

To protect our infrastructure from a critical bug or a malicious attack, we decided to start diversification from the source of time — the Time Card. Last time, we spoke a lot about the Time Card design and the advantages of an FPGA-based solution. Under the Open Compute Project (OCP), we're collaborating with vendors such as Orolia, Meinberg, Nvidia, Intel, Broadcom, and ADVA, which are all implementing their own time cards matching the OCP specification.

Oscillatord

The Time Card is a critical component that requires special configuration and monitoring. For this purpose, we worked with Orolia to develop a disciplining software, called oscillatord, for the different flavors of the Time Card. This has become the default tool for:

  • GNSS receiver configuration: setting the default config and adjusting specific parameters such as antenna delay compensation. It also permits disabling any number of GNSS constellations to simulate a holdover scenario.
  • GNSS receiver monitoring: reporting the number of satellites, GNSS quality, availability of different constellations, antenna status, leap second, etc.
  • Atomic clock configuration: Different atomic clocks require a different configuration and sequence of events. For example, it supports SA53 TAU configuration for fast disciplining, and with the mRO-50 it supports a temperature-to-frequency relation table.
  • Atomic clock monitoring: Parameters such as laser temperature and lock must be monitored thoroughly, and fast decisions have to be made when the values fall outside of the operational range.

Effectively, the data exported from oscillatord allows us to decide whether the Time Appliance should take traffic or needs to be drained.

Network card

Our ultimate goal is to make protocols such as PTP propagate over the packet network. And if the Time Card is the beating heart of the Time Appliance, the network card is its face. Every time-sensitive PTP packet gets hardware timestamped by the NIC. This means the PTP Hardware Clock (PHC) of the NIC must be accurately disciplined.

If we simply copy the clock values from the Time Card to the NIC, using phc2sys or a similar tool, the accuracy will not be nearly enough. In fact, our experiments show that we would easily lose ~1–2 microseconds while going through PCIe, CPU, NUMA, etc. The performance of synchronization over the PCIe bus will dramatically improve with the emerging Precision Time Measurement (PTM) technology, as the development of and support for various peripherals with this capability is in progress.

For our application, since we use NICs with PPS-in capabilities, we employed ts2phc, which copies the clock values at first and then aligns the clock edges based on a pulse per second (PPS) signal. This requires an additional cable between the PPS output of the Time Card and the PPS input of the NIC, as shown in the picture below.

Short cable between the PPS-out of the Time Card and the PPS-in of the NIC.

We constantly monitor the offset and make sure it never goes outside of a ±50ns window between the Time Card and the NIC:

Offset between the Time Card and the network card PHC.

We also monitor the PPS-out interface of the NIC to act as a fail-safe and make sure that we actually know what's happening with the PHC on the NIC.

ptp4u

While evaluating different preexisting PTP server implementations, we experienced scalability issues with both open source and closed proprietary solutions, including the FPGA-accelerated PTP servers we evaluated. At best, we could get around 50K clients per server. At our scale, this means we would have to deploy many racks full of these devices.

Since PTP's secret sauce is the use of hardware timestamps, the server implementation doesn't have to be a highly optimized C program or even an FPGA-accelerated appliance.

We implemented a scalable PTPv2 unicast PTP server in Go, which we named ptp4u, and open-sourced it on GitHub. With some minor optimizations, we were able to support over 1 million concurrent clients per device, which was independently verified by an IEEE 1588v2 certified device.

This was possible through the simple but elegant use of channels in Go, which allowed us to pass subscriptions around between multiple powerful workers.
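We won't reproduce ptp4u's internals here, but the general shape of the pattern (a pool of worker goroutines consuming subscriptions from a shared channel) looks roughly like this sketch; the names and types are illustrative, not the actual ptp4u code:

package sketch

import (
	"net/netip"
	"time"
)

// subscription is an illustrative stand-in for a granted unicast subscription;
// the real ptp4u types are richer than this.
type subscription struct {
	client   netip.AddrPort
	interval time.Duration // how often Sync/Announce must be sent
	expires  time.Time     // when the grant runs out
}

// worker services subscriptions taken from the shared channel.
func worker(queue chan *subscription, send func(*subscription)) {
	for sub := range queue {
		if time.Now().After(sub.expires) {
			continue // grant expired, drop the subscription
		}
		send(sub) // the actual packet gets hardware timestamped by the NIC
		// Scheduling the next send for this subscription is omitted in this sketch.
	}
}

// startWorkers fans subscriptions out to a fixed pool of goroutines.
func startWorkers(n int, queue chan *subscription, send func(*subscription)) {
	for i := 0; i < n; i++ {
		go worker(queue, send)
	}
}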

Because ptp4u runs as a process on a Linux machine, we automatically get all the benefits, like IPv6 support, firewall, etc., for free.

c4u

The ptp4u server has many configuration options, allowing it to pass dynamically changing parameters such as PTP Clock Accuracy, PTP Clock Class, and a UTC offset — currently set to 37 seconds (we're looking forward to this becoming a constant) — down to clients.

In order to continuously generate these parameters, we implemented a separate service called c4u, which constantly monitors several sources of information and compiles the active config for ptp4u:

Schematic illustration of the c4u architecture.

This gives us flexibility and reactivity if the environment changes. For example, if we lose the GNSS signal on one of the Time Appliances, we will change the ClockClass to HOLDOVER and clients will immediately migrate away from it. It also calculates ClockAccuracy from many different sources, such as ts2phc synchronization quality, atomic clock status, etc.
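A heavily simplified sketch of that kind of decision logic might look like the following; the constants and inputs are illustrative, and c4u's real logic considers many more signals:

package sketch

// PTP clockClass values used in this sketch; IEEE 1588 defines the full set,
// and c4u's real mapping takes many more inputs into account.
const (
	clockClassLocked   = 6  // locked to a GNSS reference
	clockClassHoldover = 7  // holdover, within specification
	clockClassDegraded = 52 // degraded, should not be selected if alternatives exist
)

// clockClass derives the value ptp4u should advertise from appliance telemetry.
func clockClass(gnssLocked, oscillatorLocked bool) uint8 {
	switch {
	case gnssLocked:
		return clockClassLocked
	case oscillatorLocked:
		// GNSS lost: advertise holdover so clients migrate to healthy appliances.
		return clockClassHoldover
	default:
		return clockClassDegraded
	}
}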

We calculate the UTC offset value based on the content of the tzdata package because we pass International Atomic Time (TAI) down to the clients.

Calnex Sentinel

We wanted to make sure our Time Appliances are constantly and independently assessed by a well-established certified monitoring device. Luckily, we've already made a lot of progress in the NTP space with Calnex, and we were able to apply a similar approach to PTP.

We collaborated with Calnex to take their field device and repurpose it for data center use, which involved changing the physical form factor and adding support for features such as IPv6.

Calnex Sentinel 2.0 installed in the PTP rack.

We connect the Time Appliance NIC PPS-out to the Calnex Sentinel, which allows us to monitor the PHC of the NIC with nanosecond accuracy.

We will explore monitoring in great detail in "How we monitor the PTP architecture," below.

The PTP network

PTP protocol

The PTP protocol supports the use of both unicast and multicast modes for the transmission of PTP messages. For large data center deployments, unicast is preferred over multicast because it significantly simplifies network design and software requirements.

Let's take a look at a typical PTP unicast flow:

  1. A client starts the negotiation (requesting unicast transmission). Therefore, it will send:
  • A Sync Grant Request ("Hey server, please send me N Sync and Follow-Up messages per second with the current time for the next M minutes")
  • An Announce Grant Request ("Hey server, please send me X Announce messages per second with your status for the next Y minutes")
  • A Delay Response Grant Request ("Hey server, I'm going to send you Delay Requests — please respond with Delay Response packets for the next Z minutes")
  2. The server needs to grant these requests and send grant responses.
  3. Then the server needs to start executing the subscriptions and sending PTP messages:
  • All subscriptions are independent of one another.
  • It's on the server to obey the send interval and terminate the subscription when it expires. (PTP was originally multicast only, and one can clearly see the multicast origin in this design.)
  • In a two-step configuration, when the server sends Sync messages, it has to read the TX hardware timestamp and send a Follow-Up message containing that timestamp.
  4. The client will send Delay Requests during the agreed-upon interval to determine the path delay. The server needs to read the RX hardware timestamp and return it to the client.
  5. The client needs to periodically refresh the grant, and the process repeats.

Schematically (just for illustration), it will look like this:

Schematic illustration of the two-step PTP exchange.

Transparent clocks

We initially considered leveraging boundary clocks in our design. However, boundary clocks come with several disadvantages and complications:

  • You need network equipment or some special servers to act as a boundary clock.
  • A boundary clock acts as a time server, creating greater demands for short-term stability and holdover performance.
  • Because the information has to pass through the boundary clocks from the time servers down to the clients, we would have to implement special support for this.

To avoid this additional complexity, we decided to rely solely on PTP transparent clocks.

Transparent clocks (TCs) enable clients to account for variations in network latency, ensuring a much more precise estimation of clock offset. Each data center switch in the path between client and time server reports the time each PTP packet spends transiting the switch by updating a field in the packet payload, the aptly named Correction Field (CF).

PTP clients (also called ordinary clocks, or OCs) calculate the network mean path delay and the clock offsets to the time servers (grandmaster clocks, or GMs) using four timestamps (T1, T2, T3, and T4) and two correction field values (CFa and CFb), as shown in the diagram below and in the sketch that follows the list:

Schematic illustration of the transparent clock and the correction field.
  • T1 is the hardware timestamp when the Sync packet is sent by the time server.
  • T2 is the hardware timestamp when the OC receives the Sync packet.
  • CFa is the sum of the switch delays recorded by each switch (TC) in the path from the time server to the client (for the Sync packet).
  • T3 is the hardware timestamp when the Delay Request is sent by the client.
  • T4 is the hardware timestamp when the time server receives the Delay Request.
  • CFb is the sum of the switch delays recorded by each switch in the path from the client to the time server (for the Delay Request packet).
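With those values in hand, the client-side math is straightforward. Here is a sketch of the standard two-step calculation, mirroring the textbook formulas rather than ptp4l's exact code:

package sketch

// pathDelayAndOffset implements the standard two-step PTP calculation with
// transparent clock corrections. All values are in nanoseconds.
//
//	meanPathDelay = ((T2 - T1 - CFa) + (T4 - T3 - CFb)) / 2
//	offset        = (T2 - T1 - CFa) - meanPathDelay
func pathDelayAndOffset(t1, t2, t3, t4, cfa, cfb int64) (meanPathDelay, offset int64) {
	meanPathDelay = ((t2 - t1 - cfa) + (t4 - t3 - cfb)) / 2
	offset = (t2 - t1 - cfa) - meanPathDelay
	return meanPathDelay, offset
}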

To understand the impact of just one disabled transparent clock on the way between a client and a server, we can examine the logs:

We can see the path delay explodes, sometimes even becoming negative, which shouldn't happen during normal operations. This has a dramatic impact on the offset, moving it from ±100 nanoseconds to -400 microseconds (more than a 4,000x difference). And worst of all, this offset will not even be accurate, because the mean path delay calculations are incorrect.

According to our experiments, modern switches with large buffers can delay packets for up to a couple of milliseconds, which will result in hundreds of microseconds of path delay calculation error. This will drive offset spikes and will be clearly visible on the graphs:

The bottom line is that running PTP in data centers in the absence of TCs leads to unpredictable and unaccountable asymmetry in the round-trip time. And worst of all, there will be no simple way to detect this. 500 microseconds may not sound like a lot, but when customers expect a WOU to be several microseconds, this may lead to an SLA violation.

The PTP client

Timestamps

Timestamping the incoming packets is a relatively old feature, supported by the Linux kernel for decades. For example, software (kernel) timestamps have been used by NTP daemons for years. It's important to understand that timestamps are not included in the packet payload by default and, if required, must be placed there by the user application.

Reading the RX timestamp from user space is a relatively simple operation. When a packet arrives, the network card (or the kernel) will timestamp this event and include the timestamp in the socket control message, which is easy to get along with the packet itself by calling the recvmsg syscall with the MSG_ERRQUEUE flag set.

A very rough illustration of a socket control message containing timestamps:

Socket control message header | Software Timestamp | Legacy Timestamp | Hardware Timestamp
128 bits                      | 64 bits            | 64 bits          | 64 bits

For the TX hardware timestamp it's a bit more complicated. When the sendto syscall is executed, it neither leads to an immediate packet departure nor to TX timestamp generation. In this case the user has to poll the socket until the timestamp is accurately placed there by the kernel. Often we have to wait for several milliseconds, which naturally limits the send rate.
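As a rough sketch of what that polling looks like from user space (assuming Linux on a 64-bit machine and the golang.org/x/sys/unix package; the buffer sizes, retry count, and payload offsets are illustrative assumptions, not a hardened implementation):

package sketch

import (
	"encoding/binary"
	"errors"

	"golang.org/x/sys/unix"
)

// txHardwareTimestamp polls the socket error queue after a send until the
// kernel places the TX timestamp there. On 64-bit Linux the scm_timestamping
// payload is three timespecs (software, legacy, hardware), 16 bytes each, and
// the hardware timestamp is the third one.
func txHardwareTimestamp(fd int) (sec, nsec int64, err error) {
	buf := make([]byte, 1024)
	oob := make([]byte, 1024)
	for attempt := 0; attempt < 10; attempt++ {
		_, oobn, _, _, rerr := unix.Recvmsg(fd, buf, oob, unix.MSG_ERRQUEUE)
		if rerr != nil {
			continue // nothing queued yet; a real client would poll or sleep here
		}
		msgs, perr := unix.ParseSocketControlMessage(oob[:oobn])
		if perr != nil {
			return 0, 0, perr
		}
		for _, m := range msgs {
			if m.Header.Level == unix.SOL_SOCKET && m.Header.Type == unix.SCM_TIMESTAMPING && len(m.Data) >= 48 {
				sec = int64(binary.LittleEndian.Uint64(m.Data[32:40]))
				nsec = int64(binary.LittleEndian.Uint64(m.Data[40:48]))
				return sec, nsec, nil
			}
		}
	}
	return 0, 0, errors.New("TX hardware timestamp not available")
}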

Hardware timestamps are the secret sauce that makes PTP so precise. Most modern NICs already have hardware timestamp support, where the network card driver populates the corresponding section.

It's very easy to verify the support by running the ethtool command:

$ ethtool -T eth0
Time stamping parameters for eth0:
Capabilities:
	hardware-transmit
	hardware-receive
	hardware-raw-clock
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
	off
	on
Hardware Receive Filter Modes:
	none
	all

It's still possible to use PTP with software (kernel) timestamps, but there won't be any strong guarantees on their quality, precision, and accuracy.

We evaluated this possibility as well, and even considered implementing a change in the kernel to "fake" the hardware timestamps with software timestamps where hardware timestamps are unavailable. However, on a very busy host we observed the precision of software timestamps jump to hundreds of microseconds, and we had to abandon this idea.

ptp4l

ptp4l is open source software capable of acting as both a PTP client and a PTP server. While we had to implement our own PTP server solution for performance reasons, we decided to stick with ptp4l for the client use case.

Initial tests in the lab revealed that ptp4l can provide excellent synchronization quality out of the box and align time on the PHCs in the local network down to tens of nanoseconds.

However, as we started to scale up our setup, some issues began to arise.

Edge cases

In one particular case we started to notice occasional "spikes" in the offset. After a deep dive we identified fundamental hardware limitations of one of the most popular NICs on the market:

  • The NIC only has a timestamp buffer for 128 packets.
  • The NIC is unable to distinguish between PTP packets (which need a hardware timestamp) and other packets that don't.

This ultimately led to legitimate timestamps being displaced by timestamps coming from other packets. But what made things a lot worse: the NIC driver tried to be overly clever and placed software timestamps in the hardware timestamp section of the socket control message without telling anyone.

It's a fundamental hardware limitation affecting a large portion of the fleet, and one that is impossible to fix.

We had to implement an offset outlier filter, which changed the behavior of the PI servo and made it stateful. It resulted in occasional outliers being discarded and the mean frequency being applied during the micro-holdover:

If not for this filter, ptp4l would have steered the PHC frequency really high, which would result in several seconds of oscillation and bad quality in the Window of Uncertainty we generate from it.
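The actual filter lives in our ptp4l patches, but the idea can be sketched in a few lines; the window size and threshold below are arbitrary:

package sketch

import "math"

// outlierFilter keeps a short history of offsets and rejects samples that are
// implausibly far from the recent mean. While outliers are being discarded,
// the servo keeps running at the mean frequency (a "micro-holdover") instead
// of chasing the spike.
type outlierFilter struct {
	history []float64 // recent offsets, in nanoseconds
	limitNs float64   // rejection threshold
}

func (f *outlierFilter) mean() float64 {
	if len(f.history) == 0 {
		return 0
	}
	var sum float64
	for _, v := range f.history {
		sum += v
	}
	return sum / float64(len(f.history))
}

// accept reports whether the offset sample should be fed to the PI servo.
func (f *outlierFilter) accept(offsetNs float64) bool {
	if len(f.history) > 0 && math.Abs(offsetNs-f.mean()) > f.limitNs {
		return false // discard the outlier; the servo holds its mean frequency
	}
	f.history = append(f.history, offsetNs)
	if len(f.history) > 64 { // arbitrary window size
		f.history = f.history[1:]
	}
	return true
}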

Another issue arose from the design of the BMCA (best master clock algorithm). The purpose of this algorithm is to select the best Time Appliance when there are several to choose from in the ptp4l.conf. It does so by comparing several attributes supplied by the Time Servers in Announce messages:

  1. Priority 1
  2. Clock Class
  3. Clock Accuracy
  4. Clock Variance
  5. Priority 2
  6. MAC Address

The problem manifests itself when all the aforementioned attributes are the same. The BMCA uses the Time Appliance MAC address as the tiebreaker, which means that under normal operating conditions one Time Server will attract all of the client traffic.

To combat this, we introduced so-called "sharding," with different PTP clients allocated to different sub-groups of Time Appliances from the entire pool.

Schematic illustration of sharding.

This only partially addressed the issue, with one server in each subgroup taking the entire load for that grouping. The solution was to enable clients to express a preference, and so we introduced Priority3 into the selection criteria just above the MAC address tiebreaker. This means that clients configured to use the same Time Appliances can prefer different servers.

Client 1:

[unicast_master_table]

UDPv6 time_server1 1

UDPv6 time_server2 2

UDPv6 time_server3 3

Client 2:

[unicast_master_table]

UDPv6 time_server2 1

UDPv6 time_server3 2

UDPv6 time_server1 3

This ensures we can distribute the load evenly across all Time Appliances under normal operating conditions.

Another major challenge we faced was making PTP work with multi-host NICs — multiple hosts sharing the same physical network interface and therefore a single PHC. However, ptp4l has no knowledge of this and tries to discipline the PHC as if there were no other neighbors.

Some NIC manufacturers developed a so-called "free running" mode, where ptp4l only disciplines the formula inside the kernel driver. The actual PHC is not affected and keeps running free. This mode results in slightly worse precision, but it's completely transparent to ptp4l.

Other NIC manufacturers only support a "real time clock" mode, in which the first host to grab the lock actually disciplines the PHC. The advantage here is more precise calibration and higher quality holdover, but it leads to a separate issue with ptp4l running on the other hosts using the same NIC, as their attempts to tune the PHC frequency have no impact, leading to inaccurate clock offset and frequency calculations.

PTP profile

To describe the data center configuration, we've developed and published a PTP profile, which reflects the aforementioned edge cases and many more.

Alternative PTP clients

We're evaluating the possibility of using an alternative PTP client. Our main criteria are:

  • Support for our PTP profile
  • Meeting our synchronization quality requirements
  • Open source

We're evaluating the Timebeat PTP client and, so far, it looks very promising.

Continuously incrementing counter

In the PTP protocol, it doesn't really matter what time we propagate as long as we pass a UTC offset down to the clients. In our case, it's International Atomic Time (TAI), but some people may choose UTC. We like to think of the time we provide as a continuously incrementing counter.

At this point we are not disciplining the system clock, and ptp4l is used solely to discipline the NIC's PHC.

fbclock

Synchronizing PHCs across the fleet of servers is great, but it's of no benefit unless there is a way to read and manipulate these numbers on the client.

For this purpose, we developed a simple and lightweight API called fbclock that gathers information from the PHC and ptp4l and exposes easily digestible Window of Uncertainty information:

fbclock architecture.

Through a very efficient ioctl, PTP_SYS_OFFSET_EXTENDED, fbclock gets current timestamps from the PHC and the latest data from ptp4l, and then applies a math formula to calculate the Window of Uncertainty (WOU):

$ ptpcheck fbclock
"earliest_ns":1654191885711023134,"latest_ns":1654191885711023828,"wou_ns":694

As you can see, the API doesn't return the current time (aka time.Now()). Instead, it returns a window of time that contains the actual time with a very high degree of probability. In this particular example, we know our Window of Uncertainty is 694 nanoseconds and that the time is between (TAI) Thursday June 02 2022 17:44:08:711023134 and Thursday June 02 2022 17:44:08:711023828.

This approach allows customers to wait until the interval has passed to ensure exact transaction ordering.

Error bound measurement

Measuring the precision of the time (or the Window of Uncertainty) means that alongside the delivered time value, a window (a plus/minus value) is presented that is guaranteed to include the true time to a high level of certainty.

How certain we need to be is determined by how critical it is that the time be correct, and this is driven by the specific application.

In our case, this certainty needs to be better than 99.9999% (six 9s). At this level of reliability you can expect less than 1 error in 1 million measurements.

The error rate estimation uses observation of the history of the data (a histogram) to fit a probability distribution function (PDF). From the probability distribution function one can calculate the variance (take a square root and get the standard deviation), and from there it's a simple multiplication to get to the estimation of the distribution based on its value.

Below is a histogram taken from the offset measurement from ptp4l running on the ordinary clock.

To estimate the total variance (E2E), it is necessary to know the variance of the time error accumulated from the time server all the way to the end node NIC. This includes the GNSS, the atomic clock, and the Time Card PHC to NIC PHC (ts2phc). The manufacturer provides the GNSS error variance. In the case of the UBX-F9T, it's about 12 nanoseconds. For the atomic clock, the value depends on the disciplining threshold that we set. The tighter the disciplining threshold, the smaller the offset variance but the lower the holdover performance. At the time of running this experiment, the error variance of the atomic clock was measured at 43ns (standard deviation, std). Finally, the tool ts2phc increases the variance by 30ns (std), resulting in a total variance of 52ns.

The observed results match the calculated variance obtained using the "sum of variance law."

According to the sum of variance law, all we need to do is add all of the variances. In our case, we know that the total observed E2E error (measured via the Calnex Sentinel) is about 92ns.

On the other hand, for our estimation, we have the following:

Estimated E2E Variance = [GNSS Variance + MAC Variance + ts2phc Variance] + [PTP4L Offset Variance] = [Time Server Variance] + [Ordinary Clock Variance]

Plugging in the values:

Estimated E2E Variance = (12ns)² + (43ns)² + (52ns)² + (61ns)² = 8,418ns², which corresponds to 91.7ns

These results show that by propagating the error variance down the clock tree, the E2E error variance can be estimated with good accuracy. The E2E error variance can be used to calculate the Window of Uncertainty (WOU) based on the following table.

Simply put, by multiplying the estimated E2E error variance by 4.745, we can estimate the Window of Uncertainty for the probability of six 9s.

For our given system, the probability of six 9s is about 92ns x 4.745 = 436ns.

This means that given a time reported by PTP, considering a window size of 436ns around that value guarantees that the true time is included with a confidence of over 99.9999%.
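Put as code, the estimation boils down to a root-sum-of-squares of the per-stage errors and a multiplication by the coverage factor (the numbers below are the ones quoted above; this is a sketch, not the fbclock implementation):

package sketch

import "math"

// estimateWOUNs combines per-stage standard deviations (in nanoseconds) using
// the sum of variance law and scales by a coverage factor; 4.745 corresponds
// to the six 9s target described above.
func estimateWOUNs(stageStdNs []float64, coverageFactor float64) float64 {
	var variance float64
	for _, std := range stageStdNs {
		variance += std * std
	}
	return math.Sqrt(variance) * coverageFactor
}

// Using the numbers quoted above (GNSS 12ns, atomic clock 43ns, ts2phc 52ns,
// ptp4l offset 61ns) yields roughly 91.7ns * 4.745 ≈ 435ns, in line with the
// ~436ns quoted in the text.
var exampleWOUNs = estimateWOUNs([]float64{12, 43, 52, 61}, 4.745)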

Compensation for holdover

While all of the above seems logical and great, there is a big assumption in there. The assumption is that the connection to the open time server (OTS) is available and that everything is in normal operation mode. Lots of things can go wrong, such as the OTS going down, the switch going down, Sync messages not behaving the way they're supposed to, something in between deciding to wake up the on-calls, and so on. In such a situation the error bound calculation should enter holdover mode. The same applies to the OTS when GNSS is down. In such a situation the system will increase the Window of Uncertainty based on a compound rate. The rate will be estimated based on the stability of the oscillator (scrolling variance) during normal operations. On the OTS the compound rate gets adjusted by the accurate telemetry monitoring of the system (temperature, vibration, etc.). There is a fair amount of work in terms of calibrating coefficients here and getting to the best outcome, and we are still working on these fine tunings.

During periods of network synchronization availability, the servo is constantly adjusting the frequency of the local clock on the client side (assuming the initial stepping resulted in convergence). A break in the network synchronization (from losing connection to the time server or the time server itself going down) will leave the servo with its last frequency correction value. Consequently, that value is not meant to be an estimation of the precision of the local clock but rather a temporary frequency adjustment to reduce the time error (offset) measured between the client and the time server.

Therefore, it's necessary to first account for synchronization loss periods and use the best estimation of frequency correction (usually, the scrolling average of previous correction values), and second, to account for the error bound increase by looking at the last correction value and comparing it with the scrolling average of previous correction values.
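A simplified sketch of that logic follows; the inputs are placeholders rather than our calibrated coefficients, and the real calculation involves the compound rate and telemetry described above:

package sketch

import (
	"math"
	"time"
)

// holdoverWOUNs grows the error bound during a synchronization loss: starting
// from the last known WOU, it adds drift based on an estimated frequency
// error, here taken as the gap between the last applied correction and the
// scrolling average of previous corrections (both in parts per billion).
func holdoverWOUNs(baseWOUNs, lastCorrectionPPB, scrollingAvgPPB float64, sinceSync time.Duration) float64 {
	freqErrPPB := math.Abs(lastCorrectionPPB - scrollingAvgPPB)
	// 1 PPB of frequency error accumulates roughly 1ns of time error per second.
	driftNs := freqErrPPB * sinceSync.Seconds()
	return baseWOUNs + driftNs
}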

How we monitor the PTP architecture

Monitoring is one of the most important parts of the PTP architecture. Due to the nature and impact of the service, we've spent quite a bit of time working on the tooling.

Calnex

We worked with the Calnex team to create the Sentinel HTTP API, which allows us to manage, configure, and export data from the device. At Meta, we created and open-sourced an API command line tool allowing human- and script-friendly interactions.

Using Calnex Sentinel 2.0, we are able to monitor three main metrics per time appliance — NTP, PTP, and PPS.

This allows us to notify engineers about any issue with the appliances and to precisely detect where the problem is.

For example, in this case both PTP and PPS monitoring report roughly less than 100 nanoseconds of variation over 24 hours, while NTP stays within 8 microseconds.

ptpcheck

In order to monitor our setup, we implemented and open-sourced a tool called ptpcheck. It has many different subcommands, but the most interesting are the following:

diag

A client subcommand that provides an overall status of a PTP client. It reports the time since the last Sync message was received, the clock offset to the selected time server, the mean path delay, and other helpful information:

$ ptpcheck diag
[ OK ] GM is present
[ OK ] Period since last ingress is 972.752664ms, we expect it to be within 1s
[ OK ] GM offset is 67ns, we expect it to be within 250µs
[ OK ] GM mean path delay is 3.495µs, we expect it to be within 100ms
[ OK ] Sync timeout count is 1, we expect it to be within 100
[ OK ] Announce timeout count is 0, we expect it to be within 100
[ OK ] Sync mismatch count is 0, we expect it to be within 100
[ OK ] FollowUp mismatch count is 0, we expect it to be within 100

fbclock

A client subcommand that allows querying of the fbclock API and getting a current Window of Uncertainty:

$ ptpcheck fbclock
"earliest_ns":1654191885711023134,"latest_ns":1654191885711023828,"wou_ns":694

sources

Chrony-style client monitoring that allows us to see all of the Time Servers configured in the client configuration file, their status, and the quality of time.

$ ptpcheck sources
+----------+----------------------+--------------------------+-----------+--------+----------+---------+------------+-----------+--------------+
| SELECTED |       IDENTITY       |         ADDRESS          |   STATE   | CLOCK  | VARIANCE |  P1:P2  | OFFSET(NS) | DELAY(NS) |  LAST SYNC   |
+----------+----------------------+--------------------------+-----------+--------+----------+---------+------------+-----------+--------------+
| true     | abcdef.fffe.111111-1 | time01.example.com.       | HAVE_SYDY | 6:0x22 | 0x59e0   | 128:128 |         27 |      3341 | 868.729197ms |
| false    | abcdef.fffe.222222-1 | time02.example.com.       | HAVE_ANN  | 6:0x22 | 0x59e0   | 128:128 |            |           |              |
| false    | abcdef.fffe.333333-1 | time03.example.com.       | HAVE_ANN  | 6:0x22 | 0x59e0   | 128:128 |            |           |              |
+----------+----------------------+--------------------------+-----------+--------+----------+---------+------------+-----------+--------------+

oscillatord

A server subcommand that allows reading a summary from the Time Card.

$ ptpcheck oscillatord
Oscillator:
	model: sa5x
	fine_ctrl: 328
	coarse_ctrl: 10000
	lock: true
	temperature: 45.33C
GNSS:
	fix: Time (3)
	fixOk: true
	antenna_power: ON (1)
	antenna_status: OK (2)
	leap_second_change: NO WARNING (0)
	leap_seconds: 18
	satellites_count: 28
	survey_in_position_error: 1
Clock:
	class: Lock (6)
	offset: 1

For example, we can see that the last correction on the Time Card was just 1 nanosecond.

phcdiff

This subcommand allows us to get the difference between any two PHCs:

$ ptpcheck phcdiff -a /dev/ptp0 -b /dev/ptp2
PHC offset: -15ns
Delay for PHC1: 358ns
Delay for PHC2: 2.588µs

In this particular case the difference between the Time Card and a NIC on a server is -15 nanoseconds.

Client API

It's good to trigger monitoring periodically or on demand, but we want to go even further. We want to know what the client is actually experiencing. To this end, we embedded several buckets right inside the fbclock API based on atomic counters, which increment every time the client makes a call to the API:

Schematic illustration of fbclock API monitoring.

This allows us to clearly see when a client experiences an issue — and often before the client even notices it.
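Conceptually, the counters are nothing exotic: just atomic integers bucketed by outcome, roughly like this sketch (the bucket names and threshold are illustrative, not the real fbclock metrics):

package sketch

import "sync/atomic"

// apiCounters is a rough sketch of fbclock-style client instrumentation: every
// call to the time API bumps a bucket, and the counters are scraped by the
// monitoring system.
type apiCounters struct {
	requests   atomic.Uint64 // every API call
	errors     atomic.Uint64 // the API returned an error
	wouTooWide atomic.Uint64 // the WOU exceeded an illustrative threshold
}

func (c *apiCounters) observe(wouNs uint64, err error) {
	c.requests.Add(1)
	if err != nil {
		c.errors.Add(1)
		return
	}
	if wouNs > 1_000_000 { // 1ms, an arbitrary threshold for this sketch
		c.wouTooWide.Add(1)
	}
}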

Linearizability checks

The PTP protocol (and ptp4l in particular) doesn't have a quorum selection process (unlike NTP and chrony). This means the client picks and trusts the Time Server based on the information provided via Announce messages. This is true even if the Time Server itself is wrong.

For such situations, we have implemented a last line of defense called a linearizability check.

Imagine a situation in which a client is configured to use three time servers and the client is subscribed to a faulty Time Server (e.g., Time Server 2):

Client following Time Server 2.

In this situation, the PTP client will think everything is fine, but the information it provides to the application consuming time will be incorrect, as the Window of Uncertainty will be shifted and therefore inaccurate.

To combat this, fbclock establishes communication with the remaining time servers in parallel and compares the results. If the majority of the offsets are high, this means the server our client follows is the outlier and the client is not linearizable, even if synchronization between Time Server 2 and the client is perfect.

Schematic illustration of the linearizability check.
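Conceptually, the check is a majority vote over the offsets measured against all configured time servers. A simplified sketch (the threshold is illustrative, not the actual fbclock logic):

package sketch

import "math"

// isLinearizable compares the offsets measured against every configured time
// server. If most of them are large, the server we actually follow is likely
// the outlier and the client should not be trusted for ordering decisions.
func isLinearizable(offsetsNs []float64, thresholdNs float64) bool {
	var disagreeing int
	for _, off := range offsetsNs {
		if math.Abs(off) > thresholdNs {
			disagreeing++
		}
	}
	// A majority of servers disagreeing with us means we are the outlier.
	return disagreeing <= len(offsetsNs)/2
}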

PTP is for today and the future

We believe PTP will become the standard for keeping time in computer networks in the coming decades. That's why we're deploying it on an unprecedented scale. We've had to take a critical look at our entire infrastructure stack — from the GNSS antenna down to the client API — and in many cases we've even rebuilt things from scratch.

As we continue our rollout of PTP, we hope more vendors who produce networking equipment will take advantage of our work to help bring new equipment that supports PTP to the industry. We've open-sourced most of our work, from our source code to our hardware, and we hope the industry will join us in bringing PTP to the world. All of this has been done in the name of boosting the performance and reliability of existing solutions, but also with an eye toward opening up new products, services, and solutions in the future.

We would like to thank everyone involved in this endeavor, from Meta's internal teams to vendors and manufacturers collaborating with us. Special thanks goes to Andrei Lukovenko, who connected the time enthusiasts.

This journey is only one percent finished.