Let's go in layers.
Step 1) Hardware
Your onboard clock is a battery-backed quartz real-time clock (RTC) on the motherboard. It's reasonably stable, but quartz still drifts with temperature and age. It's referenced on startup, after which the OS keeps time using the CPU's own timers. Those timers can drift too, because historically they tracked the CPU clock frequency, which is dynamic. You really can't reach out and query the RTC in real time either: context switches aren't free, and IO is generally pretty slow since you have to leave the CPU die.
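A quick way to see this split in practice: most OSes expose both a wall clock (set from the RTC at boot, adjusted later by sync) and a monotonic clock (driven by the CPU's internal timer, never stepped). A minimal Python sketch:

```python
import time

def clock_samples(n=5):
    """Sample the wall clock and the monotonic clock side by side."""
    samples = []
    for _ in range(n):
        # time.time()      -> wall clock: settable, can jump forwards/backwards
        # time.monotonic() -> CPU-timer based: only ever moves forward
        samples.append((time.time(), time.monotonic()))
    return samples

samples = clock_samples()
# The monotonic clock never goes backwards, even if the wall clock is adjusted.
assert all(b[1] >= a[1] for a, b in zip(samples, samples[1:]))
```

If you need to measure an interval, use the monotonic clock; the wall clock is only for telling humans what time it is.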
Step 2) OS
Most of your POSIX (and Windows) OSes today are time-shared, not real-time. Which means when you tell the OS to do something, like say sleep for 4ms, it generally will, but not always; it doesn't guarantee the deadline. Your 4ms sleep will typically land somewhere between 4-12ms depending on scheduler load.
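You can measure this yourself. A minimal sketch that asks for a 4ms sleep and records how much the OS overshoots:

```python
import time

def measure_sleep_jitter(requested_s=0.004, trials=20):
    """Ask the OS to sleep and record how much longer it actually took."""
    overshoots = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - requested_s)
    return overshoots

jitter = measure_sleep_jitter()
# The OS promises *at least* the requested sleep; the overshoot is up to
# the scheduler and varies with load.
assert all(j >= -0.001 for j in jitter)
```

On an idle box the overshoot is often well under a millisecond; under load it can be many milliseconds, which is exactly the 4-12ms spread described above.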
So this means if you have, say, a 50ms heartbeat clock: every 50ms you send a packet out to the other servers on the network. You take your current timestamp, which is imperfect (see step 1), and then realize that fetching that timestamp is itself a blocking operation, so your 50ms timestamp is only accurate to within +/- 4-12ms depending on scheduler load.
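One standard trick to at least stop that per-beat jitter from accumulating into long-term drift: sleep until an absolute deadline instead of sleeping a fixed 50ms each loop. A hypothetical sketch (the `send` callback and interval are made up for illustration):

```python
import time

HEARTBEAT_S = 0.05  # 50 ms

def heartbeat(n_beats, send):
    """Schedule beats against absolute deadlines on the monotonic clock,
    so each beat's scheduler jitter doesn't compound over time."""
    next_deadline = time.monotonic()
    for _ in range(n_beats):
        next_deadline += HEARTBEAT_S
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        # Each individual timestamp is still off by the scheduler jitter;
        # only the *long-run average* rate is protected.
        send(time.time())

stamps = []
heartbeat(5, stamps.append)
```

Each beat is still individually late by whatever the scheduler felt like, but a beat that runs 8ms late doesn't push every subsequent beat 8ms later.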
And this is all before it even hits the network. Guess what!
Step 3) Networks
Modern networks (and I say modern as in anything past 1970) are packet switched. What this means is information is broken into small chunks, and each chunk is transmitted independently and re-assembled in the correct order at the other end. A packet-switched network doesn't make timing guarantees. One eventuality you have to be prepared for is message part 2/3 arriving before part 1/3. This means you can't process the message until every part has arrived.
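Conceptually the receiver has to buffer out-of-order parts. A toy sketch (the `(seq, total, payload)` packet format is invented for illustration, not any real protocol):

```python
def reassemble(packets):
    """Toy reassembly buffer: packets carry (seq, total, payload) and may
    arrive in any order; the message completes only once every part is in."""
    parts = {}
    for seq, total, payload in packets:
        parts[seq] = payload
        if len(parts) == total:
            return b"".join(parts[i] for i in range(total))
    return None  # still waiting on a missing chunk

# Part 2/3 arrives before part 1/3 -- fine, we just buffer until all land.
msg = reassemble([(1, 3, b"lo "), (0, 3, b"hel"), (2, 3, b"world")])
assert msg == b"hello world"
```

The timing consequence: your message's delivery time is gated by its *slowest* packet, not its average one.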
To work around these problems, a lot of smart people invented things like Nagle's algorithm, which dynamically delays small writes so the network stack can combine them into fewer, larger packets, cutting down the congestion caused by dozens of incredibly tiny packets all heading to the same server. But this creates its own problems: your network stack may, seemingly at random (from the programmer's PoV), decide to hold back and coalesce a packet or two if they're small enough, adding latency right when you least expect it.
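If that coalescing delay matters more to you than the congestion savings, TCP lets you turn Nagle off per-socket with the `TCP_NODELAY` option. A minimal sketch:

```python
import socket

def make_low_latency_socket():
    """Create a TCP socket with Nagle's algorithm disabled, for traffic
    where you'd rather send many tiny packets than wait for coalescing."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = make_low_latency_socket()
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```

This is the usual tradeoff knob: heartbeats and trading systems disable Nagle; bulk transfers leave it on.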
And then there is network scheduling. What if you're ready to say something but there are already Ethernet frames queued up to be sent? Well you're fucked, gotta wait!
What does this all boil down to? Typically, two computers that are even one hop away from each other can't keep their clocks in sync. So how do we keep them in sync? Well, you don't. Modern distributed systems aim for eventual consistency (Google the CAP theorem). Google keeps its clocks synced by installing GPS receivers and atomic clocks in its datacenters (see Spanner/TrueTime). Locally, my company samples a high-speed (>100kHz) reference clock signal generated by a dedicated, finely calibrated external signal generator; individual counters on each computer count the peaks.
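For everyone without an atomic clock in the rack, the classic approach (Cristian's algorithm, the idea underneath NTP) is to estimate your offset from a reference over the network and assume the round-trip delay is symmetric. A minimal sketch; the `request_remote_time` callback stands in for a real network call:

```python
import time

def estimate_offset(request_remote_time):
    """Cristian-style offset estimate: assume network delay is symmetric
    and that the remote clock was sampled halfway through the round trip.
    Real networks violate that symmetry, which bounds your accuracy."""
    t0 = time.monotonic()
    remote = request_remote_time()   # blocking network call in real life
    t1 = time.monotonic()
    rtt = t1 - t0
    local_at_sample = t0 + rtt / 2   # the symmetry assumption
    return remote - local_at_sample, rtt

# Fake a remote clock that runs 2.5 s ahead of our local monotonic clock.
offset, rtt = estimate_offset(lambda: time.monotonic() + 2.5)
assert abs(offset - 2.5) < 0.1
```

The error bound is roughly half the round-trip time, which is exactly why every network problem above (queueing, coalescing, reordering) feeds directly into how well you can sync.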
Most of your "smart" synced clocks are really cheap internal timers that drift badly. Every so often they get a pulse that tells them the reference time, they snap to it, then start drifting again. This is generally fine for human-scale events, like meetings and bathroom breaks; 5-10 minutes is an acceptable margin of error for humans.
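The drift-then-snap behavior is easy to model. A toy simulation (the 50 ppm drift rate and hourly sync interval are illustrative numbers, not from any particular device):

```python
def simulate_drifting_clock(true_seconds, drift_ppm, sync_interval_s):
    """Toy model: a local clock that runs fast by drift_ppm and snaps back
    to the true time every sync_interval_s (like a periodic sync pulse)."""
    rate = 1 + drift_ppm / 1e6
    worst_error = 0.0
    local = 0.0
    last_sync = 0
    for t in range(1, true_seconds + 1):
        local += rate                 # local clock ticks slightly fast
        if t - last_sync >= sync_interval_s:
            local = float(t)          # correction pulse: snap to reference
            last_sync = t
        worst_error = max(worst_error, abs(local - t))
    return worst_error

# 50 ppm drift synced hourly: worst error ~= drift_rate * sync_interval.
err = simulate_drifting_clock(true_seconds=7200, drift_ppm=50,
                              sync_interval_s=3600)
assert err < 0.2   # ~0.18 s worst case, vs 0.36 s over 2 h with no sync
```

The takeaway: syncing doesn't remove drift, it just bounds the worst-case error at (drift rate x sync interval), which is why the sync interval has to shrink dramatically as your accuracy requirements tighten.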
1-2ms is completely unacceptable for calibration systems :\