That looks like the wrong approach, since both TCP and UDP messages come with a timestamp of when the message was sent.
You should only compare times, not use a timer... you should use Now() and the packet timestamp.
That will give you the values in milliseconds, or on some platforms microseconds.
I'm afraid that I'm not entirely comfortable with that. Granted, my recent experience doesn't include Windows, but I don't think you'll get timestamps out of the standard (Berkeley sockets) API unless you either explicitly ask for them as a socket option or you use something like PCAP to capture timestamped packets. In addition, Now() isn't, IMO, a good choice for precision timing, since it returns a floating-point number, and the larger it gets relative to its epoch the less precise it is.
However, I agree that the OP's requirement... needs refinement. In particular, he needs to understand that a modern processor has hardware timers independent of any thread, that these timers are CPU-specific, and that since many systems these days have multiple CPUs, and most have multiple cores and/or support for hardware threads, those timers are likely to be oblivious to timer-update interrupts etc.
He also needs to appreciate that most systems have so many hardware and software layers between their network socket and application code that it's extremely difficult to measure network latency etc. in software, even if one or more of the cooperating systems is known to handle e.g. ping (ICMP echo) responses in hardware or at the lowest levels of the device driver.
MarkMLl