There are three normal (= acceptable) cases:
There are a number of very odd cases where adjusting is needed. Here are some of them:
The TM is our source for the host time and will make adjustments for the current timer delivery lag. The simplistic approach taken by TM is to adjust the host time by the current guest timer delivery lag, meaning that if the guest is 1 second behind with PIT/RTC/++ ticks, this should be reflected in the guest wall time as well.
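The lag adjustment described above can be sketched as follows. This is a minimal illustration, not the actual TM code; the names (g_cNsTimerLag, tmGetAdjustedHostTimeNs) are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical: nanoseconds the guest's virtual timers (PIT/RTC/...)
 * are currently behind their ideal delivery schedule. */
static uint64_t g_cNsTimerLag;

/* Hypothetical helper: report the host wall time adjusted backwards by
 * the current timer delivery lag, so the wall time the guest sees stays
 * consistent with the ticks it has actually received. */
static uint64_t tmGetAdjustedHostTimeNs(uint64_t cNsHostNow)
{
    /* If the guest is 1 second behind on ticks, report a wall time
     * that is 1 second behind as well. */
    return cNsHostNow - g_cNsTimerLag;
}
```

So a guest that is lagging by a full second of ticks is handed a wall time one second in the past, rather than a wall time that disagrees with its own tick count.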
Now, there is any amount of trouble we can cause by changing the time. Most applications probably use the wall time when they need to measure things. A wall time that is being juggled about every so often, even if just a little bit, could occasionally upset these measurements by for instance yielding negative results.
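To make the negative-result case concrete, here is a hypothetical illustration of the naive elapsed-time measurement many applications do with the wall clock; if the clock is stepped backwards between the two samples, the result goes negative:

```c
#include <assert.h>
#include <stdint.h>

/* Naive elapsed-time calculation using two wall-clock samples, as a
 * typical application might do. Hypothetical helper for illustration. */
static int64_t measureElapsedNs(uint64_t nsStart, uint64_t nsEnd)
{
    /* No guard against the clock having been stepped backwards. */
    return (int64_t)(nsEnd - nsStart);
}
```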
The bottom line here is that the time sync service isn't really supposed to do anything and will try to avoid having to do anything when possible.
The implementation uses the latency it takes to query host time as the absolute maximum precision to avoid messing up under timer tick catchup and/or heavy host/guest load. (Rationale: a *lot* of stuff may happen on our way back from ring-3 and TM/VMMDev, since we're taking the route through the inner EM loop with its force-action processing.)
But this latency has to be measured from our perspective, which means it could just as easily come out as 0. (OS/2 and Windows guests, for instance, only update the current time when the timer ticks.) The good thing is that this isn't really a problem, since we won't ever do anything unless the drift is noticeable.
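The decision logic described in the last two paragraphs can be sketched like this. Again a hypothetical illustration, not the shipped code; the names (shouldAdjustGuestClock and its parameters) are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: decide whether the time sync service should touch the guest
 * clock at all. The host-time query latency, measured from the guest
 * side, is treated as the best precision we can claim; on coarse-tick
 * guests (OS/2, Windows) it may legitimately be 0. cNsMinDrift stands
 * in for a configured minimum-drift threshold. */
static int shouldAdjustGuestClock(int64_t  cNsDrift,
                                  uint64_t cNsQueryLatency,
                                  uint64_t cNsMinDrift)
{
    uint64_t cNsAbsDrift = (uint64_t)(cNsDrift < 0 ? -cNsDrift : cNsDrift);

    /* Do nothing unless the drift clearly exceeds both the measurement
     * noise (the query latency) and the configured minimum drift. */
    uint64_t cNsThreshold = cNsQueryLatency > cNsMinDrift
                          ? cNsQueryLatency : cNsMinDrift;
    return cNsAbsDrift > cNsThreshold;
}
```

Because the threshold is the larger of the two values, a measured latency of 0 simply means the configured minimum drift alone decides, which is why a 0 latency isn't a problem in practice.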
It now boils down to these three (configuration) factors: