Finally, I admit to being extremely uncomfortable with a query which compares timestamps stored on the server with a value injected from a client system... or vice-versa. My preference is for the server to never attempt to apply DST corrections, and for tables to contain both server- and client-generated timestamps with the appropriate ones selected according to the requirements of an operation.
Great advice, thank you.
To expand somewhat on that: in principle a server, particularly a unix-based server, would be expected to have its hardware clock set either to UTC or to a fixed (and known) offset from UTC, with any file operations timestamped in UTC and a database able to save UTC timestamps and use UTC for interval calculations. Things can still get pretty hairy, however, when it attempts to present the date and time associated with a file with a "correction" for the local timezone and DST ("summer time") offset.
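To illustrate the principle (in Python rather than Pascal, purely for brevity): store and compute in UTC, and apply the timezone/DST "correction" only at the presentation layer. The dates below straddle the 2024 UK clock change and are just illustrative.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Store and compute in UTC; convert to local time only for display.
stored = datetime(2024, 3, 31, 0, 30, tzinfo=timezone.utc)  # just before UK clocks go forward
later  = datetime(2024, 3, 31, 2, 30, tzinfo=timezone.utc)

# Interval arithmetic in UTC is unambiguous: exactly two hours apart.
assert (later - stored).total_seconds() == 7200

# The presentation layer applies the local "correction"; note the wall
# clocks appear THREE hours apart because DST started in between.
print(stored.astimezone(ZoneInfo("Europe/London")))  # 2024-03-31 00:30:00+00:00
print(later.astimezone(ZoneInfo("Europe/London")))   # 2024-03-31 03:30:00+01:00
```

Doing the subtraction on the local wall-clock values instead would silently be an hour out, which is exactly the sort of thing described below.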
I've learnt the hard way to be paranoid about this.
I once had a situation where I had a number of data files bearing locally-generated timestamps (UK), representing data that had been collected from various sites in the UK and mainland Europe on a date when DST applied to some of them and not to others. I ended up with something like five or six distinct sets of times, depending on whether a file had been timestamped (after its arrival in the UK) with or without DST, whether the site originating the data was in a jurisdiction currently applying DST, and the precise timezone rules that the database server was applying. And I had to work out what was going on with unsympathetic senior management breathing down my neck.
After that I was extremely careful to ensure that any system that either generated or displayed timestamps had its hardware clock set to UTC and kept it that way, i.e. with no DST correction.
I was also careful to ensure that any timestamp information that arrived was saved in its original form on a record-by-record basis, even if it was known to originate from an MC146818 chip (embedded-systems geeks will recognise that number and shudder), as well as logging the timestamp of any intermediate file (which /should/ be UTC) and the timestamp of the database's own insertion operation (which again /should/ be UTC... if a database can't get that unambiguously, something is badly wrong).
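That record-keeping discipline can be sketched as follows; this is a hypothetical Python/SQLite illustration (table and column names are mine, not from any real system), keeping the incoming timestamp verbatim alongside the two UTC timestamps we can vouch for ourselves:

```python
import os
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    original_ts    TEXT,  -- timestamp exactly as it appeared in the source record
    file_mtime_utc TEXT,  -- mtime of the intermediate file (should be UTC)
    inserted_utc   TEXT   -- when *we* stored the row, unambiguously UTC
)""")

def ingest(record_ts: str, path: str) -> None:
    # Save the original timestamp as-is: never reinterpret or "correct" it.
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (record_ts,
                mtime.isoformat(),
                datetime.now(timezone.utc).isoformat()))
```

With the original string preserved per record, a later forensic exercise like the one above can at least reconstruct which correction (if any) each source applied.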
Now, I set that up 20 years or so ago, and it's still ticking along. But that's 20 years further from the epoch, which means that any timestamp using a floating-point representation has to put more bits into the integer part and hence has less precision left for the fractional part. And that's another good reason for avoiding TDateTime and Now() in time calculations.
You also have to take into account that different computers have internal clocks that drift at different rates, and will compensate differently when they receive a time update from the Internet... or from a local (NTP etc.) server, which might have its own issues. And don't get me going on GPS...
Ite, missa est
Pax vobiscum.
MarkMLl