The same argument might apply to the DateUtils.IncDay, DateUtils.IncHour, DateUtils.IncSecond and DateUtils.IncMilliSecond functions.
All in all, the internal calculation is:
function IncSecond(const AValue: TDateTime; const ANumberOfSeconds: Int64): TDateTime;
begin
  if AValue >= 0 then
    Result := AValue + ANumberOfSeconds / SecsPerDay
  else
    // For negative dates the time of day is stored "inverted",
    // so plain addition would run the clock backwards.
    Result := IncNegativeTime(AValue, ANumberOfSeconds / SecsPerDay);
  MaybeSkipTimeWarp(AValue, Result);
end;
Quote: If you know the basics of TDateTime then you don't need IncSeconds and friends. [...]

Your summary is correct. However, this simple calculation results in an "inverted" time when the date is negative. That case is covered correctly by the IncXXX functions (see the IncSecond implementation above).
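To see the difference concretely, here is a minimal sketch (assuming a standard FPC setup; only stock SysUtils and DateUtils routines are used):

program NegativeDateDemo;
{$mode objfpc}
uses
  SysUtils, DateUtils;
var
  D: TDateTime;
begin
  // 20 December 1899, 06:00 - ten days before the TDateTime epoch,
  // so the encoded value is negative.
  D := EncodeDateTime(1899, 12, 20, 6, 0, 0, 0);
  WriteLn(DateTimeToStr(D + 1 / SecsPerDay)); // naive addition: 05:59:59
  WriteLn(DateTimeToStr(IncSecond(D, 1)));    // correct: 06:00:01
end.

The naive addition moves the clock from 06:00:00 back to 05:59:59, while IncSecond yields 06:00:01 as expected.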
Hi!
If you know the basics of TDateTime then you don't need IncSeconds and friends.
Type TDateTime = Double;
The integer part contains the number of days since 30 December 1899.
The fractional part contains the time and is organized this way:
1 hour = 1.0 / 24
1 minute = 1.0 / (24*60)
1 second = 1.0 / (24*60*60)
To increment your DateTime with one second you can do
MyDateTime := MyDateTime + 1/ (24*60*60);
To increment your DateTime with 45 days you can do:
MyDateTime := MyDateTime + 45;
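Putting those pieces together, a small self-contained example of the arithmetic (plain FPC with SysUtils; note the caveat elsewhere in this thread that naive addition misbehaves for negative dates):

program DateTimeBasics;
{$mode objfpc}
uses
  SysUtils;
var
  D: TDateTime;
begin
  D := Now;
  // One second is 1/86400 of a day, stored in the fractional part.
  D := D + 1 / (24 * 60 * 60);
  // Whole days live in the integer part, so adding 45 adds 45 days.
  D := D + 45;
  WriteLn(DateTimeToStr(D));
end.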
Winni
Pardon me for jumping in, but can anybody comment on why TDateTime isn't an (80-bit) extended on platforms that support it?
Because Delphi did so. And the accuracy of Double is probably sufficient for the intended purpose.
Since a TDateTime should be treated as opaque, is binary compatibility that important?
MarkMLl
There has been so much code written over the last 25+ years that relies on the internal structure of TDateTime.
If you change this, you break a lot of existing code.
Are you honestly trying to tell me Winni that people are doing internal bitwise manipulation of TDateTime?
No, but floating point math, like +1.0 or even 1/86400 etc., is often done with it.
Bravo Marcov!
Example:
s := "Please pay the bill until "+DateToStr(now+30);
Many devices now measure distances by the time it takes to bounce a signal off something. So, if you want to make a better unit, you should at least take those time frames and that resolution into account. And what we have is already a big improvement over the 18.2 Hz timer tick of the original IBM PC, which was the default for a long time.
Quote: Because Delphi did so. And the accuracy of Double is probably sufficient for the intended purpose.
Hmm. But we're both further from the epoch (1900?) than when Borland selected it (late 90s?), and expecting to be able to use smaller intervals for timeouts etc. as computers speed up (assuming, of course, that that trend continues).
The maximum date supported by TDateTime values is 12/31/9999 23:59:59.999.
Quote: Since a TDateTime should be treated as opaque, is binary compatibility that important?
I need to correct my statement a bit: it's not a Double because Delphi did so, but because Microsoft did so for Visual Basic and Excel. TDateTime is fully interchangeable with the DATE type used in OLE, which is described for the VARIANT struct here (https://docs.microsoft.com/en-us/windows/win32/api/oaidl/ns-oaidl-variant):

Quote: Type: DATE. A date and time value. Dates are represented as double-precision numbers, where midnight, January 1, 1900 is 2.0, January 2, 1900 is 3.0, and so on.
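That epoch convention is easy to verify from FPC itself; a quick sketch:

program OleEpochCheck;
{$mode objfpc}
uses
  SysUtils;
begin
  // Day 0 is 30 December 1899, so midnight, 1 January 1900 encodes as 2.0.
  WriteLn(EncodeDate(1900, 1, 1): 0: 1); // prints 2.0
end.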
That's straight floating point maths, absolutely nothing to do with the number of bits in the representation.
But having the number of fractional bits nibbled away the further the current date moves from the epoch is hardly a good idea.
Double is the highest shared floating point number. But maybe if you implement soft float for 128-bit float, that can be used. (extended is not portable enough to really be a solution)
I did have to decode floating point stuff from an HP instrument a few months ago. Great company, but they also had the big-company "not-invented-here" syndrome with massive internal incompatibility between equipment. Still better than Tektronix, where if you wanted to capture output you had to OCR an Epson-format bitmap.
Quote: Many devices now measure distances by the time it takes to bounce a signal off something. So, if you want to make a better unit, you should at least take those time frames and that resolution into account. And what we have is already a big improvement over the 18.2 Hz timer tick of the original IBM PC, which was the default for a long time.
If I wanted to "make a better unit" I'd use the longstanding IBM mainframe convention: an integer whose LSB represents some unreasonably small timescale (e.g. 1 nSec), with the lowest bit that is actually guaranteed to increment (e.g. every 1024 x 1024 nSec) being implementation-defined.
However I'm /not/ out to make a better unit, I'm just interested in why the extended type isn't being used for what is supposedly an opaque floating point type. Surely nobody in their right mind bit-twiddles floating point numbers?
I note Marco's comment elsewhere yesterday that there are always twits who write in assembler expecting it to be portable. But even there, floating point manipulation was done either by hardware or by predefined libraries.
MarkMLl
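(Purely to illustrate that convention, a hypothetical sketch: the TNanoStamp type, the AddSeconds helper and the zero epoch are all invented for the example.)

program NanoStampDemo;
{$mode objfpc}

type
  // Hypothetical timestamp: Int64 count of nanoseconds since some epoch.
  // The LSB nominally means 1 ns; which low bits actually tick is
  // implementation-defined, as with the IBM mainframe TOD clock.
  TNanoStamp = type Int64;

const
  NsPerSecond = 1000000000;

// Hypothetical helper: pure integer math, so the resolution does not
// degrade as the timestamp moves away from the epoch.
function AddSeconds(const T: TNanoStamp; const Secs: Int64): TNanoStamp;
begin
  Result := T + Secs * NsPerSecond;
end;

begin
  WriteLn(AddSeconds(0, 45 * 24 * 60 * 60)); // 45 days, in nanoseconds
end.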
Quote: I did have to decode floating point stuff from an HP instrument a few months ago. Great company, but they also had the big-company "not-invented-here" syndrome with massive internal incompatibility between equipment. Still better than Tektronix, where if you wanted to capture output you had to OCR an Epson-format bitmap.
128-bit is not "not-invented-here": to my knowledge, hardware with native 128-bit FP sits in this room.
Tektronix I know mostly from large, card-based multi-channel scopes.
Extended only works on 32-bit x86 and x86_64 *nix systems (status on win64 is murky).
Ah, well, I don't know about that (https://en.wikipedia.org/wiki/Fast_inverse_square_root)...
128-bit FP is a quite common format for vector units. Many game consoles, like the Xbox 360 and PS3, have those.
AMD64, not so much. Although you could use fixed-point calculations with the SIMD units.
Maybe, as 64-bit/64-bit pairs or so.
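(For reference, that pairing idea is the classic "double-double" technique. Below is a minimal sketch of the error-free TwoSum step it builds on; this is a textbook algorithm, not anything from the FPC RTL.)

program TwoSumDemo;
{$mode objfpc}

// Knuth's TwoSum: computes S and E such that S + E = A + B exactly,
// where S is the rounded Double sum and E is the rounding error.
procedure TwoSum(const A, B: Double; out S, E: Double);
var
  BV: Double;
begin
  S  := A + B;
  BV := S - A;                      // the part of S that came from B
  E  := (A - (S - BV)) + (B - BV);  // what was lost to rounding
end;

var
  S, E: Double;
begin
  TwoSum(1.0, 1e-30, S, E);
  WriteLn('sum = ', S); // 1.0: the small term vanished in the Double
  WriteLn('err = ', E); // 1e-30: recovered exactly in the second Double
end.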
Quote: Double is the highest shared floating point number. But maybe if you implement soft float for 128-bit float, that can be used. (extended is not portable enough to really be a solution)
My previous point stands: nobody in their right mind gets involved with bit-twiddling floating points :-)
Well, we could definitely use such people, because for FPC we need software 128-bit FP support so that we can fully support cross compilation from platforms that don't support Extended (including x86_64-win64) to those that do. And while the basic routines are already available, functions like exp and ln as well as the trigonometric ones are still missing...
There still isn't native support for BigInt either, so a general library for arbitrary precision fixed and floating point would be awesome. I started both a few times and translated one halfway from Delphi, but it was too much work for what I needed it for. The last time I wrote a working floating-point library was before floating-point hardware was common. So, long ago :)
Neither a BigInt library nor support for arbitrary precision fixed and floating point arithmetic is important for cross compilation. Support for 128-bit floating point, however, is.
I'd not presume to argue with that, but as a reality check: am I correct in believing that only the POWER architecture supports this in hardware (possibly with RISC-V in the future)? And however much I've favoured alternative architectures in the past, is that one really likely to remain viable?
"software 128-bit FP support" is a very generic term. Are the exact requirements noted down somewhere? Depending on these the task can vary from several man days to man years ...
* full implementation in FPC, or wrapper for an existing FP128 library?
* 32/64 bit and little-/big-endian architecture support?
* based on IEEE754?
* exposure of an internal, more efficient format too?
* designed to allow replacement of some kernel functions by asm routines?
* how much trade-off is allowed between exactness and speed?
* ...
"software 128-bit FP support" is a very generic term. Are the exact requirements noted down somewhere? Depending on these the task can vary from several man days to man years ...
* full implementation in FPC, or wrapper for an existing FP128 library?
* 32/64 bit and little-/big-endian architecture support?
* based on IEEE754?
* exposure of an internal, more efficient format too?
* designed to allow replacement of some kernel functions by asm routines?
* how much trade is allowed between exactness and speed?
* ...
We already have the core routines in rtl/inc/softfpu.pp (which is based on IEEE 754 and usable through the units sfpu128 and ufloat128), so any implementation of the additional functions must work together with those. External libraries are a no-go: the license must be compatible with the RTL's (LGPL with static linking exception), and because this is needed especially for cross compilation, a full Pascal implementation that works correctly on (16/)32/64-bit, both big and little endian, is a necessity. As a first step, exactness is more important than speed.