The basic assumption here is that a binary value is not exact when it isn't equal to a decimal value. Or, more broadly: you cannot simply translate an arbitrary floating-point number into a different numeral system with the same accuracy. Obvious example: 1/3 is 0.1 in base 3 and 0.4 in base 12, but it has no finite representation in base 2 or base 10. That's why early computers used BCD to calculate directly in base 10, because that's the norm for human-facing arithmetic.
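A quick sketch of the point in Python: the literal 0.1 has no finite base-2 representation, so the double actually stored is only the nearest representable value. 1/3 fares no better in base 10 than in base 2.

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(0.1) shows the exact value of the double that "0.1" becomes;
# it is the nearest base-2 fraction, not one tenth:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# 1/3 has no finite representation in base 2 *or* base 10,
# so it gets rounded in both systems:
print(float(Fraction(1, 3)))  # 0.3333333333333333
```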

But it's probably missing the point, since fractions crop up during math either way, so it's useless trying to keep things "exact". It's far more practical to look at the number of significant digits you need, and that's where floating-point math excels. In fixed-point math (currency), you lose accuracy fast, while with floating-point the number of significant figures stays as high as the container size allows.

In short: for math, always use the largest floating-point format available, which is extended precision (x86/AMD64 only) or double (everything else). And only compare your chosen number of significant figures.
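"Comparing only your chosen number of significant figures" can be done with a relative tolerance; one way to sketch it in Python (the `sig_equal` helper and the choice of 9 figures are mine, not anything standard):

```python
import math

def sig_equal(a, b, sig=9):
    # Treat a and b as equal if they agree to `sig` significant figures,
    # using a relative tolerance of 10**-sig.
    return math.isclose(a, b, rel_tol=10**-sig)

x = 0.1 + 0.2
print(x == 0.3)           # False: exact comparison trips over rounding
print(sig_equal(x, 0.3))  # True: the values agree to 9 significant figures
```

Note that a relative tolerance is the right tool here, not an absolute epsilon, because the spacing between adjacent doubles scales with their magnitude.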

Well, OK, perhaps not if you're calculating 3D geometry. Although Kerbal Space Program, with its well-known floating-point physics glitches, doesn't seem to agree.