The rounding errors you have run into on the Macintosh and IBM PC have been around since the beginning of personal computers (and longer!).
Computers must convert decimal floating point numbers to binary before the CPU can do arithmetic on them, and the binary result has to fit in a fixed number of bits. Most decimal fractions have no exact binary equivalent of finite length (just as 1/3 has no exact finite decimal expansion), so the computer stores the nearest binary value that fits. The tiny difference between the number you typed and the number actually stored shows up as a slight rounding error.
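If you want to see the effect directly, a few lines of C will show it (a quick sketch; any standard C compiler should produce similar output):

    #include <stdio.h>

    int main(void)
    {
        double d = 0.1;   /* the nearest binary value to decimal 0.1 */

        /* Printed to 20 decimal places, the stored value is not exactly 0.1. */
        printf("0.1 is stored as   %.20f\n", d);

        /* Because each copy is slightly off, ten of them do not sum to 1.0. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += d;
        printf("ten 0.1s add up to %.20f\n", sum);

        return 0;
    }

On most machines the first line prints 0.10000000000000000555..., and the sum comes out just shy of 1.0.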
In the example you describe, the error is .00000000000000002486.... This can be annoying, but it is a good trade-off for the speed the computer gains by doing its arithmetic in binary.