Apple calculator is not trustworthy if you are a programmer

So I'm posting this here after many months of no response from Apple to a submitted bug report.


If you are a programmer and you use the Apple calculator for big number calculation (say for flipping bits), there is a verifiable bug in the scientific notation to normal conversion. That is, the calculator will give you wrong answers.


I noticed this on Catalina and verified that it is still there in Big Sur, so I don't know how long this bug has been in circulation.


To check/verify:


Start up Calculator and enter: 288230376151711743 * 45


It will give you an answer of "1.297036692682703e19"


Switch to programmer mode (cmd-3) and it will convert to a normal number and report "12970366926827028480". The last 2 digits here are wrong; the answer is "12970366926827028435".


BTW, 288230376151711743 is 2^58-1 and this calculation easily fits in a 64-bit number. If you start in programmer mode, it appears to work ok. My guess is the problem is the scientific notation to normal number conversion.
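
If you want to double-check the exact answer outside of Calculator, a minimal C sketch like the following (purely illustrative, nothing to do with how Calculator is implemented) does the multiplication in 64-bit integer arithmetic:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint64_t n = (UINT64_C(1) << 58) - 1;   /* 288230376151711743, i.e. 2^58 - 1 */
    uint64_t product = n * UINT64_C(45);    /* exact; the product fits easily in 64 bits */
    printf("%" PRIu64 "\n", product);       /* prints 12970366926827028435 */
    return 0;
}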


This introduced bugs into applications that I write, and I ended up spending a long time trying to figure out what was going on (I mean, when was the last time you suspected the calculator was wrong?).


Just a public service announcement...

MacBook Pro 17″, OS X 10.11

Posted on Feb 20, 2021 7:34 AM



10 replies
Question marked as Top-ranking reply

Feb 20, 2021 6:24 PM in response to tegtmeye

tegtmeye wrote:

So I'm posting this here after many months of no response from Apple to a submitted bug report.

It's going to be a long wait.

If you are a programmer and you use the Apple calculator for big number calculation (say for flipping bits), there is a verifiable bug in the scientific notation to normal conversion. That is, the calculator will give you wrong answers.

The problem here is with floating point vs integer representation.

I noticed this on Catalina and verified that it is still there in Big Sur, so I don't know how long this bug has been in circulation.

Officially since 1985. That is when the IEEE 754 standard was first published.

To check/verify:

Start up Calculator and enter: 288230376151711743 * 45

It will give you an answer of "1.297036692682703e19"

But what happens when you select 288230376151711743 and paste it into calculator in scientific mode? That's your hint.

BTW, 288230376151711743 is 2^58-1 and this calculation easily fits in a 64-bit number.

Yes and no. The problem here is that you are switching between scientific format and integer format. It is true that 288230376151711743 fits into a 64-bit integer. You can even multiply and divide that by 45. But floating point is different. Double precision floating point reserves 11 bits for the exponent and 52 bits for the mantissa. With a little fudgery, it actually has 53 significant bits. 53 is less than 58. That's why your experiment fails.
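
To illustrate (just a sketch of what plain double arithmetic does, not a claim about how Calculator is actually implemented), a few lines of C reproduce the number from the original post:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t n = (UINT64_C(1) << 58) - 1;  /* 2^58 - 1 = 288230376151711743 */
    double d = (double)n;                  /* only 53 significant bits, so this rounds up to 2^58 */
    printf("%.0f\n", d);                   /* 288230376151711744 */
    printf("%.0f\n", d * 45.0);            /* 12970366926827028480, the "wrong" answer */
    return 0;
}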


Floating point tries to be a jack of all trades. It gives you high precision or high range, but not both. All values are encoded in scientific format: 1.xyz times 2^whatever. And it's all base-2, too. But when you have a high exponent, there isn't enough precision to represent all of the digits. You are crossing just over that boundary.
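
A small sketch of that encoding (using the usual trick of copying the double's bytes into an integer to look at the raw bits; again, just an illustration):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 288230376151711743.0;       /* 2^58 - 1 in the source, rounds to 2^58 */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);        /* reinterpret the 64-bit IEEE 754 encoding */

    unsigned sign = (unsigned)(bits >> 63);
    int exponent = (int)((bits >> 52) & 0x7FF) - 1023;   /* 11 bits, biased by 1023 */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;       /* 52 stored bits */

    printf("sign=%u exponent=%d mantissa=%013llx\n",
           sign, exponent, (unsigned long long)mantissa);
    /* prints: sign=0 exponent=58 mantissa=0000000000000 */
    return 0;
}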


If you are programming, you can use the double-double format. Or you could use an arbitrary precision math library. However, modern CPUs only have hardware support for the standard floating point formats, so anything beyond them is done in software. You can get 58-bit precision, but it might be 58 times slower, or maybe 5800 times slower. In most applications, it's not a big deal. Double precision is usually plenty. Single precision, however, is often not enough. Alas, GPU calculations are usually limited to single precision, or even half precision. We still have some room to grow.


So, yes, Apple calculator is trustworthy if you are a programmer. Programmers should know the capabilities and limits of floating point math, when to use it, and when not to.


For a really fascinating real-world example, consider that most date and time calculations are done using floating point arithmetic with the time represented as the number of seconds since the "epoch", which is usually Jan. 1st, 1970. If you do the math, double-precision time stamps run out in 2038. Even worse, they start to lose precision some time before that. That's only 17 years from now. It's like 2004 vs. now. I'm just saying that would be a really good time to stock up on survival supplies and avoid any technology.

Feb 21, 2021 11:34 AM in response to etresoft

I appreciate you taking the time to answer, but I believe you are missing the point.


I develop scientific applications for both desktop and embedded platforms for a living so I am fully aware of how floating point numbers work.


If the calculator application is using a naked floating point type (aka a 64-bit double) as the backing representation, then whoever wrote it is extraordinarily naive and it is most definitely a bug. Calculators have exact precision rounded to the last presented digit. Said another way, give it any number within the arbitrary range limit of the calculator and the answer will (should) be exact, rounded to the last presented digit. This has been true since electronic calculators have been around.

One way to do this is to explicitly NOT use floating point arithmetic but rather scaled integer arithmetic. Another is to use an arbitrary precision library, but even then, you don't use it as a naked backend type.

Part of the reason for not using floating point numbers is the limited precision that you state; another is that the distance between any two representable numbers is not uniform. This means that there are numbers that you can physically type into a base-10 calculator with enough input digits (which the Calculator app has) but that ultimately get altered in the floating point representation. This is bad if you don't limit the input range to what is representable. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" from Goldberg, 1991 (plenty of Google results or a good link at http://pages.cs.wisc.edu/~david/courses/cs552/S12/handouts/goldberg-floating-point.pdf)
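
To be concrete about what I mean by scaled integer arithmetic, here is a toy sketch (purely illustrative; I am not claiming this is how Calculator, or any real calculator, is implemented):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Toy "scaled integer" decimal value: coeff * 10^exp10.
   A real implementation would also handle overflow, normalization,
   addition, division, rounding to the displayed digits, etc. */
typedef struct {
    uint64_t coeff;
    int      exp10;
} decnum;

static decnum dec_mul(decnum a, decnum b) {
    /* Exact as long as the coefficient product fits in 64 bits. */
    decnum r = { a.coeff * b.coeff, a.exp10 + b.exp10 };
    return r;
}

int main(void) {
    decnum a = { UINT64_C(288230376151711743), 0 };  /* 2^58 - 1 */
    decnum b = { UINT64_C(45), 0 };
    decnum p = dec_mul(a, b);
    printf("%" PRIu64 "e%d\n", p.coeff, p.exp10);    /* 12970366926827028435e0 */
    return 0;
}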


The point that I am trying to make in my original post is that however the calculator is actually written (maybe floating point, maybe not), you cannot make a quick calculation up to 64 bits (clearly supported) and expect the number reported to be correct. So NO, it is not trustworthy. Stating that the wrong answer is actually user error, based on a guess about how it is implemented, is overly dismissive.


I can demonstrate why there is more going on than your explanation. The largest number representable by a 64-bit IEEE 754 floating type without loss of precision is 2^53 or 9007199254740992. Go ahead and enter it into the calculator, now add 1. Calculator reports for me 9007199254740992 (the same number) because it cannot represent 9007199254740993. Try it again by adding 3. Now it reports 9007199254740996, which is the next largest integer that it can represent (which is your argument). However, enter 9007199254740994 and add 1: Calculator reports 9007199254740995, so clearly behind the scenes something more sophisticated than just using a 64-bit double is going on. I'm not going to guess what that is, just that 2^58-1 is an integer greater than 2^53 which is obviously representable in the Calculator app, but doing arithmetic on it gives the wrong answer.


Lastly, your time example is completely wrong. Time is not done using floating point numbers. The two common forms of representing time as an underlying type are the number of integer seconds since the epoch (time_t) and the number of seconds and nanoseconds since the epoch (struct timespec). Neither is a floating point type. See https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/time.h.html and yes, these are the underlying types for macOS's BSD backend.
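
For reference, a minimal sketch that prints both of those types (standard POSIX calls only, nothing macOS-specific assumed):

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);                 /* whole seconds since the epoch, an integer type */
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);      /* integer seconds plus integer nanoseconds */

    printf("time_t:   %lld\n", (long long)now);
    printf("timespec: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}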

Feb 20, 2021 4:53 PM in response to Kurt Lang

Here's a very good and free online calculator. I tested your example and it does give the correct result.


https://keisan.casio.com/calculator


Edit: Also found an excellent free calculator in the App Store that does large values like that, and returns the correct result. Search for Sci:Pro Calc. Unfortunately, there doesn't appear to be a Mac version. iPad or iPhone only.

Feb 20, 2021 5:18 PM in response to Kurt Lang

Thanks for finding an alternative, but that wasn't my intent. There are lots of places to find the correct answer. One free, web-based place is www.wolframalpha.com.


This was really to bring to other readers' attention that Apple's Calculator can't be trusted for big numbers. And although I haven't confirmed this, I suspect the bug is an AppKit problem and not specific to the Calculator app.

Feb 21, 2021 2:44 PM in response to tegtmeye

tegtmeye wrote:

Calculators have exact precision rounded to the last presented digit. Said another way, give it any number within the arbitrary range limit of the calculator and the answer will (should) be exact, rounded to the last presented digit. This has been true since electronic calculators have been around.

Hardly.

Stating that giving the wrong answer is actually user error because of a guess on how it is implemented is overly dismissive.

It is precisely dismissive enough.

I can demonstrate why there is more going on than your explanation. The largest number representable by a 64-bit IEEE 754 floating type without loss of precision is 2^53 or 9007199254740992. Go ahead and enter it into the calculator, now add 1. Calculator reports for me 9007199254740992 (the same number) because it cannot represent 9007199254740993. Try it again by adding 3. Now it reports 9007199254740996, which is the next largest integer that it can represent (which is your argument). However, enter 9007199254740994 and add 1: Calculator reports 9007199254740995, so clearly behind the scenes something more sophisticated than just using a 64-bit double is going on.

Floating point arithmetic can be quite sophisticated at the edges. However, I don't think you are anywhere close to those edges. I followed your instructions above and got a different result. How am I supposed to believe your claims about Calculator's failings when your numbers don't add up?


The easiest way to see what is going on would be to actually write a little C program and print out doubles (as unsigned longs) and see what the bit patterns are. It would look something like this:

9007199254740992 = 20000000000000
9007199254740993 = 20000000000000
9007199254740994 = 20000000000002
9007199254740995 = 20000000000004
9007199254740996 = 20000000000004
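
One way to produce a listing like that (just a sketch: convert each integer to double and back, then print the result in hex):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t values[] = {
        UINT64_C(9007199254740992), UINT64_C(9007199254740993),
        UINT64_C(9007199254740994), UINT64_C(9007199254740995),
        UINT64_C(9007199254740996)
    };
    for (int i = 0; i < 5; i++) {
        double d = (double)values[i];   /* rounds to the nearest representable double */
        uint64_t back = (uint64_t)d;    /* convert back to see where it landed */
        printf("%llu = %llx\n",
               (unsigned long long)values[i], (unsigned long long)back);
    }
    return 0;
}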

What you are probably seeing here (or what you would have seen) is a rounding error. This appears to be mentioned in that paper you cited. But that paper is 30 years old. Modern terminology is a little bit different now. I couldn't remember the term offhand, so I looked at some code I had written and found it - "ties to even". Using that modern term, I found this totally random PowerPoint that is much more accessible and useful in terms of learning about floating point arithmetic instead of just trying to make a point on the internet.

Lastly, your time example is completely wrong. Time is not done using floating point numbers.

The two common forms of representing time as an underlying type are the number of integer seconds since the epoch (time_t) and the number of seconds and nanoseconds since the epoch (struct timespec). Neither is a floating point type. See https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/time.h.html

There is a big difference between representations of numbers, which can be very accurate and arbitrarily precise, and calculations on those numbers, which are subject to laws of both physics and economics. Review that page you linked to and look for the only operation that actually does calculations on time values. What data type does it use?

these are the underlying types for MacOS's BSD backend.

macOS doesn't have a "BSD backend". It is a mongrel operating system made from several different systems, including BSD, NeXT, mach, MacOS, iOS, and even a little bit of Windows thrown in for good measure. It does expose a UNIX-compliant API that is generally close enough to BSD to fool most people and most autoconf scripts.


I don't know why you are worrying over this issue for months. It's just a little calculator app.

Feb 21, 2021 6:33 PM in response to etresoft

I do not understand your goal here. You are clearly outside of your skill set and not really adding any value. Is it simply to boost your response count?


Calculators have exact precision rounded to the last presented digit. Said another way, give it any number within the arbitrary range limit of the calculator and the answer will (should) be exact, rounded to the last presented digit. This has been true since electronic calculators have been around.


Hardly.

This is wrong, again. It wasn't until recent times that calculator microprocessors had floating point units.


It is precisely dismissive enough.

Let me rephrase. It is arrogant.


Floating point arithmetic can be quite sophisticated at the edges. However, I don't think you are anywhere close to those edges. I followed your instructions above and got a different result. How am I supposed to believe your claims about Calculator's failings when your numbers don't add up?


Maybe you are typing it in wrong.


The easiest way to see what is going on would be to actually write a little C program and print out doubles (as unsigned longs) and see what the bit patterns are. It would look something like this...


All this shows is the hardware's implementation of the floating point to integral type cast. But even that is hardware dependent. It is unclear to me what that is supposed to bring to the discussion.


This appears to be mentioned in that paper you cited. But that paper is 30 years old...

It was required reading 25 years ago for me in school and I'm pretty sure it still is. I am not trying to make a point, I am trying to educate someone who is outside of their lane. The floating point environment is fairly standardized---the most fundamental parts and their behaviors are part of the C standard. Intel's or ARM's implementation of floating point units, the compiler's use of those instructions, and your or my understanding of it are not what is at issue here. The toolchain's implementation of floating point types is most certainly correct. Not being able to multiply two integers whose product is within the supported bit depth, and blaming it on the underlying type, is a poor excuse.


As far as your header picture goes, nice try. This is completely and utterly wrong, again. The time functions are a C kernel interface, not written in Swift or Objective-C.


/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/i386/types.h 


Which you can follow from the include chain:


/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/sys/time.h

Also, "time intervals" are not the same as a "time point". Time intervals are the difference between any arbitrary points in time. A time point is relative to the epoch.


There is a big difference between representations of numbers, which can be very accurate and arbitrarily precise, and calculations on those numbers, which are subject to laws of both physics and economics.

Not relevant. So either you are the developer that wrote the Calculator app trying to defend a bad design decision, or someone who has no idea how the underlying calculations are taking place and is ultimately defending something you know absolutely nothing about. Which is it?


macOS doesn't have a "BSD backend". It is a mongrel operating system made from several different systems, including BSD, NeXT, mach, MacOS, iOS, and even a little bit of Windows thrown in for good measure. It does expose a UNIX-compliant API that is generally close enough to BSD to fool most people and most autoconf scripts.

Red herring. The macOS kernel is basically BSD. True, there is a lot bolted on to it, but it is still basically BSD. Likewise, it isn't just to "fool an autoconf script"; it is actually Unix-compliant and branded. That means it has to comply with the required Unix interfaces---including the fact that the system time types are not floating point. Sorry.


https://opensource.apple.com/source/xnu/xnu-7195.81.3/bsd/

https://www.opengroup.org/openbrand/register/xy.htm


I don't know why you are worrying over this issue for months. It's just a little calculator app.

I am not worrying about this for months. I put a tip on the discussion site so that maybe others wouldn't get caught in the same trap I did. My question is: why didn't you just say "thanks" and leave it at that? You obviously don't program for a living, so why do you even care? Your responses are not adding value. You are clearly just googling topics and trying to pass off what you've read as expertise. Please stop.

Feb 21, 2021 7:09 PM in response to tegtmeye

tegtmeye wrote:

My question is: why didn't you just say "thanks" and leave it at that?

And that’s the same question I have for you. You claim to have found bugs in a calculator tool that have somehow affected your own scientific desktop and embedded apps. I explain how floating point arithmetic works, but you don’t want to hear it. You just want to pick fights and insult people.

You obviously don't program for a living so why do you even care?

LOL! If only you knew...

You are clearly just googling topics and trying to pass off what you've read as expertise.

And you’re projecting.

Please stop.

But didn’t you come here complaining that Apple was not responding to your bug report? And now you are asking for responses to stop? Wish granted.

Feb 21, 2021 7:31 PM in response to etresoft

And that’s the same question I have for you. You claim to have found bugs in a calculator tool that have somehow affected your own scientific desktop and embedded apps. I explain how floating point arithmetic works, but you don’t want to hear it. You just want to pick fights and insult people.


Because you appear to be incapable of actually reading what people are stating, and are instead projecting your "solutions" onto things that are not problems.


I did not post to complain about Apple not responding. Again I posted to provide a tip so others won't get caught in the same trap. I believe I stated that at least twice.


You "explaining floating point numbers", again, is not the issue but you didn't want to read that. I know how floating point number work. You jumped to the conclusion that I must not understand how floating point numbers work and felt the need to "educate me". The implementation of the calculator app is flawed. An application putting in two valid integer inputs that reproduce an incorrect output is software testing 101. If the implementation app chose to be naive in its implementation, then it should have restricted its integer input to what it could represent. Anything else is a bug. If I implemented a calculator using an 8-bit number as the underlying type and when people complained that the result overflowed with unrestricted input, I tried to make the argument that it was user error and they didn't understand 8-bit number is a ludicrous as the argument that you are presenting.


I am pretty sure I didn't pick a fight. I am responding to the numerous times you've posted incorrect information.


LOL! If only you knew...

Judging by the amount of things you got wrong, I probably don't.


But didn’t you come here complaining that Apple was not responding to your bug report? And now you are asking for responses to stop? Wish granted.

I didn't ask for anyone's help. Again, please READ what people are writing before feeling like you need to respond and "help".


You are welcome to the last word.

This thread has been closed by the system or the community team.
