Floats are weird. Anyways, here's a basic calculus exercise:

$$\lim_{x \to 0} \frac{e^x - 1}{x} = 1$$
However, I won't prove that limit here, because this is a post about floats being weird, not a post about calculus. Instead, let's try to approximate this limit numerically using Python, because what's the worst that could happen? Let's write that fraction as a Python function and plug in smaller and smaller $x$.
```python
import math

def f(x):
    # Directly compute (e^x - 1) / x
    return (math.exp(x) - 1) / x
```
`f(1e-9)` returns 1.000000082740371 and `f(1e-12)` returns 1.000088900582341, which is already weird. Shouldn't the approximation get better with smaller $x$? However, trouble really strikes with `f(1e-15)`: it returns a shocking 1.1102230246251565, which is off by more than 10%!
But, with this one weird trick…
```python
def g(x):
    # Mathematically the same as f: y = e^x, so log(y) is just x
    y = math.exp(x)
    return (y - 1) / math.log(y)
```
`g(1e-9)` returns 1.0000000005, `g(1e-12)` returns 1.0000000000005, and `g(1e-15)` returns 1.0000000000000004, giving us exactly what we want! 🤯
Wait a minute! This new function is doing redundant computation! Why should we take $\ln(e^x)$ when we can simply divide by $x$? How does this reduce the error so effectively?
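Before explaining it, here's a quick peek (my own little experiment, not part of the trick itself) at what the two denominators actually look like for a tiny $x$:

```python
import math

x = 1e-15
y = math.exp(x)     # already rounded, so it carries a small error
print(x)            # the denominator that f divides by
print(y - 1)        # noticeably different from 1e-15
print(math.log(y))  # also off from 1e-15, but very close to y - 1
```

Keep those numbers in mind; the analysis below explains where they come from.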
An assumption
To explain this phenomenon, we need to make a huge simplifying assumption. For some operation like $a + b$, when done using floating-point arithmetic, the final answer won't be exactly equal to the exact value of $a + b$. Let's use $\mathrm{fl}(a + b)$ to denote the value of this floating-point calculation. Then we will assume that $\mathrm{fl}(a + b) = (a + b)(1 + \varepsilon)$ for some small $\varepsilon$. The upper bound on $\varepsilon$ depends on what kind of floating-point numbers we're using. Python uses IEEE 754 double-precision floats, which have an $\varepsilon$ with magnitude at most $2^{-53} \approx 1.1 \times 10^{-16}$.
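To make that bound concrete, here's a small illustration (my own, using the fractions module for exact arithmetic; nothing here is needed for the argument that follows):

```python
import math
from fractions import Fraction

# Worst-case relative error of a single rounded double operation: 2**-53,
# which is half the gap between 1.0 and the next representable double.
u = Fraction(2) ** -53
print(float(u), math.ulp(1.0) / 2)  # both are about 1.1e-16

# Example: 1.0 + 1e-16 isn't representable, so the sum gets rounded (here,
# back down to exactly 1.0), but the relative error stays below u.
a, b = 1.0, 1e-16
computed = a + b                             # rounds to 1.0
exact = Fraction(a) + Fraction(b)            # exact rational sum
relative_error = abs(Fraction(computed) - exact) / exact
print(computed == 1.0, relative_error <= u)  # True True
```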
You might be wondering why we didn't choose a different assumption, that $\mathrm{fl}(a + b) = a + b + \varepsilon$ for some small $\varepsilon$. This is because floating-point numbers become spaced farther apart as they get larger, so for large $a + b$, the computed $\mathrm{fl}(a + b)$ might have a large absolute difference from $a + b$, but the relative difference $\varepsilon$ will still be small. Thus, our original assumption is indeed the correct one to make.
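Here's a quick way to see that growing spacing (again, just an illustration; math.ulp reports the gap from a float to the next representable one):

```python
import math

# The gap between consecutive doubles grows with magnitude,
# but it stays around 10**-16 of the number itself.
for value in [1.0, 1e3, 1e9, 1e16]:
    gap = math.ulp(value)
    print(f"value={value:.0e}  gap={gap:.3e}  relative gap={gap / value:.3e}")
```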
We will also assume that the same holds for other operations like subtraction, multiplication, exponentiation, and logarithms. Now we have the necessary tools to tackle the problem!
Analyzing f
First, let's find $\mathrm{fl}(f(x))$, the value that f actually computes. Using our assumption repeatedly (once for the exponential, once for the subtraction, and once for the division), we have

$$\mathrm{fl}(f(x)) = \frac{\bigl(e^x(1+\varepsilon_1) - 1\bigr)(1+\varepsilon_2)}{x}\,(1+\varepsilon_3)$$
You might remember from calculus that $e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$, which is a Taylor series. Since we are only plugging in very small values of $x$, $x^2$ and $x^3$ and other higher powers are very, very small numbers, so we can simply ignore them. We'll just say $e^x \approx 1 + x$. This is a common trick in math and physics. We now get

$$\mathrm{fl}(f(x)) \approx \frac{(x + \varepsilon_1 + x\varepsilon_1)(1+\varepsilon_2)(1+\varepsilon_3)}{x}$$
Again, since $\varepsilon_1$ is small, $x\varepsilon_1$ will be absolutely tiny, so we can also ignore it (along with the products of different $\varepsilon$'s). Thus, the whole thing simplifies to

$$\mathrm{fl}(f(x)) \approx 1 + \frac{\varepsilon_1}{x} + \varepsilon_2 + \varepsilon_3$$
If we plug in $x = 10^{-15}$ and a worst-case $\varepsilon_1 \approx 1.1 \times 10^{-16}$, we get $\mathrm{fl}(f(x)) \approx 1.11$, which is pretty similar to the actual error that we got at the beginning! Cool, now we can see the source of the error: $x$ is in the denominator under $\varepsilon_1$, so as it gets smaller, the error grows.
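As a sanity check (mine, not part of the original derivation), we can compare what f actually returns against the dominant predicted term $\varepsilon_1 / x$ with $|\varepsilon_1| \le 2^{-53}$. I'm measuring the error against the limit value 1, which is fair here because the true value of the fraction only differs from 1 by about $x/2$:

```python
import math

def f(x):
    return (math.exp(x) - 1) / x

u = 2.0 ** -53  # worst-case relative error of one rounding

for x in [1e-9, 1e-12, 1e-15]:
    observed = abs(f(x) - 1)  # error relative to the limit value 1
    predicted = u / x         # dominant epsilon_1 / x term from the analysis
    print(f"x={x:.0e}  observed error={observed:.2e}  predicted bound={predicted:.2e}")
```

In each case the observed error comes in under the predicted worst case.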
Analyzing g
Next, let's find $\mathrm{fl}(g(x))$. Using our favorite assumption again (for the exponential, the subtraction, the logarithm, and the division), we get

$$\mathrm{fl}(g(x)) = \frac{\bigl(e^x(1+\varepsilon_1) - 1\bigr)(1+\varepsilon_2)}{\ln\bigl(e^x(1+\varepsilon_1)\bigr)(1+\varepsilon_3)}\,(1+\varepsilon_4)$$

Note that the same $\varepsilon_1$ appears in both the numerator and the denominator, because g computes $e^x$ only once and reuses it.
This looks pretty scary, but we can use our $e^x \approx 1 + x$ approximation and the same ignore-the-tiny-products trick to reduce it down to

$$\mathrm{fl}(g(x)) \approx \frac{(x + \varepsilon_1)(1+\varepsilon_2)(1+\varepsilon_4)}{\ln(1 + x + \varepsilon_1)(1+\varepsilon_3)}$$
Now we need one last Taylor series, for $\ln(1+z)$. Since this post isn't about calculus (although it kind of is), I'm just going to give you the Taylor series that we need: $\ln(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots$ (this series doesn't converge for all $z$, but whatever). The important thing is that $z^2$ and the higher powers are going to be very small, since the $z$ that we're plugging into $\ln(1+z)$ is $x + \varepsilon_1$. Thus, we can just ignore them and conclude $\ln(1 + x + \varepsilon_1) \approx x + \varepsilon_1$.
Something magical happens when we plug that approximation in. The $x + \varepsilon_1$ in the numerator and the denominator cancel out! We're just left with

$$\mathrm{fl}(g(x)) \approx \frac{(1+\varepsilon_2)(1+\varepsilon_4)}{1+\varepsilon_3} \approx 1 + \varepsilon_2 - \varepsilon_3 + \varepsilon_4$$
Awesome! Now we can clearly see that for small $x$, the resulting error is tiny: each $\varepsilon$ has magnitude at most about $1.1 \times 10^{-16}$, no matter how small $x$ gets. You might be wondering why there isn't an $x$ in this answer. When we plugged in $x = 10^{-9}$, the result wasn't all that close to 1. This is because with $x = 10^{-9}$, our approximations of $e^x \approx 1 + x$ and $\ln(1+z) \approx z$ aren't exactly valid, since the squared terms do contribute a bit. Basically, our answer is only true when $x$ is very small and close to $0$.
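We can double-check g numerically too. Python's math.expm1 computes $e^x - 1$ accurately even for tiny $x$ (the trick above doesn't use it; I'm only pulling it in here as a trustworthy reference), so expm1(x)/x is a good stand-in for the exact value of the fraction:

```python
import math

def f(x):
    return (math.exp(x) - 1) / x

def g(x):
    y = math.exp(x)
    return (y - 1) / math.log(y)

for x in [1e-9, 1e-12, 1e-15]:
    reference = math.expm1(x) / x  # accurate value of (e**x - 1) / x
    print(f"x={x:.0e}  f error={abs(f(x) - reference):.1e}  g error={abs(g(x) - reference):.1e}")
```

Of course, if all you actually want is $(e^x - 1)/x$, calling math.expm1 directly is the boring but reliable option.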
Intuitively, `g` is more accurate than `f` because the errors of the computed $e^x$ in the numerator and denominator end up cancelling each other out, but this is actually a pretty misleading explanation. The only way to see why this happens is to crunch through the analysis.
Anyways, floats are weird.