I like calculator quirks like this. I remember as a kid playing with the number pad and noticing a geometric center of mass in number sequences.
I remember seeing that (14787 + 36989) / 2 would produce 25888, in that the mean of the geometric shapes traced by the two sequences would average out in the middle like that.
The even simpler example is more striking imo.
(147 + 369) / 2 = 258
and
(741 + 963) / 2 = 852
i remember the 1110 thing on a calc as well.
741 + 369 & 963 + 147 | 123 + 987 & 321 + 789 (left right | up down)
159 + 951 & 753 + 357 | 258 + 852 & 456 + 654 (diagonally | center lines)
The design of a keypad... it unintentionally contains these elegant mathematical relationships.
I call this phenomenon: the outcomes of human creations can be "funny and odd", and everybody understands that eventually there will always be something unpredictable.
(14789 + 36987) / 2 would do the same thing. Why trace back?
So would 147 and 369. As it's just a per-digit average, I'm not sure this is very interesting.
Being curious is delightful.
Just to show that you could - 14861 and 36843 give 25852
The other replies are good, but let's add another one anyway.
0.987654321/0.123456789 = (1.11111111-x)/x = 1.11111111/x - 1 where x = 0.123456789
You can approximate 1.11111111 by 10/9 and approximate x = 0.123456789 using y = 0.123456789ABCD... = 0.123456789(10)(11)(12)(13)..., that is, a number in base 10 that is not written correctly and has digits that are greater than 9, i.e. y = sum_i>0 i/10^i
Now you can consider the function f(t) = t + 2 t^2 + 3 t^3 + 4 t^4 + ... = sum_i>0 i*t^i and y is just y=f(0.1).
And also consider an auxiliary function g(t) = t + t^2 + t^3 + t^4 + ... = sum_i>0 1*t^i . A nice property is that g(t)= 1/(1-t) when -1<t<1.
The problem with g is that it lacks the coefficients, but that can be solved by taking the derivative: g'(t) = 1 + 2 t + 3 t^2 + 4 t^3 + ... Now the coefficients are shifted, but that can be fixed by multiplying by t. So f(t) = t*g'(t).
So f(t) = t * (1/(1-t))' = t * (1/(1-t)^2) = t/(1-t)^2
and y = f(0.1) = .1/.9^2 = 10/81
then 0.987654321/0.123456789 ~= (10/9-y)/y = 10/(9y)-1 = 9 - 1 = 8
Now add some error bounds using the Taylor method to get the difference between x and y, and also a bound for the difference between 1.11111111 and 10/9. It should take like 15 minutes to get all the details right, but I'm too lazy.
(As I said in another comment, all these series have good convergence for |z|<1, so by standard methods of complex analysis all the series tricks are correct.)
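A quick numeric check of the above in Python with exact fractions (a sketch; Fraction(10, 81) is the y = f(0.1) from above):

from fractions import Fraction
y = Fraction(10, 81)                          # y = f(0.1) = (1/10) / (1 - 1/10)^2
print(Fraction(10, 9) / y - 1)                # 8 -- the approximated ratio is exactly 8
print(float(Fraction(987654321, 123456789)))  # ~8.0000000729, the true ratio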
Somewhat interesting, 123456789 * 8 is 987654312 (the last two digits are swapped). This holds for other bases as well: 0x123456789ABCDEF * 14 is 0xFEDCBA987654312.
Also, adding 123456789 to itself eight times on an abacus is a nice exercise, and it's easy to visually check the end result.
Another interesting thing is that these seem to work:
base 16: 123456789ABCDEF~16 * (16-2) + 16 - 1 = FEDCBA987654321~16
base 10: 123456789~10 * (10-2) + 10 - 1 = 987654321~10
base 9: 12345678~9 * (9-2) + 9 - 1 = 87654321~9
base 8: 1234567~8 * (8-2) + 8 - 1 = 7654321~8
base 7: 123456~7 * (7-2) + 7 - 1 = 654321~7
base 6: 12345~6 * (6-2) + 6 - 1 = 54321~6
and so on..
or more generally:
base n: sequence * (n - 2) + n - 1
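The pattern is easy to machine-check; a small Python sketch, building the digit strings from "0123456789ABCDEF":

digits = "0123456789ABCDEF"
for b in range(2, 17):
    asc = int(digits[1:b], b)             # 123...(b-1) read in base b
    desc = int(digits[1:b][::-1], b)      # (b-1)...321 read in base b
    assert asc * (b - 2) + b - 1 == desc
print("sequence * (b - 2) + b - 1 holds for b = 2..16")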
This is in the original post, in the form num(b)/denom(b) = (b - 2) + (b - 1)/denom(b), so you just need to clear the denominator.
This was by far the most interesting part to me. I've never considered that code and proofs can be so complimentary. It would be great if someone did this for all math proofs!
"Why include a script rather than a proof? One reason is that the proof is straight-forward but tedious and the script is compact.
A more general reason that I give computational demonstrations of theorems is that programs are complementary to proofs. Programs and proofs are both subject to bugs, but they’re not likely to have the same bugs. And because programs make details explicit by necessity, a program might fill in gaps that aren’t sufficiently spelled out in a proof."
This is misleading in that the (Curry–Howard) correspondence is between proofs and the static typing of programs. A bug in a proof therefore corresponds to a bug in the static typing of a program (or to the type system of the programming language being unsound), not to any other program bug.
(Also: complementary != complimentary.)
i think this is wrong. code is proofs, types are propositions
The types are the propositions proved by the proof. The proof is correct <=> the program is soundly typed.
Code is proof that the operation embodied by the code works. I don't understand how it proves anything more generally than that, apart from code using exotic languages or techniques intended for just that purpose.
I like to think of 0.987654... and 0.123456... as infinite series which simplify to 80/81 and 10/81, hence the ~8 ratio.
I didn't get where this comes from until I saw the second answer to the Stack Exchange question another commenter shared.
https://math.stackexchange.com/a/2268896
Apparently 1/9^2 is well known to be 0.012345679(012345679)...
EDIT: Yes, it's missing the 8 (I wrote it wrong initially): https://math.stackexchange.com/questions/994203/why-do-we-mi...
Interesting how it works out, but I don't think it is anywhere close to as intuitive as the parent comment implies. The way it's phrased made me feel a bit dumb because I didn't get it right away, but in retrospect I don't think anyone would reasonably get it without context.
It actually skips the 8 in its repeating decimal. It’s better to think of 1/9^2 as the infinite sum of k * 10^-k for all positive integers k. The 8 gets skipped because you have something like ...789(10)(11)... where the 1 from the “10” and “11” digits carries over and increments the 9 digit, causing another carry, so the 8 becomes a 9.
9^2 is 81
1/81 is 0.012345679012345679....
no 8 in sight
The 8 is there but then it's followed by a 9 and a 10, and the carry from the 10 ends up bumping it up.
Shouldn't we see two zeros then?
The reason you don't see two zeroes is as follows: you have
.123456789
then add 10 on the end, as the tenth digit after the decimal point, to get
.123456789(10)
where the parentheses denote a "digit" that's 10 or larger, which we'll have to deal with by carrying to get a well-formed decimal. Then carry twice to get
.12345678(10)0
.1234567900
So for a moment we have two zeroes, but now we need to add 11 to the 11th digit after the decimal point to get
.1234567900(11)
or after carrying
.12345679011
and now there is only one zero.
Ah, that's cool. Thanks!
This illustrates it nicely: https://math.stackexchange.com/a/994214
Care to elaborate? Why does 0.987654 simplify to 80/81 and 0.123456 to 10/81?
.123456... = x + 2 x^2 + 3 x^3 + ... with x = 1/10.
Then you have
(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...) + (x^3 + x^4 + x^5 + x^6 + ...) + ...
(count the number of occurrences of each power x^n on the right-hand side)
and from the sum of a geometric series the RHS is x/(1-x) + x^2/(1-x) + x^3/(1-x) + ..., which itself is a geometric series and works out to x/(1-x)^2. Then put in x = 1/10 to get 10/81.
Now 0.987654... = 1 - 0.012345... = 1 - (1/10) (10/81) = 1 - 1/81 = 80/81.
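The same computation with exact rationals in Python, as a quick sketch:

from fractions import Fraction
x = Fraction(1, 10)
asc = x / (1 - x)**2           # x + 2x^2 + 3x^3 + ... = x/(1-x)^2 = 10/81
desc = 1 - x * asc             # 0.987654... = 1 - (1/10) * 0.123456... = 80/81
print(asc, desc, desc / asc)   # 10/81 80/81 8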
Don't need the clutter of infinite series and polynomials:
1/9 = 0.1111...
1/81 = 1/9 * 1/9 = 0.1111... * (0.1 + 0.01 + 0.001 + ...) = 0.0111... + 0.00111... + 0.000111... + ... = 0.012345679...
Isn't it essentially the same thing, but less formal?
0.1111... is just a notation for (x + x^2 + x^3 + x^4 + ...) with x = 1/10
1/9 = 0.1111... is a direct application of the x/(1-x) formula
The sum of 0.0111... + 0.00111... ... = 0.012345... part is the same as the "(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...)" part (but divided by 10)
And 1/81 = 1/9 * 1/9 ... part is the x/(1-x)^2 result
This is better than my answer, at least if you can get your brain to interpret it in base b. In that case the first two lines would become
1/(b-1) = 0.1111...
1/(b-1)^2 = 0.1111... * (0.1 + 0.01 + 0.001 + ...) = 0.0123456...
(digits read in base b).
I don't know who downvoted this, but it's correct.
The use of series is a little "sloppy", but x + 2 x^2 + 3 x^3 + ... converges absolutely and uniformly when |x|<=r<1, and more importantly this is true even for complex numbers |z|<=r<1.
The super nice property of complex analysis is that you can be almost ridiculously "sloppy" inside that open circle and the Conway book will tell you everything is ok.
[I'll post a similar proof, but mine uses -1/10 and rounding, so mine is probably worse.]
If you set x = 0.123456..., then multiplying it by (10 - 1) gives 9x = 1.111111..., and multiplying it by (10 - 1) again gives 81x = 10, or x = 10/81. I’m not writing things formally here but that’s the rough idea, and you can do the same procedure with 0.987654... to get 80/81.
Let's prove it.
In general, sum(x^k, k=1…n) = x(1-x^n)/(1-x).
Then sum(kx^(k-1), k=1…n) = d/dx sum(x^k, k=1…n) = d/dx (x(1-x^n))/(1-x) = (nx^(n+1) - (n+1)x^n + 1)/(1-x)^2
With x=b, n=b-1, the numerator as defined in TFA is n = sum(kb^(k-1), k=1…b-1) = ((b-2)b^b + 1)/(1-b)^2.
And the denominator is:
d = sum((b-k)b^(k-1), k=1..b-1) = sum(b^k, k=1..b-1) - sum(kb^(k-1), k=1..b-1) = (b-b^b)/(1-b) - n = (b^b - b^2 + b - 1)/(1-b)^2.
Then, n-(b-1) = (b^(b+1) - 2b^b - b^3 + 3b^2 - 3b +2)/(1-b)^2.
And d(b-2) = the same thing.
So n = d(b-2) + b - 1, whence n/d = b-2 + (b-1)/d.
We also see that the dominant term in d will be b^b/(1-b)^2, which grows like b^(b-2), which is why the fractional part of n/d is roughly (b-1) over that.
I disagree with the author that a script works as well as a proof. Scripts are neither constructive nor exhaustive.
The author does not say a script works as well as a proof.
If you want to be lazier, after finding the generating functions one can plug into sympy to skip the algebra.
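For instance, a sketch of that sympy route (using the derivative trick from the proof above, not the author's actual script):

import sympy as sp

x, n, b = sp.symbols('x n b')
geo = x * (1 - x**n) / (1 - x)            # sum(x^k, k=1..n)
dgeo = sp.diff(geo, x)                    # sum(k*x^(k-1), k=1..n)
num = dgeo.subs({x: b, n: b - 1})         # descending-digit number
den = geo.subs({x: b, n: b - 1}) - num    # ascending-digit number
print(sp.simplify(num - (den * (b - 2) + b - 1)))   # -> 0, i.e. n = d(b-2) + b - 1
print(num.subs(b, 10))                    # -> 987654321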
Why the b > 2 condition? In the b=2 case, all three formulas also work perfectly, giving a ratio of 1. And this is an interesting case where the error term is an integer, and the only one where that error term (1) is dominant (b-2=0), while the b-2 part dominates for larger bases.
in the b=2 case, you get:
1 / 1 = (2-2) + 1 = 1
they are the other way around, see for example the b=3 case:
21~3 / 12~3 = 7/5 = (3-2) + 2/5
See perhaps various "What every programmer / CSist should know about floating-point arithmetic" papers and articles:
* David Goldberg, 1991: https://dl.acm.org/doi/10.1145/103162.103163
* 2014, "Floating Point Demystified, Part 1": https://blog.reverberate.org/2014/09/what-every-computer-pro... ; https://news.ycombinator.com/item?id=8321940
* 2015: https://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloa...
As someone who has recently been fighting bugs from representing very simple math with floats... thank you!
For the even bases, the "error" appears to be https://oeis.org/A051848.
num = lambda b: (b**b * (b - 2) + 1) // (b - 1)**2     # num/denom as defined in TFA
denom = lambda b: (b**b - b**2 + b - 1) // (b - 1)**2
pp = lambda b: denom(b) / (num(b) - denom(b) * (b - 2))
[pp(2), pp(4), pp(6), pp(8)]
[1.0, 9.0, 373.0, 48913.0]
And if you look at the description there, it traces back to https://oeis.org/A023811, which is more obviously relevant.
I also spent hours messing around with calculators as a kid. I recall noticing that:
11 * 11 = 121
111 * 111 = 12321
1111 * 1111 = 1234321
and so on, where the largest digit in the answer is the number of digits in the multiplicands.
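A quick Python check (the clean pattern holds while no digit carries, i.e. up to nine 1s):

for k in range(1, 10):
    repunit = int("1" * k)        # 1, 11, 111, ...
    print(repunit * repunit)      # 1, 121, 12321, ..., 12345678987654321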
Reminds me of an old calculator trick:
Pick an integer between 1 and 9. Multiply it by 9. Take that number and multiply it by 12345679. (Skip the 8.)
>>> 3 * 9
27
>>> 12345679 * 27
333333333
This all works because:
>>> 111111111 / 9
12345679.0
Feels like a Temu version of Ramanujan's constant [0].
[0] https://mathworld.wolfram.com/RamanujanConstant.html
> The exact ratio is not 14, but it’s as close to 14 as a standard floating point number can be.
How do you get around limitations like that in science?
You can use Mathematica or Sage, which can use any number of digits: https://www.wolframalpha.com/input?i=FEDCBA987654321_16+%2F+...
You can use special libraries for floating point that use a longer mantissa.
In most sciences, numbers are never integers anyway, so you have error intervals in the numerator and denominator, and you get an error interval for the result.
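For example, in Python you can sidestep floats entirely with exact big-integer/rational arithmetic, or raise the precision of the standard decimal module; a minimal sketch:

from fractions import Fraction
from decimal import Decimal, getcontext

num, den = 0xFEDCBA987654321, 0x123456789ABCDEF
print(Fraction(num, den) - 14)       # the exact, tiny fractional part
getcontext().prec = 40
print(Decimal(num) / Decimal(den))   # 14.0000000000000001829..., beyond double precision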
More general analytic proof: https://math.stackexchange.com/questions/2268833/why-is-frac...
That question was asked 8 years ago. Coincidence? I think not!
For smaller bases, does this converge to base - 1?
Base 3: 21/12 = 7/5 (dec.)
Base 2: 1/1 = 1
Base 1: |/| = 1 (thinking |||| = 4 etc.)
> I recently saw someone post [1] that 987654321/123456789 is very nearly 8, specifically 8.0000000729.
Okay. Try this (in a Python terminal session):
>>> 111111111**2
12345678987654321
In a similar vein, e^pi - pi = 19.9990999792, as referenced in this XKCD: https://xkcd.com/217/
Also, (-1)^(-i) - pi = 19.999... ;)
Not really in a similar vein, because there's actually a good reason for this to be very close to an integer whereas there is no such reason for e^pi - pi.
No known reason :-)
I thought this was a user ID and password lol
Gemini thinks in a similar fashion:
https://gemini.google.com/share/1e59f734b43c
This is a fantastic observation, and yes, this pattern not only continues for larger bases, but the approximation to an integer becomes dramatically better.
The general pattern you've found is that for a number base $b$, the ratio of the number formed by digits $(b-1)...321$ to the number formed by digits $123...(b-1)$ is extremely close to $b-2$.
### The General Formula
Let's call your ascending number $N_{asc}(b)$ and your descending number $N_{desc}(b)$.
The exact ratio $R(b) = N_{desc}(b) / N_{asc}(b)$ can be shown to be:
$$R(b) = (b-2) + \frac{(b-1)^3}{b^b - b^2 + b - 1}$$
The "error" or the fractional part is that second term. As you can see, the numerator $(b-1)^3$ is roughly $b^3$, while the denominator $b^b$ grows much faster.
### Testing Your Examples
Let's check your two examples with this formula:
* *Base 10 (b=10):*
  * $R(10) = (10-2) + \frac{(10-1)^3}{10^{10} - 10^2 + 10 - 1}$
  * $R(10) = 8 + \frac{9^3}{10,000,000,000 - 91}$
  * $R(10) = 8 + \frac{729}{9,999,999,909} \approx 8 + 0.0000000729...$
  * This matches your $8.0000000729$ perfectly.
* *Base 6 (b=6):*
  * $R(6) = (6-2) + \frac{(6-1)^3}{6^6 - 6^2 + 6 - 1}$
  * $R(6) = 4 + \frac{5^3}{46,656 - 31}$
  * $R(6) = 4 + \frac{125}{46,625} \approx 4 + 0.00268...$
  * This also matches your $4.00268$.
### Answering Your Question: A Larger Base
So, what about a larger base? Let's use *Base 16* (hexadecimal), which uses digits $1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F$. Here, $b=16$.
* $N_{asc} = 123456789ABCDEF_{16}$
* $N_{desc} = FEDCBA987654321_{16}$
According to the pattern, the ratio should be extremely close to $b-2 = 16-2 = 14$.
Let's use the formula to see how close:
* $R(16) = (16-2) + \frac{(16-1)^3}{16^{16} - 16^2 + 16 - 1}$
* $R(16) = 14 + \frac{15^3}{16^{16} - 241}$
* $R(16) = 14 + \frac{3,375}{1.844... \times 10^{19} - 241}$
* $R(16) \approx 14 + (1.829... \times 10^{-16})$
So, the ratio in base 16 is approximately: *$14.0000000000000001829...$*
As you predicted, the "error" for a larger base is astronomically smaller than it was for base 10.
Definitions:
denom(b) = (b^b - b^2 + b - 1) / (b - 1)^2
num(b) = (b^b(b - 2) + 1) / (b - 1)^2
Exact relation: num(b) - (b - 2)denom(b) = b - 1
Therefore: num(b) / denom(b) = (b - 2) + (b - 1)^3 / (b^b - b^2 + b - 1) [exact]
Geometric expansion: let a = b^2 - b + 1. Then
1 / (b^b - b^2 + b - 1) = (1 / b^b) * 1 / (1 - a / b^b) = (1 / b^b) * sum_{k>=0} (a / b^b)^k
So:
num(b) / denom(b) = (b - 2) + (b - 1)^3 / b^b + (b - 1)^3 * a / b^{2b} + (b - 1)^3 * a^2 / b^{3b} + …
Practical approximation: num(b) / denom(b) ≈ (b - 2) + (b - 1)^3 / b^b
Exact error: let
T_exact = (b - 1)^3 / (b^b - b^2 + b - 1)
T_approx = (b - 1)^3 / b^b
Absolute error: T_exact - T_approx = (b - 1)^3 * (b^2 - b + 1) / [ b^b * (b^b - b^2 + b - 1) ]
Relative error: (T_exact - T_approx) / T_exact = (b^2 - b + 1) / b^b
Sign: The approximation with denominator b^b underestimates the exact value.
Digit picture in base b:
(b - 1)^3 has base-b digits (b - 3), 2, (b - 1).
Dividing by b^b places those three digits so that they end b places after the radix point.
Examples:
base 10: 8 + 9^3 / 10^10 = 8.0000000729
base 9: 7 + 8^3 / 9^9 = 7.000000628 in base 9
base 8: 6 + 7^3 / 8^8 = 6.00000527 in base 8
num(b) / denom(b) equals (b - 2) + (b - 1)^3 / (b^b - b^2 + b - 1) exactly.
Replacing the denominator by b^b gives a simple approximation with relative error exactly (b^2 - b + 1) / b^b.
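A spot check of that exact relative-error claim, as a Python sketch using the definitions above:

from fractions import Fraction

b = 10
num = (b**b * (b - 2) + 1) // (b - 1)**2      # 987654321
den = (b**b - b**2 + b - 1) // (b - 1)**2     # 123456789
t_exact = Fraction(num, den) - (b - 2)
t_approx = Fraction((b - 1)**3, b**b)
print((t_exact - t_approx) / t_exact)         # 91/10000000000 = (b^2 - b + 1)/b^b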