Please, no more comments to the effect of "I can define a much larger number in only 1 bit". What makes the post (hopefully) interesting is that I consider programs for computing huge numbers in non-cheating languages that are not specifically equipped for doing so.
This feels like the computer science version of this article: https://www.scottaaronson.com/writings/bignumbers.html
Whatever the largest number you can express in your system, I can represent a larger one in only one bit, using the following specification:

0 = your largest number
1 = your largest number + 1
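In code, that "specification" is nothing but a lookup that smuggles the big number into the mapping itself, which is exactly the cheat being objected to. A hypothetical sketch in Python (`YOUR_LARGEST` is a stand-in of mine, not anything from the post):

```python
# A "1-bit" encoding that cheats: all the information lives in the
# decoder, not in the bit being decoded.
YOUR_LARGEST = 2**64 - 1  # stand-in for whatever your system maxes out at

def decode(bit: int) -> int:
    # 0 -> your largest number, 1 -> your largest number + 1
    return YOUR_LARGEST + bit
```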
To be pedantic, that is an instance of the Berry paradox [1], and no, you cannot [2], as that would violate Gödel's incompleteness theorems.
[1] https://en.wikipedia.org/wiki/Berry_paradox
[2] https://terrytao.wordpress.com/2010/11/02/the-no-self-defeat...
Speaking precisely, your clarification was didactic, not pedantic.
Oh yeah well what about their largest number plus your one plus my infinity?
It all goes over my head, but what does the distribution of values look like? E.g., for unsigned integers it's completely flat; for floating point there are far too many zeros and most of the values cluster around 0. What do these systems end up looking like?
I'm going to agree with the downvoted people and say that this sort of approach is largely meaningless if you allow arbitrary mappings. IMO the most reasonable mathematical formulation, given the structure of the integers (in the sense of, e.g., Peano), is that to truly represent an integer you have to represent zero, and every other representable number must have a representable predecessor; i.e., to say you can represent 5 you need 0, 1, 2, 3, 4, and 5 to all be representable. By a straightforward counting argument, 2^64-1 is then the largest representable number; in other words, the obvious answer is right.
As I've replied several times before, we don't allow arbitrary mappings. We allow computable mappings, but consider only obviously non-cheating languages, like Turing machines, lambda calculus, Linux's bc, or any existing programming language that is not geared toward outputting insanely large numbers.
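To illustrate that spirit with a sketch of my own (Python standing in for bc; the byte counts are mine, not the post's): even a non-cheating language lets a tiny program denote an enormous number, because its ordinary operators do the heavy lifting. The 7-byte (56-bit) expression `9**9**9` parses as 9**(9**9), a number with roughly 370 million decimal digits:

```python
import math

# "9**9**9" is 9**(9**9) because ** is right-associative:
# 9 raised to the 387,420,489th power.
exponent = 9**9                     # 387,420,489
digits = exponent * math.log10(9)   # decimal digits of 9**(9**9)
print(f"9**9**9 has about {digits:,.0f} digits")  # ~369,693,100
```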
I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now if you can make these claims in a mathematically rigorous way (i.e. without relying on a particular mapping like Turing machines / lambda calculus, and without silly "up to a constant factor" cheats), then that would be more interesting.
What's the biggest up-arrow notation number you can spell with 64 bits?
https://mathworld.wolfram.com/KnuthUp-ArrowNotation.html
`9↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑9` seems like a reasonable guess (barring encoding cheats/trickery like @masfuerte commented!)
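For reference, the up-arrow recursion behind that guess is short enough to write out (a sketch in Python; only the tiniest inputs ever terminate in practice):

```python
def knuth(a: int, n: int, b: int) -> int:
    """Knuth's a ↑^n b; blows up immediately for anything non-toy."""
    if n == 1:
        return a**b          # one arrow is plain exponentiation
    if b == 0:
        return 1             # base case of the recursion
    return knuth(a, n - 1, knuth(a, n, b - 1))

print(knuth(2, 2, 3))  # 2↑↑3 = 2**2**2 = 16
print(knuth(2, 3, 3))  # 2↑↑↑3 = 2↑↑4 = 65536
```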
Given time, this will output a bigger number, and it is only 48 bits:
That is not a number, that is infinity.
The (implicit) rules of the game require the number to be finite. The reason is not a quibble over whether infinity counts as "the largest", but that the game of "write infinity in the smallest amount of {resource}" is trivial and uninteresting. (At least for any even remotely sensible encoding scheme. Malbolge [1] experts may chime in as to how easy it is to write infinity in that language.) So if you like, pretend we played that game already and we've moved on to this one. "Write infinity" is at best a warmup for this game.
(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too: a "number" defined to exceed anything anyone else can encode is larger than every finite value, so it denotes no finite number at all. The full argument is somewhat longer, but not that difficult.)
[1]: https://esolangs.org/wiki/Malbolge
56 bits, but it's BASIC on a Commodore 64:
Bits == entropy.
Everything else is wordplay.
I can do you one better. I can represent the largest number with a single binary bit.
I can do it in half a bit
Slow down there, Mr. Zip File.
Can you give a formulation of the problem you are trying to answer?
To find the largest number that is computable by a program of at most 64 bits in a non-cheating language; i.e. one that's not geared toward producing large numbers.
Do you have a mathematical formulation, or?
Ultimately you seem to pick a random definition of computing and size and then work with that?
Once you allow any format the question is completely meaningless. You can just define 0 to mean any number you want.
The post addresses this very issue:
> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.
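To make the Busy Beaver game concrete, here is a minimal simulator sketch in Python running the well-known 2-state, 2-symbol champion, which writes 4 ones and halts after 6 steps (the harness code is mine; the transition table is the standard BB(2) machine):

```python
def run(machine, state="A"):
    """Run a 2-symbol Turing machine on a blank tape until it halts."""
    tape, pos, steps = {}, 0, 0
    while state != "H":  # "H" marks the halting state
        write, move, state = machine[state, tape.get(pos, 0)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps

# BB(2) champion: (state, symbol) -> (write, move, next state).
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (4, 6): four ones written, six steps taken
```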
FWIW, w218 equals 627,421,742,590,461,754, or 0x08B5_0CC0_2B76_073A, in case someone would like to memorize it or something.
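Those two forms do agree; one line of Python (which accepts the same underscore digit grouping) verifies it:

```python
assert 0x08B5_0CC0_2B76_073A == 627_421_742_590_461_754
```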
Following the bytewise encoding convention of BLC8 [1], w218's binary encoding `0100 0101 1010 1000 0110 0110 0000 0001 0101 1011 1011 0000 0011 1001 1101 0` gets padded with 3 arbitrary least significant bits, say 000, and becomes 45A8_6601_5BB0_39D0 in hexadecimal.
[1] https://www.ioccc.org/2012/tromp/
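A quick way to reproduce that packing (my own throwaway script, not from the post):

```python
# Pack the 61-bit program into 8 bytes, MSB-first, zero-padded at the end.
bits = ("0100 0101 1010 1000 0110 0110 0000 0001 "
        "0101 1011 1011 0000 0011 1001 1101 0").replace(" ", "")
padded = bits + "0" * (-len(bits) % 8)  # pad the 61 bits up to 64
print(f"{int(padded, 2):016X}")         # 45A866015BB039D0
```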
> The largest number (currently known to be) representable in 64 bits is w218
In my representation the bit pattern 00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000001 stands for the number w218+1.
I win!
> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.
Sorry; no winning for cheaters :-(