Using a lookup table to factor numbers is a faster algorithm and has complexity O(1).
By the prime counting function, there are about (2^4096)/ln(2^4096), or close enough to 2^4085, prime numbers under 2^4096, which is close enough to 10^1230 that we needn't sweat the piddly factors we may be off by.
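The estimate above can be checked by working entirely in logarithms (the raw quotient is far too large for a float); a quick sketch:

```python
from math import log, log10

# Estimate pi(2^4096) ~ x / ln(x) via the prime number theorem,
# keeping everything as logarithms to avoid huge intermediate values.
x_log2 = 4096                      # x = 2^4096
ln_x = x_log2 * log(2)             # ln(2^4096) = 4096 * ln 2
est_log2 = x_log2 - log(ln_x, 2)   # log2(x / ln x)
est_log10 = est_log2 * log10(2)    # same estimate as a power of ten

print(round(est_log2))   # ~4085, i.e. about 2^4085 primes
print(round(est_log10))  # ~1230, i.e. about 10^1230 primes
```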
I'd tell you to "go ahead and start computing that and tell me when you're done", however, I like the universe I live in, and the entire information content of the boundary of the observable universe is something like 4*10^122 bits [1]. So you're talking about creating a black hole vastly, vastly, vastly, 10-to-the-power-of-thousand+ times larger than the observable universe, which some of your fellow universe residents may find to be a hostile act and we probably aren't going to let you finish.
While you can define such a table as having "O(1)" lookup properties, in the sense that the vast, vast, vast, dwarfing-the-observable-universe-by-hundreds-of-orders-of-magnitude number of light years you'd have to travel for the answer to a given query is, on average, pretty much the same for every lookup, it's constant in a rather useless sense.
[1]: https://math.ucr.edu/home/baez/information.html
Faster than what? Are you factoring in the time to build the lookup table for primes greater than a googol?
The idea is to do it once. A lot of it can likely be compressed while keeping access time log-linear. Store it passively, so it needs no power except when reading, like mask ROM or a CD-ROM.
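The "build once, read forever" scheme can be sketched at toy scale: a precomputed, sorted, read-only factor table queried by binary search, giving O(log n) comparisons per lookup. Everything here is illustrative (the real 4096-bit table is, as discussed, physically impossible):

```python
from bisect import bisect_left

# Stand-in for the passive mask-ROM table: sorted composite inputs
# alongside a nontrivial factor of each.
KEYS    = [4, 6, 8, 9, 10, 12, 14, 15]
FACTORS = [2, 2, 2, 3,  2,  2,  2,  3]

def smallest_factor(n):
    """Binary-search the read-only table; O(log n) comparisons."""
    i = bisect_left(KEYS, n)
    if i < len(KEYS) and KEYS[i] == n:
        return FACTORS[i]
    return None  # prime, or outside the table

print(smallest_factor(15))  # 3
print(smallest_factor(13))  # None
```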
Faster than exp(((64/9)^(1/3) + o(1)) * (ln n)^(1/3) * (ln ln n)^(2/3)), the heuristic running time of the general number field sieve. The time spent building the table is not counted, as that happens ahead of time.
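For a sense of scale, that GNFS cost can be evaluated for a 4096-bit modulus, ignoring the o(1) term and reporting the result as a power of ten since the raw value overflows a float:

```python
from math import log

# Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)], o(1) term dropped,
# for a 4096-bit n. We compute ln(cost) and convert to base 10.
n_bits = 4096
ln_n = n_bits * log(2)
c = (64 / 9) ** (1 / 3)
ln_cost = c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3)
print(round(ln_cost / log(10)))  # ~47: roughly 10^47 operations
```

Roughly 10^47 operations is astronomically expensive, but it's still over a thousand orders of magnitude cheaper than touching a 10^1230-entry table.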
Ok, where are you going to keep those yottabytes of tables?
Chainsaws? ICMP echo? Tetris? There are a lot of options for storage available that we're really not making good use of at the moment. [1]
[1] http://tom7.org/papers/murphy2022harder.pdf