What kind of "study" would this even be? I thought the point of the infinite monkey thing was to talk about regular distributions and eventualities of every possible string showing up. I don't think anyone claimed that any relatively-long string would show up in any reasonable amount of time necessarily, but it's kind of a bizarre assertion to make.
It's sort of like stating the runtime efficiency of a Bogosort; the runtime efficiency is unbounded. Theoretically any list could be sorted on the first run, but it could also just keep sorting in an unbounded fashion for forever, though given enough time (which could be tens of trillions of years or longer), it will eventually be sorted if we assume regular distribution of random numbers.
The theorem is called "infinite monkey theorem". That means infinite amount of time. A googol years is not infinite, it's infinite times smaller than that. In an infinite amount of time they will write Shakespeare works (if they type randomly enough and not with some bias like never typing certain combinations of letters)
Also, at least the article could have said what the actual probability is then? Are we talking 1e-500, 1e-1000000, 1 / googolplex, or what?
EDIT: of the above examples 1e-1000000 is the closest I think (in order of magnitude of exponent), based on something like 30^5000000 divided through some amount of years assuming ~5 characters per word. So perhaps "If every atom in the universe was a universe in itself" won't get us there, but recursively repeat that process a million times and we do get there
Reverse Gellman effect: you witness pedants in your own domain surgically tear down a harmless puff piece and immediately know this must be happening 24/7 in every other domain where fun is modestly attempted
Yes, but the theorem is meant to explain what can happen in the universe over long time scales.
The point is that there isn't enough time in the universe for all the random stuff to happen that scientists pin on random chance. The theorem was memorable, but a cop out.
I don't think I've ever heard anyone use the infinite monkey example outside of theoretical mathematics; I'm sure someone has, but when I've heard it, it was to describe regular distribution of random.
I think it's a very bizarre thing for these mathematicians to act like they discovered something that I don't think anyone really disputed.
> Yes, but the theorem is meant to explain what can happen in the universe over long time scales.
I never understood it that way. I always interpreted it as a fun way to explain the mathematical truth that no matter how low a probability is, as long as it is technically above 0, the event it describes WILL eventually occur given enough time/trials/etc.
I can't see anybody ever interpreting it as a statement about the real, actual, universe. Just like I don't think anybody truly believes that flipping a real coin with non-identical sides (such as every currency coin I've ever used) must have EXACTLY 50% probability of landing on either side. Surely people can separate the mathematical ideal/concept from constraints of physical reality.
Perhaps not in our observable universe, but in the space of all possible physics that could take place and / or beyond the observable universe if it actually is infinitely big there, perhaps it can? (as in, anything can happen, there will be copies of the Earth with subtle differences somewhere there, Boltzmann brains appearing purely out of quantum fluctuations, etc...)
Yeah, I might be naive, since they're professional mathematicians and I'm not, but their conclusion feels like saying that they "tested" the concept of infinity and found that it's simply unreachable. To me, the infinite monkey theorem is an abstract idea and not something that needs to be "tested", though it's certainly fun to run the numbers.
They didn't "test" the concept of infinity. They said that the results of the infinite monkey theorem are well known, and they wanted to see what happens in the finite case.
Empirically a monkey did already creat the works of Shakespeare in a mere 13billion years. This is the time it took for the universe to form, earth to evolve to create a monkey that could write all those words down. The entire process from start to finish is way more complicated and random than anyone typing 26 character randomly into the keyboard. But I guess we are talking about uniform distributions here.
The reporting on this paper is crazy. In the highlights section of the paper’s page on the journal’s website it says “The long-established result of the Infinite Monkeys Theorem is correct, but misleading.” but the Guardian article says “Australian mathematicians call into question the ‘infinite monkey theorem’ in new research on old adage”, which I read as contradictory to what’s in the paper.
Sounds to me that the more interesting question would be graphing out the relationship of time and the amount of monkeys needed for one to write Shakespeare.
What if something in monkey psychology causes them to hit the keyboard with their fist or palm every five minutes or so of typing? What if they regularly lapse into just pressing one key over and over? It seems very unlikely to me that even with infinite monkeys a work would be generated, because they're not actually random number generators.
So let’s say we’ve got 28 possible characters (alphabet + period + space), you have a 1/28 chance of getting the first character right. So it follows that a string of length n is (1/28)^n because you have to hit each character correctly in a row. If we can do x guesses a second, and we know on average how many tries it takes on average to get our string right (28^n), we can divide and get an estimate of time. Though we could do it faster, or slower depending on our luck.
With multiple searchers it’s trickier, but we use the probability complement (probability of all possible events must add to 100%) to figure out the chances that our searchers all miss, and subtract that chance by 1. This gives the chance of at least one agent getting it right. Two searchers looks like 1-(27/28)^2 for the first char, and you can follow the same logic for any length string.
Your answer will heavily depend on your assumptions - how fast the computers guess, what they can guess, etc. But searching in parallel would speed things up dramatically. If you had like 100 computers searching simultaneously, 3 or 4 would likely get the first char right every time, giving you a big speed up on the problem.
Was this seriously a question? I mean, this is something a college student or advanced high schooler could calculate.
The law of large numbers is still correct. Nowhere is there a requirement that the large numbers be smaller than the various physical constants of the universe.p
From the original mathematics paper (open access):
in the long-running television show The Simpsons, the industrialist Charles Montgomery Burns attempted this with multiple monkeys chained to typewriters but gave up when the best one produced was the near-Dickensian “It was the best of times, it was the blurst of times”.
new HN game: predict which order the archive.is link or pedantry comes in. if you get the order right, you get one point. if you get sucked into a pedantic argument, you lose
Specialists in the business tier, where monkey business usually happens. Members of this clan have a traditional fighting style which involves throwing wrenches at their opponents; this is tolerated because it is not the worst thing that Monkeys have been known to throw.
There is a theory that a finite number of Laughing Monkeys pounding away randomly at a finite number of keyboards will eventually get a clean compile. The existence of the Standard PHP Library would seem to imply that this has already happened.
That's only worth mentioning because the PHP stdlib is mostly written in C. The general proposition (for any language) is trivially true since 93% of ink blobs are valid Perl programs.
The basic point is an exercise in at least one older freshman text (Kittel & Kroemer?) but at least this paper does go on beyond and finds some strings (eg "I chimp, therefore I am") that are likely to be typed before the heat death of the universe, if not before extinction of chimpanzees.
I mean, in several models, including the current most-likely one (last I heard) it effectively is finite. Eventually no new planets or stars can get created, and then some time after that, no atoms can exist.
Will the "universe" exist after that? Kind of a philosophical question, but nothing interesting will exist _in_ the universe.
The paper doesn’t say that either. They only call the original theorem misleading. I understood that their point is the same as yours - that infinity won’t happen in a finite time period.
> The long-established result of the Infinite Monkeys Theorem is correct, but misleading.
> Non-trivial text generation during the lifespan of our universe is almost certainly impossible.
> The Finite Monkeys case shows there will never be sufficient resources to generate Shakespeare.
You're comparing chalk and cheese to an extent, a perfect copy of Shakespeare's work can exist in exactly one way but DNA isn't required to be copied perfectly, DNA can change without necessarily killing the organism or otherwise rendering it unable to reproduce. Additionally evolution is not a random process, while mutations are random the process by which the mutations are selected is not - it depends on whether the mutation is a reproductive advantage for the organism in the context of their ecological niche. Most mutations are going to be neutral or disadvantageous so they won't be selected. Additionally you're not mutating the whole DNA sequence at once, just parts of it.
I would like to see someone use the most generous parameters possible and simulate the statistics of the first cple amino acids evolving into our homo sapiens sapiens chromosome set randomly over the course of just 4b years.
(selection is irrelevant here; we could even just model it as always-select-for-beneficial-mutation for extra graciousness).
Mind you, even if a beneficient mutation occurs, it is still not guaranteed to be passed onto the next generation. It would have to randomly occur over and over again.
I'm not confident that it's gonna be very convincingly in favour of the random mutation model.
>Mind you, even if a beneficient mutation occurs, it is still not guaranteed to be passed onto the next generation. It would have to randomly occur over and over again.
Why would it have to re-occur again and again? If the mutation is beneficial to the reproductive success of an organism, by definition their mutation will be passed on with no need for it to occur again by chance. The percentage of the population with the mutation would grow, potentially very quickly if it was a strong advantage over the previous generations. You're also characterising evolution as if to get from amoebas to humans you require a series of billions of independent coin-flips which isn't an accurate model, rather each generation is building on that which preceded it. Take the eye as an organ for example, it predates humanity by a very long time and was not independently evolved by humans - there was no need for it to have been since those genes already existed.
The predictions of evolution are generally supported by the available evidence at any rate, if you'd like to overturn it you'll have to propose a theory that explains the evidence more compellingly.
The actual problem is upstream of that at the abiogenesis stage.
For evolutionary selection to occur the machinery for selection must exist. Specifically information storage (DNA/RNA), replication(polymerases) and actioning (transcription) all are needed, and must continue to be able to exist for long enough to matter.
Without selection pressure and inheritance you're just left with requiring a big enough universe and enough time for randomness not to matter.
The critical difference is evolutionary only needs relatively short sequences to be randomly generated, and there’s many valid sequences.
Building a book by generating a single random text string is practically impossible, but if you lock in any given word that’s correct and retry that’ll quickly get something. You’ll have most 8 letter or shorter words correct after 1 trillion runs, and many 9 letter words. It wouldn’t be done, but someone could probably read and understand the work at that point.
Further it’s possible for a few even longer words to match at that point. People think it’s unlikely that specific sequences happened randomly, but what they ignore is all the potential sequences that didn’t occur.
The many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
> You’ll have most 6 letter or shorter words correct after 1 billion tries.
You think that's "quick" for dna which is made up of billions of 6-base-sequences and for a species that can only reproduce sexually once every decade or so at best?
> many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
There’s finite invalid sequences, DNA isn’t infinitely long. It’s also not a question of valid or invalid we live with sub optimal DNA, so yea most people aren’t born with some new beneficial mutation. However, not winning the lottery isn’t the same things a dying, and even smaller wins still benefit us.
As to our long reproductive cycle, there’s a reason we share so much in common with other primates. Most of our DNA has been worked out for a long time. We share 98% of our DNA with pigs, and 85% of it is identical in mice which has practical application in drug development. Of note common ancestors were more closely related to us because both branches diverged.
Hell 60% is shared with chickens, and half of it’s shared with trees.
> only reproduce sexually once every decade or so at best?
Many sperm and fetuses die from harmful mutations, live babies are late in the process here. Also, because order doesn’t matter you get multiple chances to roll the same sequence for every birth.
PS: There’s also quite a bit of viral insertion into our DNA, it’s mostly sexual reproduction but we have some single cell ‘ancestors’ in our recent history.
> The many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
I dont know about the relative numbers but I don't think you do either? Are you begging the question or can you quantify?
> You think that's "quick" for dna which is made up of billions of 6-base-sequences and for a species that can only reproduce sexually once every decade or so at best?
We didn't start from scratch, we are very very late in the game, and the groundwork for us was laid by millions of other species that can replicate quickly, often very quickly.
Unless you believe the Earth is only six thousand years or so old, in which case we might as well leave the discussion where it is.
You are just repeating the point that was being argued against.
If the lifetime of the universe isn't enough to randomly produce Shakespeare, 3-4 billions of years are cute but useless to randomly produce anything even near as sophisticated as our 46 chromosome set.
I imagine Dawkins still just repeats the conjecture "but it's 3-4 billion years, anything can happen!"
A work of Shakespeare is a single specific target, that's why it takes so long to hit it - one letter out doesn't cut it. DNA meanwhile is a general purpose animal kit that has many billions of 'correct answers'. It's not about sophistication.
Just give them a typewriter that records the position of where the key is pressed to a million bits accuracy, convert that to a series of ASCII characters (through a hash function), and the monkeys will type very fast.
Yeah exactly, it's a thought experiment about infinities... Is someone going to publish a study concluding that Hilbert's Hotel couldn't be built on Earth too?
Why are people even spending money on this research? What are they trying to prove/disprove?
But universe as we understand it is endless / infinite on time scale, no? [1] has some nice projections how even basic quantum stuff eventually breaks down, yet nothing is really ending, if you don't count matter as we know it.
These Mathematicians are very selective about which aspects of the universe they want to regard and which aspects they want ignore. Like, yeah you can take an imaginary situation and imagine that it's possible, or impossible. It can go any way because it's imaginary. No need to waste time on math, you can just imagine any outcome you like.
In reality, however, I think if you had a system that feeds a monkey a treat every time it strikes a letter key that corresponds to the next letter in a Shakespere play displayed on a monitor, you would eventually have some Shakespeare typed out by a monkey.
One could also just have a monkey mash keys for a while, and then after removing all the unnecessary letters you'd be left with a Shakespearian play.
Or, with a group of monkeys and plenty of time one could use natural selection to evolve them into a literate species that could handle the task easily. This is the method currently in use for the typing of Shakespeare. It has already been done, many times over.
> One could also just have a monkey mash keys for a while, and then after removing all the unnecessary letters you'd be left with a Shakespearian play.
What kind of "study" would this even be? I thought the point of the infinite monkey thing was to talk about regular distributions and eventualities of every possible string showing up. I don't think anyone claimed that any relatively-long string would show up in any reasonable amount of time necessarily, but it's kind of a bizarre assertion to make.
It's sort of like stating the runtime of Bogosort: it's unbounded. Theoretically any list could be sorted on the first shuffle, but it could also just keep shuffling forever; still, given enough time (which could be tens of trillions of years or longer), it will eventually be sorted if we assume a uniform distribution of random numbers.
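To make the Bogosort comparison concrete, here's a minimal Python sketch (my own illustration, not from the article); the expected number of shuffles grows roughly like n!, but any individual run is unbounded:

    import random

    def is_sorted(xs):
        # True if the list is in non-decreasing order.
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bogosort(xs):
        # Shuffle until the permutation happens to be sorted.
        # Expected shuffles ~ n! for n distinct elements, but any single
        # run could finish immediately or keep going arbitrarily long.
        shuffles = 0
        while not is_sorted(xs):
            random.shuffle(xs)
            shuffles += 1
        return shuffles

    print(bogosort([5, 3, 1, 4, 2]))  # tiny lists finish fast; big ones effectively never do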
The theorem is called "infinite monkey theorem". That means infinite amount of time. A googol years is not infinite, it's infinite times smaller than that. In an infinite amount of time they will write Shakespeare works (if they type randomly enough and not with some bias like never typing certain combinations of letters)
Also, the article could at least have said what the actual probability is, then. Are we talking 1e-500, 1e-1000000, 1/googolplex, or what?
EDIT: of the above examples, 1e-1000000 is the closest, I think (in order of magnitude of the exponent), based on something like 30^5000000 divided by some number of years, assuming ~5 characters per word. So perhaps "if every atom in the universe were a universe in itself" won't get us there, but recursively repeat that process a million times and we do get there.
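For what it's worth, a quick back-of-envelope in Python under the parent's assumptions (roughly 1 million words at ~5 characters each and a 30-key keyboard; these figures are mine, not the paper's):

    import math

    # Assumed inputs (not from the paper): ~1,000,000 words * ~5 characters,
    # each drawn uniformly from a 30-symbol keyboard.
    chars = 1_000_000 * 5
    alphabet = 30

    # P(one attempt reproduces the whole text) = alphabet ** -chars.
    # Work in log10, since the number itself is far too small to represent.
    log10_p = -chars * math.log10(alphabet)
    print(f"probability per attempt ~ 10^{log10_p:,.0f}")  # about 10^-7,400,000

Dividing through by any plausible number of attempts per year barely dents an exponent of that size, which seems to be the comment's point.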
Reverse Gell-Mann effect: you witness pedants in your own domain surgically tear down a harmless puff piece and immediately know this must be happening 24/7 in every other domain where fun is modestly attempted.
Yes, but the theorem is meant to explain what can happen in the universe over long time scales.
The point is that there isn't enough time in the universe for all the random stuff to happen that scientists pin on random chance. The theorem was memorable, but a cop out.
I don't think I've ever heard anyone use the infinite monkey example outside of theoretical mathematics; I'm sure someone has, but when I've heard it, it was to describe a uniform distribution of randomness.
I think it's a very bizarre thing for these mathematicians to act like they discovered something that I don't think anyone really disputed.
> Yes, but the theorem is meant to explain what can happen in the universe over long time scales.
I never understood it that way. I always interpreted it as a fun way to explain the mathematical truth that no matter how low a probability is, as long as it is technically above 0, the event it describes WILL eventually occur given enough time/trials/etc.
I can't see anybody ever interpreting it as a statement about the real, actual, universe. Just like I don't think anybody truly believes that flipping a real coin with non-identical sides (such as every currency coin I've ever used) must have EXACTLY 50% probability of landing on either side. Surely people can separate the mathematical ideal/concept from constraints of physical reality.
Perhaps not in our observable universe, but in the space of all possible physics that could take place, and/or beyond the observable universe if it actually is infinitely big out there, perhaps it can? (As in, anything can happen: there will be copies of the Earth with subtle differences somewhere out there, Boltzmann brains appearing purely out of quantum fluctuations, etc.)
Yeah, I might be naive, since they're professional mathematicians and I'm not, but their conclusion feels like saying that they "tested" the concept of infinity and found that it's simply unreachable. To me, the infinite monkey theorem is an abstract idea and not something that needs to be "tested", though it's certainly fun to run the numbers.
They didn't "test" the concept of infinity. They said that the results of the infinite monkey theorem are well known, and they wanted to see what happens in the finite case.
Reminds me of Archimedes calculating the number of grains of sand to fill the universe
Empirically, a monkey already did create the works of Shakespeare in a mere 13 billion years. That's how long it took for the universe to form and for the Earth to evolve a monkey that could write all those words down. The entire process from start to finish is way more complicated and random than anyone typing 26 characters randomly on a keyboard. But I guess we are talking about uniform distributions here.
In graduate school we had to answer this on a test question and it took all of 5 minutes...should have published it, I guess
200,000 is quite a bit smaller than infinite.
The reporting on this paper is crazy. In the highlights section of the paper’s page on the journal’s website it says “The long-established result of the Infinite Monkeys Theorem is correct, but misleading.” but the Guardian article says “Australian mathematicians call into question the ‘infinite monkey theorem’ in new research on old adage”, which I read as contradictory to what’s in the paper.
Clearly not infinite monkeys or infinite time.
If either were truly infinite not only would this be possible, it would be mandatory and would occur infinite times.
Sounds to me that the more interesting question would be graphing out the relationship of time and the amount of monkeys needed for one to write Shakespeare.
That's done in the paper. Using a log(log) scale for time, measured in number of heat deaths of the universe. It's the most amusing part of the study.
They have arbitrarily shifted the goal posts to fit their conclusion:
> working out that even if all the chimpanzees in the world were given the entire lifespan of the universe, they would “almost certainly” never
Who assumed the original adage had the constraint of the universe’s lifetime?
Just read the paper directly: https://www.sciencedirect.com/science/article/pii/S277318632.... They do no such thing.
What if something in monkey psychology causes them to hit the keyboard with their fist or palm every five minutes or so of typing? What if they regularly lapse into just pressing one key over and over? It seems very unlikely to me that even with infinite monkeys a work would be generated, because they're not actually random number generators.
If you had infinite monkeys, what is the probability that one of them is actually a random number generator?
In the set of all infinite monkeys there are infinite monkeys that are random number generators.
Oh, but these are idealized monkeys that act like perfect random generators. Almost, but not quite, unlike spherical cows.
A monkey with keyboard already wrote Shakespeare, duh. Other monkeys called it Shakespeare.
In other words: it's "easier" to make better monkeys than to scale up to produce something valuable.
Or in other other words: random generation is highly unlikely to produce something valuable.
Or in other other other words: the amount of useful information in "the library of babel" is minuscule compared to noise.
In other words the only way we know of how significant complexity arises in the universe is (ultimately) through evolution.
pedantry: that monkey was ~250 years before keyboards https://en.wikipedia.org/wiki/Typewriter#Hansen_Writing_Ball
Hurdy-gurdies with keyboards have been around since long before then.
I somehow doubt Shakespeare had access to a typewriter...
Someone really needs to go dig up his bones and give him a typewriter. Too bad Halloween was yesterday.
\s
Apes, thank you very much.
Ook!
Did anyone get the number of that donkey cart?
How long would it take a computer or a set of computers to pick random characters and generate the complete work?
How long for 1 paragraph? How long for 1 page? How long for 1 chapter? Etc.
Does it get harder/slower by some factor?
Would be an interesting exercise.
So let’s say we’ve got 28 possible characters (alphabet + period + space); you have a 1/28 chance of getting the first character right. It follows that the chance of getting a whole string of length n right is (1/28)^n, because you have to hit each character correctly in a row. If we can do x guesses a second, and we know how many tries it takes on average to get our string right (28^n), we can divide and get an estimate of the time. Though we could do it faster or slower, depending on our luck.
With multiple searchers it’s trickier, but we use the probability complement (the probabilities of all possible events must add to 100%) to figure out the chance that our searchers all miss, and subtract that chance from 1. This gives the chance of at least one agent getting it right. Two searchers looks like 1-(27/28)^2 for the first char, and you can follow the same logic for a string of any length.
Your answer will heavily depend on your assumptions - how fast the computers guess, what they can guess, etc. But searching in parallel would speed things up dramatically. If you had, say, 100 computers searching simultaneously, 3 or 4 would likely get the first char right every time, giving you a big speed-up on the problem.
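A minimal sketch of that estimate in Python (the 28-character alphabet follows the comment above; the string length and guessing speed are made-up illustrative parameters, not anything from the article):

    import math

    ALPHABET = 28  # a-z + space + period, per the comment above

    def expected_years(n, guesses_per_sec=1_000_000_000, searchers=1):
        # Expected attempts to hit one specific n-character string is ALPHABET**n
        # (geometric distribution); independent parallel searchers divide the
        # wall-clock time. Kept in log10 so the huge exponents don't overflow.
        log10_attempts = n * math.log10(ALPHABET)
        log10_seconds = log10_attempts - math.log10(guesses_per_sec * searchers)
        return log10_seconds - math.log10(60 * 60 * 24 * 365)  # -> log10(years)

    def p_at_least_one(searchers, p_single=1 / ALPHABET):
        # The complement trick: 1 - P(every searcher misses).
        return 1 - (1 - p_single) ** searchers

    print(f"40-char string, one machine at 1e9 guesses/s: ~10^{expected_years(40):.0f} years")
    print(f"same string, 100 machines in parallel:        ~10^{expected_years(40, searchers=100):.0f} years")
    print(f"P(at least one of 100 gets the first char):   {p_at_least_one(100):.2f}")

Which lines up with the intuition above: parallelism buys a constant factor, while every extra character multiplies the work by 28.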
Yes? The scaling factor is 2^(number of bits).
https://libraryofbabel.info/
it's all there regardless
Was this seriously a question? I mean, this is something a college student or advanced high schooler could calculate.
The law of large numbers is still correct. Nowhere is there a requirement that the large numbers be smaller than the various physical constants of the universe.
it was the best of times, it was the blurst of times!?!?!
Relevant Dankmus: https://www.youtube.com/watch?v=9uYhIiW6lok
Wrong writer (Dickens).
From the original mathematics paper (open access):
> in the long-running television show The Simpsons, the industrialist Charles Montgomery Burns attempted this with multiple monkeys chained to typewriters but gave up when the best one produced was the near-Dickensian “It was the best of times, it was the blurst of times”.
https://www.sciencedirect.com/science/article/pii/S277318632...
“to be, or not to be … that is the gazornenplat??”
yes, i understand the difference between Shakespeare and Dickens, it's a Simpsons reference to the infinite monkey theorem in play here
you must be a devil with the ladies
come for the archive.is link, scroll for the pedantry
theguardian has no paywall
< slow clap >
new HN game: predict which order the archive.is link or pedantry comes in. if you get the order right, you get one point. if you get sucked into a pedantic argument, you lose
we need more monkeys
I was recently reminded of The Codeless Code. In that set of stories, there's the Laughing Monkey Clan.
http://thecodelesscode.com/names/Laughing+Monkey+Clan
I recommend reading from the start ( http://thecodelesscode.com/case/1 ) and checking the links (since they sometimes have background information - http://thecodelesscode.com/names/Elephant%27s+Footprint+Clan ) and remembering the image mouseovers. From that page:
> Specialists in the business tier, where monkey business usually happens. Members of this clan have a traditional fighting style which involves throwing wrenches at their opponents; this is tolerated because it is not the worst thing that Monkeys have been known to throw.
> There is a theory that a finite number of Laughing Monkeys pounding away randomly at a finite number of keyboards will eventually get a clean compile. The existence of the Standard PHP Library would seem to imply that this has already happened.
That's only worth mentioning because the PHP stdlib is mostly written in C. The general proposition (for any language) is trivially true since 93% of ink blobs are valid Perl programs.
https://www.mcmillen.dev/sigbovik/
(And wow! Search engines are getting even less useful with time. It's incredible!)
link to paper: https://www.sciencedirect.com/science/article/pii/S277318632...
The basic point is an exercise in at least one older freshman text (Kittel & Kroemer?), but at least this paper goes beyond that and finds some strings (e.g. "I chimp, therefore I am") that are likely to be typed before the heat death of the universe, if not before the extinction of chimpanzees.
In other news - Life expectancy of the universe is finite
I mean, in several models, including the current most-likely one (last I heard) it effectively is finite. Eventually no new planets or stars can get created, and then some time after that, no atoms can exist.
Will the "universe" exist after that? Kind of a philosophical question, but nothing interesting will exist _in_ the universe.
No-one ever said it would happen within a finite time period, only that it would happen given infinite time.
That’s the whole point.
The paper doesn’t say that either. They only call the original theorem misleading. I understood that their point is the same as yours - that infinity won’t happen in a finite time period.
> The long-established result of the Infinite Monkeys Theorem is correct, but misleading.
> Non-trivial text generation during the lifespan of our universe is almost certainly impossible.
> The Finite Monkeys case shows there will never be sufficient resources to generate Shakespeare.
https://www.sciencedirect.com/science/article/pii/S277318632...
Along an infinite timeline more than one big bang may arise.
Yeah, but now you have another problem: the universe must reinvent monkeys.
Flawed point if it is used to justify the random speciation of evolution.
If they can't type out Shakespeare, how can evolution randomly yet correctly type out billions of base sequences of DNA in just a couple million years?
You're comparing chalk and cheese to an extent. A perfect copy of Shakespeare's work can exist in exactly one way, but DNA isn't required to be copied perfectly; DNA can change without necessarily killing the organism or otherwise rendering it unable to reproduce. Additionally, evolution is not a random process: while mutations are random, the process by which the mutations are selected is not - it depends on whether the mutation is a reproductive advantage for the organism in the context of its ecological niche. Most mutations are going to be neutral or disadvantageous, so they won't be selected. Additionally, you're not mutating the whole DNA sequence at once, just parts of it.
I would like to see someone use the most generous parameters possible and simulate the statistics of the first couple of amino acids evolving into our Homo sapiens sapiens chromosome set randomly over the course of just 4 billion years.
(Selection is irrelevant here; we could even just model it as always-select-for-the-beneficial-mutation, for extra generosity.)
Mind you, even if a beneficial mutation occurs, it is still not guaranteed to be passed on to the next generation. It would have to randomly occur over and over again.
I'm not confident that it's going to be very convincing in favour of the random mutation model.
> Mind you, even if a beneficial mutation occurs, it is still not guaranteed to be passed on to the next generation. It would have to randomly occur over and over again.
Why would it have to recur again and again? If the mutation is beneficial to the reproductive success of an organism, by definition the mutation will be passed on, with no need for it to occur again by chance. The percentage of the population with the mutation would grow, potentially very quickly if it was a strong advantage over the previous generations. You're also characterising evolution as if getting from amoebas to humans required a series of billions of independent coin flips, which isn't an accurate model; rather, each generation builds on that which preceded it. Take the eye as an organ, for example: it predates humanity by a very long time and was not independently evolved by humans - there was no need for it to have been, since those genes already existed.
The predictions of evolution are generally supported by the available evidence at any rate, if you'd like to overturn it you'll have to propose a theory that explains the evidence more compellingly.
There's no evolutionary pressure to type out Shakespeare.
"Correctly" is doing a lot of heavy lifting there, and the selection process in evolution is not random.
The actual problem is upstream of that at the abiogenesis stage.
For evolutionary selection to occur, the machinery for selection must exist. Specifically, information storage (DNA/RNA), replication (polymerases) and actioning (transcription) are all needed, and must continue to exist for long enough to matter.
Without selection pressure and inheritance you're just left with requiring a big enough universe and enough time for randomness not to matter.
The selection itself is not random, but the genetic variants available to select from arise randomly, due to random genetic mutations, according to the theory.
The critical difference is that evolution only needs relatively short sequences to be randomly generated, and there are many valid sequences.
Building a book by generating a single random text string is practically impossible, but if you lock in any given word that’s correct and retry, that’ll quickly get you something. You’ll have most 8-letter-or-shorter words correct after 1 trillion runs, and many 9-letter words. It wouldn’t be done, but someone could probably read and understand the work at that point.
Further, it’s possible for a few even longer words to match at that point. People think it’s unlikely that specific sequences happened randomly, but what they ignore is all the potential sequences that didn’t occur.
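As a toy illustration of that "lock in what's already correct and retry" idea (in the spirit of Dawkins' weasel program; my sketch, locking in matching characters rather than whole words, with a made-up target phrase):

    import random
    import string

    TARGET = "methinks it is like a weasel"   # 28 characters, lowercase letters + spaces
    CHARS = string.ascii_lowercase + " "

    def cumulative_search(target):
        # Keep every character that already matches and re-roll only the rest.
        # This is retention of partial successes, not single-shot random generation.
        current = "".join(random.choice(CHARS) for _ in range(len(target)))
        steps = 0
        while current != target:
            current = "".join(
                c if c == t else random.choice(CHARS)
                for c, t in zip(current, target)
            )
            steps += 1
        return steps

    # Usually finishes within a few hundred re-rolls, versus an expected
    # 27**28 (~10**40) attempts to produce the same 28-character phrase in one shot.
    print(cumulative_search(TARGET))

Nobody is claiming evolution works like this toy, of course; the point is only that retaining partial successes collapses the search from exponential in the length to roughly linear.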
The many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
> You’ll have most 6 letter or shorter words correct after 1 billion tries.
You think that's "quick" for dna which is made up of billions of 6-base-sequences and for a species that can only reproduce sexually once every decade or so at best?
> many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
There are finitely many invalid sequences; DNA isn’t infinitely long. It’s also not a question of valid or invalid - we live with sub-optimal DNA, so yeah, most people aren’t born with some new beneficial mutation. However, not winning the lottery isn’t the same thing as dying, and even smaller wins still benefit us.
As to our long reproductive cycle, there’s a reason we share so much in common with other primates: most of our DNA has been worked out for a long time. We share 98% of our DNA with pigs, and 85% of it is identical in mice, which has practical application in drug development. Of note, the common ancestors were more closely related to us than today's species are, because both branches have diverged since.
Hell, 60% is shared with chickens, and half of it is shared with trees.
> only reproduce sexually once every decade or so at best?
Many sperm and fetuses die from harmful mutations; live babies come late in the process here. Also, because order doesn’t matter, you get multiple chances to roll the same sequence for every birth.
PS: There’s also quite a bit of viral insertion into our DNA; it’s mostly sexual reproduction, but we have some single-cell ‘ancestors’ in our recent history.
> The many valid sequences are relatively nothing compared to the infinitely many invalid ones, right?
I don't know about the relative numbers, but I don't think you do either? Are you begging the question, or can you quantify?
> You think that's "quick" for dna which is made up of billions of 6-base-sequences and for a species that can only reproduce sexually once every decade or so at best?
We didn't start from scratch, we are very very late in the game, and the groundwork for us was laid by millions of other species that can replicate quickly, often very quickly.
Unless you believe the Earth is only six thousand years or so old, in which case we might as well leave the discussion where it is.
Life has had 3-4 billion years to evolve on Earth.
I suggest you read 'The Blind Watchmaker' by Dawkins to answer your other question.
You are just repeating the point that was being argued against.
If the lifetime of the universe isn't enough to randomly produce Shakespeare, 3-4 billion years are cute but useless for randomly producing anything anywhere near as sophisticated as our 46-chromosome set.
I imagine Dawkins still just repeats the conjecture "but it's 3-4 billion years, anything can happen!"
A work of Shakespeare is a single specific target, that's why it takes so long to hit it - one letter out doesn't cut it. DNA meanwhile is a general purpose animal kit that has many billions of 'correct answers'. It's not about sophistication.
it's not about the absolute number of correct answers
it's about the ratio of valid / invalid answers
which is somewhere very close to 0
Well, with an infinite number of monkeys, it could happen in a finite time.
As you say, this paper seems to miss the point.
It's sort of like calculating the probability of a hash collision. The speed at which the monkeys type is also a factor.
How fast do monkeys type?
Just give them a typewriter that records the position of where the key is pressed to a million bits accuracy, convert that to a series of ASCII characters (through a hash function), and the monkeys will type very fast.
>> For the timescale calculations, we assume that a monkey typist presses one key per second every second of the day
Yeah exactly, it's a thought experiment about infinities... Is someone going to publish a study concluding that Hilbert's Hotel couldn't be built on Earth too?
Why are people even spending money on this research? What are they trying to prove/disprove?
> Why are people even spending money on this research?
"This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors."
> Why are people even spending money on this research? What are they trying to prove/disprove?
That it takes much fewer than infinite humans and infinite time to produce the most pedantic paper possible.
It's just barely possible that this is not an entirely serious paper. Scientists like to have fun too, you know?
Not to mention that frivolous research has been known to pay surprising dividends.
Maybe they are after an Ig Nobel prize?
Franklin Open, whatever it may be, could well have slightly higher standards than Arxiv, but it isn't exactly Nature or Science...
(but try telling that to The Grauniad!)
Perhaps they're finitists, or ultrafinitists?
Many years ago someone complimented me saying I am extra fine; your comment makes me think I'll start identifying as an extrafinitist ╰(∀)╯
Also Zeno’s paradox is clearly untrue because Achilles would just kill the tortoise after a few days of running back and forth, starving.
I suspect that many academic papers are a petty move to win an argument that started one day over lunch.
i disagree rabble rabble rabble rabble
But the universe as we understand it is endless/infinite on the time scale, no? [1] has some nice projections of how even basic quantum stuff eventually breaks down, yet nothing really ends, if you don't count matter as we know it.
[1] https://en.wikipedia.org/wiki/Timeline_of_the_far_future
These mathematicians are very selective about which aspects of the universe they want to regard and which aspects they want to ignore. Like, yeah, you can take an imaginary situation and imagine that it's possible, or impossible. It can go any way because it's imaginary. No need to waste time on math; you can just imagine any outcome you like.
In reality, however, I think that if you had a system that feeds a monkey a treat every time it strikes the letter key that corresponds to the next letter of a Shakespeare play displayed on a monitor, you would eventually have some Shakespeare typed out by a monkey.
One could also just have a monkey mash keys for a while, and then after removing all the unnecessary letters you'd be left with a Shakespearian play.
Or, with a group of monkeys and plenty of time one could use natural selection to evolve them into a literate species that could handle the task easily. This is the method currently in use for the typing of Shakespeare. It has already been done, many times over.
Hey man, remember to type out that play before the universe dies
> One could also just have a monkey mash keys for a while, and then after removing all the unnecessary letters you'd be left with a Shakespearian play.
ChatGPT?
Think an actual monkey might do better!