I really like the Poisson Distribution. A very interesting question I've come across once is:
A given event happens at a rate of every 10 minutes on average. We can see that:
- The expected length of the interval between events is 10 minutes.
- At a random moment in time the expected wait until the next event is 10 minutes.
- At the same moment, the expected time passed since the last event is also 10 minutes.
But then we would expect the interval between two consecutive events to be 10+10 = 20 minutes long. But we know intervals are 10 on average. What happened here?
The key is that by picking a random moment in time, you're more likely to fall into a bigger interval. By sampling a random point in time, the interval you fall into really is 20 minutes long on average, but by sampling a random interval it is 10.
Apparently this is called the Waiting Time Paradox.
> What happened here?
You went astray when you declared the expected wait and the expected time passed.
Draw a number line. Mark it at intervals of 10. Uniformly randomly select a point on that line. The expected wait and the expected time passed (i.e. the forward and reverse directions) are both 5, not 10. Each ranges from 0 to 10.
When you randomize the event occurrences but maintain the same average interval, you change the range maximum and the overall distribution across the range, but not the expected average values.
When you randomize the event occurrences, you create intervals that are shorter and longer than average. A random point is then more likely to be in a longer interval, so the expected length of the interval containing a random point is greater than the expected length of a random interval.
To see this, consider just two intervals of length x and 2 - x, i.e. 1 on average. A random point is in the first interval x/2 of the time and in the second one the other 1 - x/2 of the time. So the expected length of the interval containing a random point is (x/2) * x + (1 - x/2) * (2 - x) = x² - 2x + 2, which is 1 for x = 1 but larger everywhere else, reaching 2 for x = 0 or 2.
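A quick simulation of that two-interval setup bears the formula out (a minimal sketch; the layout on [0, 2] and the trial count are just illustrative choices):

```python
import random

def mean_containing_length(x: float, trials: int = 200_000) -> float:
    """Lay intervals of length x and 2 - x end to end on [0, 2], drop a uniform
    random point, and average the length of the interval it lands in."""
    total = 0.0
    for _ in range(trials):
        point = random.uniform(0.0, 2.0)
        total += x if point < x else 2.0 - x
    return total / trials

for x in (0.5, 1.0, 1.5):
    print(x, round(mean_containing_length(x), 3), x**2 - 2*x + 2)
# The simulated mean tracks x^2 - 2x + 2, which equals 1 only at x = 1.
```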
If it wasn't clear, their statements are all true when the events follow a Poisson process, i.e. have exponentially distributed waiting times.
The way I understand it is that with a Poisson process, at every small moment in time there’s a small chance of the event happening. This leads to, on average, lambda events occurring during every (larger) unit of time.
But this process has no “memory” so no matter how much time has passed since the last event, the number of events expected during the next unit of time is still lambda.
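A small sketch of that memorylessness, using exponential waiting times with an assumed rate of one event per 10 minutes:

```python
import random

rate = 1 / 10  # assumed rate: one event per 10 minutes on average

# Memorylessness: among waits that have already exceeded 20 minutes,
# the *remaining* wait still averages 10 minutes.
waits = [random.expovariate(rate) for _ in range(500_000)]
remaining = [w - 20 for w in waits if w > 20]

print(sum(waits) / len(waits))          # ~10 minutes overall
print(sum(remaining) / len(remaining))  # still ~10 minutes
```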
From the last event to this event = 10, and from this event to the next event = 10, so the time between the first and the third event is 20. Where is the surprise in the Waiting Time Paradox? Surely I must be missing some key ingredient here.
The random moment we picked in time is not necessarily an event. The expected time between the event to your left and the one to your right (they're consecutive) is 20 minutes.
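A quick simulation makes the gap concrete (a minimal sketch; the 10-minute mean rate and the time horizon are just assumptions for illustration):

```python
import bisect
import random

rate = 1 / 10        # one event per 10 minutes on average (assumed)
horizon = 1_000_000  # total simulated minutes

# Build a Poisson process on [0, horizon] from exponential gaps.
events = [0.0]
while events[-1] < horizon:
    events.append(events[-1] + random.expovariate(rate))

gaps = [b - a for a, b in zip(events, events[1:])]
print(sum(gaps) / len(gaps))  # average interval: ~10 minutes

# Length of the interval containing a uniformly random moment in time.
lengths = []
for _ in range(100_000):
    moment = random.random() * horizon
    i = bisect.bisect_right(events, moment)
    lengths.append(events[i] - events[i - 1])
print(sum(lengths) / len(lengths))  # ~20 minutes: the waiting time paradox
```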
Poisson distributions are sort of like the normal distribution for queuing theory for two main reasons:
1. They're often a pretty good approximation for how web requests (or whatever task your queuing system deals with) arrive into your system, as long as your traffic is predominantly driven by many users who each act independently. (If your traffic is mostly coming from a bot scraping your site that sends exactly N requests per second, or holds exactly K connections open at a time, the Poisson distribution won't hold.) Sort of like how the normal distribution shows up any time you sum up enough random variables (central limit theorem), the Poisson arrival process shows up whenever you superimpose enough uncorrelated arrival processes together: https://en.wikipedia.org/wiki/Palm%E2%80%93Khintchine_theore...
2. They make the math tractable -- you can come up with closed-form solutions for e.g. the probability distribution of the number of users in the system, the average waiting time, average number of users queuing, etc: https://en.wikipedia.org/wiki/M/M/c_queue#Stationary_analysi... https://en.wikipedia.org/wiki/Erlang_(unit)#Erlang_B_formula
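For instance, the Erlang B blocking probability from the second link can be computed with the usual numerically stable recurrence (a minimal sketch; the 8 Erlangs of offered load and 10 servers are made-up numbers):

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Probability an arrival finds all servers busy in an M/M/c/c system,
    via the recurrence B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

print(erlang_b(8.0, 10))  # ~0.12: about 12% of arrivals are blocked
```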
There is another extremely important way in which they are like the normal distribution: both are maximum entropy distributions, i.e. each is the "most generic" within their respective families of distributions.
[1] https://en.wikipedia.org/wiki/Poisson_distribution#Maximum_e...
[2] https://en.wikipedia.org/wiki/Normal_distribution#Maximum_en...
So are the Gamma, Binomial, Bernoulli, negative Binomial, exponential and many, many more. Maxent distributions are very common. In fact, every distribution in the exponential family is a maxent distribution.
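For reference, the Poisson pmf already has the exponential-family shape once you rewrite it:

$$P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!} = \frac{1}{k!}\,\exp\bigl(k\log\lambda - \lambda\bigr),$$

i.e. the form $h(k)\exp(\eta\,T(k) - A(\eta))$ with natural parameter $\eta = \log\lambda$, sufficient statistic $T(k) = k$, base measure $h(k) = 1/k!$, and log-partition $A(\eta) = e^{\eta} = \lambda$. Each exponential-family member maximizes entropy relative to its base measure under a constraint on $\mathbb{E}[T(X)]$, here the mean.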
Useful for understanding load on machines. One case I had was -- N machines randomly updating a central database. The database can only handle M queries in one second. What's the chance of exceeding M?
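A minimal sketch of that calculation, assuming the combined update stream is roughly Poisson (the machine count, per-machine rate, and capacity below are made up):

```python
import math

def poisson_exceed(lam: float, m: int) -> float:
    """P(X > m) for X ~ Poisson(lam): the chance of more than m queries in a second."""
    term, cdf = math.exp(-lam), 0.0
    for k in range(m + 1):
        cdf += term
        term *= lam / (k + 1)
    return 1.0 - cdf

# Hypothetical numbers: 500 machines, each averaging 0.1 updates/second,
# hitting a database that can handle at most 70 queries per second.
lam = 500 * 0.1  # combined arrival rate: 50 queries/second
print(poisson_exceed(lam, 70))  # a fraction of a percent in any given second
```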
Also related to the Birthday Problem and hash bucket hits. Though with those you're usually only interested in keeping collisions low, while with some queues (e.g. the database above) you might be interested in when collisions hit a high number.
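For the hash-bucket version, each bucket's load is approximately Poisson with mean n/m, which gives a quick collision estimate (a sketch with made-up key and bucket counts):

```python
import math

# n keys hashed uniformly into m buckets: each bucket's occupancy is roughly
# Poisson with mean n/m, so the expected number of buckets holding a collision
# (2 or more keys) is about m * P(X >= 2) = m * (1 - exp(-lam) * (1 + lam)).
n, m = 10_000, 100_000
lam = n / m
print(m * (1 - math.exp(-lam) * (1 + lam)))  # ~470 buckets end up with 2+ keys
```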
An application of the Poisson distribution (1946)
https://garcialab.berkeley.edu/courses/papers/Clarke1946.pdf
Famously used by Thomas Pynchon in Gravity's Rainbow. The notion of obtaining a distribution of random rocket attacks blew my young mind and prompted a life-long interest in the study of statistics.
This site is pretty helpful for me with this sort of thing. The style is more technical though.
https://www.acsu.buffalo.edu/~adamcunn/probability/probabili...
But this just gives the definition of the distribution. There's no intuition about where it might have come from; it just appears magically out of thin air, along with some properties it has in the limit.
At work we use Arena to model various systems and Poisson is our go to.
Poisson, Pareto/power-law/Zipf, and normal distributions are really important. The top 3 for me. (What am I missing?) And often misused (most often the normal). It’s really good to know which to use when.
It's surprising that so few people bother to use non-parametric probability distributions. With today's computational resources, there is no need for parametric closed-form models (maybe with the exception of the Normal, for historical reasons); each dataset contains its own distribution.
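A sketch of what "the dataset contains its own distribution" can look like in practice, using a plain bootstrap over toy data (the numbers are invented):

```python
import random

data = [2.3, 0.4, 5.1, 1.7, 0.9, 3.3, 2.8, 0.2, 4.6, 1.1]  # toy observations

# Resample the observed data with replacement instead of fitting a parametric family.
boot_means = []
for _ in range(10_000):
    sample = random.choices(data, k=len(data))
    boot_means.append(sum(sample) / len(sample))

boot_means.sort()
# Rough 95% interval for the mean, read straight off the empirical distribution.
print(boot_means[250], boot_means[9750])
```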
It’s easier to do MCMC when the distributions at hand have nice analytic properties, so you can take derivatives etc. You should also have a very good understanding of the standard distributions and how they all relate to each other.
How hard is it to estimate that distribution for modern high dimensional data?
> What am I missing?
Beta
Normal is overused for sometimes sensible reasons though. The CLT is really handy when you have to consider sums
Lightbulbs burn out, but when?
Later
I can understand a message saying that JavaScript needs to be enabled for your ** site.
But permanently redirecting so I can't see this even after I enable JavaScript is just uncool, and might not endear you to folks on a site like HN, where lots of people disable JS initially.
Edit: and anonymizing, disabling and reloading... It's just text with formatted math. Sooo many other solutions to this, jeesh guys.
It's Notion; I don't know why people use this service.
What’s special about this treatment? It’s the 101 part of a 101 probability course.