A small note, but GPS is only well-approximated by a circular uncertainty in specific conditions, usually open sky and long-time fixes. The full uncertainty model is much more complicated, hence the profusion of ways to measure error. This becomes important in many of the same situations that would lead you to stop treating the fix as a point location in the first place. To give a concrete example, autonomous vehicles will encounter situations where localization uncertainty is dominated by non-circular multipath effects.
If you go down this road far enough you eventually end up reinventing particle filters and similar.
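To make that concrete, here is a toy particle filter in Python: the belief about position is a weighted cloud of samples, so arbitrarily shaped (e.g. multipath-distorted) uncertainty falls out for free. All names and numbers are invented for illustration; this is a sketch, not production localization code.

```python
import random
import math

# Belief about position = a cloud of weighted samples ("particles"),
# so the uncertainty can be any shape, not just a circle.
N = 1000
particles = [(random.gauss(0, 5), random.gauss(0, 5)) for _ in range(N)]
weights = [1.0 / N] * N

def predict(particles, dx, dy, motion_noise=1.0):
    """Move every particle by the odometry estimate plus noise."""
    return [(x + dx + random.gauss(0, motion_noise),
             y + dy + random.gauss(0, motion_noise))
            for (x, y) in particles]

def update(particles, weights, gps_fix, sigma=8.0):
    """Reweight particles by how well they explain the (noisy) GPS fix."""
    gx, gy = gps_fix
    new_w = []
    for (x, y), w in zip(particles, weights):
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        new_w.append(w * math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(new_w)
    return [w / total for w in new_w]

def resample(particles, weights):
    """Draw a fresh cloud concentrated on the likely particles."""
    return random.choices(particles, weights=weights, k=len(particles))

# One step: the vehicle thinks it moved (10, 0); GPS says it is near (11, 1).
particles = predict(particles, 10, 0)
weights = update(particles, weights, (11, 1))
particles = resample(particles, weights)
weights = [1.0 / N] * N  # weights reset to uniform after resampling
est_x = sum(x for x, _ in particles) / N
est_y = sum(y for _, y in particles) / N
print(f"estimated position: ({est_x:.1f}, {est_y:.1f})")
```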
Vehicle GPS is usually augmented by a lot of additional sensors and assumptions, notably the speedometer, compass, and knowledge that you'll be on one of the roads marked on its map. Not to mention a fast fix, because you can assume you haven't changed position since you were last powered on.
This seems closely related to this classic Functional Pearl: https://web.engr.oregonstate.edu/~erwig/papers/PFP_JFP06.pdf
It’s so cool!
I always start my introductory course on Haskell with a demo of the Monty Hall problem with the probability monad and using rationals to get the exact probability of winning using the two strategies as a fraction.
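The parent's demo is in Haskell, but the same probability monad transliterates to a few lines of Python (this is a hypothetical sketch, not the actual course material): a distribution is a list of (outcome, Fraction) pairs, and bind multiplies probabilities along every path.

```python
from fractions import Fraction

# A distribution is a list of (outcome, probability) pairs with exact Fractions.
def uniform(xs):
    p = Fraction(1, len(xs))
    return [(x, p) for x in xs]

def bind(dist, f):
    """Monadic bind: run f on each outcome and flatten, multiplying probabilities."""
    return [(y, p * q) for (x, p) in dist for (y, q) in f(x)]

def prob(dist, pred):
    return sum(p for (x, p) in dist if pred(x))

doors = [1, 2, 3]

def monty(strategy):
    # Car placed uniformly; player always picks door 1 (by symmetry).
    def after_car(car):
        pick = 1
        # Host opens a door that is neither the pick nor the car.
        openable = [d for d in doors if d != pick and d != car]
        def after_host(opened):
            if strategy == "stay":
                final = pick
            else:  # switch to the remaining closed door
                final = next(d for d in doors if d != pick and d != opened)
            return [(final == car, Fraction(1))]
        return bind(uniform(openable), after_host)
    return bind(uniform(doors), after_car)

print(prob(monty("stay"),   lambda won: won))  # 1/3
print(prob(monty("switch"), lambda won: won))  # 2/3
```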
If you are in an even more "approximate" mindset (as opposed to propagating by simulation to get real-world, re-sampled, skewed distributions, as often happens in experimental physics labs, or at least their undergraduate courses), there is an error-propagation simplification you can do for "small" errors (https://en.wikipedia.org/wiki/Propagation_of_uncertainty). Then translating "root" errors into "downstream" errors is just simple chain-rule calculus. (There is a Nim library for that at https://github.com/SciNim/Measuremancer that I use at least every week or two, whenever I'm timing anything.)
It usually takes some "finesse" to get your data / measurements into territory where the errors are even small in the first place. So I think it is probably better to do things like this Uncertain<T> for the kinds of long-/fat-/heavy-tailed and oddly shaped distributions that occur in real-world data { IF the expense doesn't get in your way somewhere else, that is, as per the Senior Engineer in the article }.
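For a quotient f = a/b, the first-order rule referred to above gives sigma_f^2 = (df/da)^2 sigma_a^2 + (df/db)^2 sigma_b^2 for independent errors. A throwaway Python sketch comparing that against brute-force simulation (not Measuremancer's API, which is Nim; the numbers are made up):

```python
import random
import statistics

# Timing-style measurement: rate = ops / seconds, with small independent errors.
ops, s_ops = 1.0e6, 2.0e4        # mean and standard deviation
sec, s_sec = 2.0,  0.05

rate = ops / sec

# First-order (chain-rule) propagation for f = a/b:
# df/da = 1/b, df/db = -a/b^2.
s_rate = ((s_ops / sec) ** 2 + (ops * s_sec / sec ** 2) ** 2) ** 0.5
print(f"linearized: {rate:.0f} +/- {s_rate:.0f} ops/s")

# Brute-force check by simulation; agrees while the errors stay "small".
samples = [random.gauss(ops, s_ops) / random.gauss(sec, s_sec)
           for _ in range(100_000)]
print(f"simulated:  {statistics.mean(samples):.0f} +/- {statistics.stdev(samples):.0f} ops/s")
```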
Could this be implemented in Rust or Clojure?
Does Anglican kind of do this?
For mechanical engineering drawings, to communicate with machinists and the like, we use tolerances, e.g. 10 cm +8 mm/-3 mm, which state the acceptable range in both directions, bigger and smaller.
I'd expect something like "are we there yet" referencing GPS to understand the direction of the error, and which directions of uncertainty are better or worse.
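An asymmetric tolerance like that +8/-3 is easy to sketch as a type. The names here are invented for illustration, nothing from the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Toleranced:
    """An asymmetric tolerance like 100 mm +8/-3: the direction of error matters."""
    nominal: float  # mm
    plus: float     # allowed excess above nominal
    minus: float    # allowed shortfall below nominal

    def accepts(self, measured: float) -> bool:
        return self.nominal - self.minus <= measured <= self.nominal + self.plus

shaft = Toleranced(nominal=100.0, plus=8.0, minus=3.0)  # 10 cm +8 mm/-3 mm
print(shaft.accepts(106.0))  # True: within the generous upper band
print(shaft.accepts(96.0))   # False: too small, the lower band is tighter
```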
Arguably Uncertain should be the default, and you should have to annotate a type as certain T when you are really certain. ;)
Only for physical measurements. For things like money, you should be pretty certain, often down to exact fractional cents.
It appears that a similar approach is implemented in some modern Fortran libraries.
A complement to Optional.
Does this handle covariance between different variables? For example, the location of the object you're measuring your distance to presumably also has some error in its position, which may be correlated with your position (if, for example, it comes from another GPS operating at a similar time).
Certainly a univariate model in the type system could be useful, but it would be extra powerful (and more correct) if it could handle covariance.
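A small sketch of why this matters: if both fixes come from the same GPS conditions, their errors are positively correlated, and the error of the *difference* (your distance to the object) is smaller than independent errors would suggest. Purely illustrative numbers, using numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0  # per-fix position error in metres (1-D for simplicity)

for rho in (0.0, 0.9):
    # Joint distribution of (my position error, object's position error).
    cov = sigma**2 * np.array([[1.0, rho],
                               [rho, 1.0]])
    errs = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
    dist_err = errs[:, 0] - errs[:, 1]  # error in the measured separation
    print(f"rho={rho}: distance error sigma = {dist_err.std():.2f} m")

# Var(e1 - e2) = 2 sigma^2 (1 - rho):
# rho=0.0 gives ~sigma*sqrt(2) = 7.07 m; rho=0.9 gives ~2.24 m.
# A univariate model that ignores the correlation overstates the uncertainty.
```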
If you need to track covariance you might want to play with gvar https://gvar.readthedocs.io/en/latest/ in python.
To properly model quantum mechanics, you’d have to associate a complex-valued wave function with any set of entangled variables you might have.
Monads are really undefeated. This particular application feels to me akin to wavefunction evolution? Density matrices as probability monads over Hilbert space, with unitary evolution as bind, measurement/collapse as pure/return. I guess everything just seems to rhyme under a category theory lens.
Relevant (2006): https://web.engr.oregonstate.edu/~erwig/pfp/
Is this essentially a programmatic version of fuzzy logic?
https://en.wikipedia.org/wiki/Fuzzy_logic
https://en.wikipedia.org/wiki/Probabilistic_programming more like. It is already a thing; see, for example, https://pyro.ai/
Always enjoy mattt’s work. Looks like a great library.
> And why does it need to be part of the type system?
As presented in the article, it is indeed just a library.
It was implemented as a generic type in this design because the way that uncertainty "pollutes" underlying values maps well onto monads, which are expressed here through generics.
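A minimal sampling-based sketch of that shape in Python (invented names; not the article's API or the MSR library): an uncertain value is just a sampler, and bind threads sampling through every dependent computation, which is exactly how the "pollution" spreads.

```python
import random

class Uncertain:
    """A value known only as a distribution, represented by a sampler function."""
    def __init__(self, sample):
        self.sample = sample  # () -> concrete value

    @staticmethod
    def pure(v):                       # return/unit: a certain value
        return Uncertain(lambda: v)

    def bind(self, f):                 # monadic bind: uncertainty propagates
        return Uncertain(lambda: f(self.sample()).sample())

    def map(self, g):
        return self.bind(lambda v: Uncertain.pure(g(v)))

    def pr(self, pred, n=10_000):      # evidence, not a boolean: estimates P(pred)
        return sum(pred(self.sample()) for _ in range(n)) / n

# GPS fix with ~4 m of noise; "speed" inherits that uncertainty through bind.
position = Uncertain(lambda: random.gauss(100.0, 4.0))
speed = position.bind(lambda p: Uncertain(lambda: p / 10.0 + random.gauss(0, 0.1)))
print(speed.pr(lambda s: s > 10.0))  # P(speed above the limit), not "is it above?"
```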
> What if I want Bayesian?
Bayes is mentioned on page 46.
> And why does it need to be part of the type system? It could be just a library.
It is a library that defines a type.
It is not a new type system, or an extension to any particularly complicated type system.
> Am I missing something?
Did you read it?
https://www.microsoft.com/en-us/research/wp-content/uploads/...
https://github.com/klipto/Uncertainty/
> Bayes is mentioned on page 46.
Bayes isn't mentioned in the linked article. But thanks for the links.
I don't think inference is part of this at all, frequentist or otherwise.
It's not part of the type system; it's just the Giry monad as a library.