Picking my way through this slowly... I'm familiar with autodiff but some of these ideas are very new to me. This seems really, really exciting though.
This paper is written by three Europeans who clearly understand these mathematical ideas.
Is this type of analysis a part of a particular mathematical heritage? What would it be called?
Is this article relevant? https://medium.com/@lobosi/calculus-for-machine-learning-jac...
Maybe somebody else can do a more accurate or better job summarizing things, but I'll take a shot.
The Jacobian is most commonly introduced when you cover the chain rule and change of variables in the last part of a three-part calculus sequence. Look up the Jacobian for the transformation between x, y, z and ρ, φ, θ, and notice that it is a matrix. Glancing over your Medium article by lobosi, it seems to emphasize this aspect of it.
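For concreteness, here is that change-of-variables Jacobian written out (using the convention where φ is the polar angle and θ the azimuthal angle):

    x = \rho \sin\varphi \cos\theta, \quad
    y = \rho \sin\varphi \sin\theta, \quad
    z = \rho \cos\varphi

    J = \frac{\partial(x,y,z)}{\partial(\rho,\varphi,\theta)} =
    \begin{pmatrix}
      \sin\varphi\cos\theta & \rho\cos\varphi\cos\theta & -\rho\sin\varphi\sin\theta \\
      \sin\varphi\sin\theta & \rho\cos\varphi\sin\theta &  \rho\sin\varphi\cos\theta \\
      \cos\varphi           & -\rho\sin\varphi          & 0
    \end{pmatrix},
    \qquad \det J = \rho^2 \sin\varphi

That determinant ρ² sin φ is exactly the extra factor that shows up in triple integrals when you switch to spherical coordinates.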
But it has another useful feature: "The Jacobian operator Df : x ↦ Df(x) is a linear map which provides the best linear approximation of f around a given point x."
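You can see that approximation property numerically. A minimal sketch (the function f here is my own toy example, not anything from the paper): the residual f(x+h) − (f(x) + Df(x)h) shrinks quadratically as h shrinks.

    import numpy as np

    def f(x):
        # A toy R^2 -> R^2 function, purely for illustration.
        return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])

    def Df(x):
        # Its Jacobian, computed by hand.
        return np.array([
            [2 * x[0], 1.0],
            [np.cos(x[0]) * x[1], np.sin(x[0])],
        ])

    x = np.array([1.0, 2.0])
    for eps in [1e-1, 1e-2, 1e-3]:
        h = eps * np.array([0.3, -0.7])
        # The residual of the linear approximation shrinks like ||h||^2.
        residual = f(x + h) - (f(x) + Df(x) @ h)
        print(eps, np.linalg.norm(residual))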
We like approximations because they let us trade speed against accuracy and memory: there is only so much space in a register or memory cell, and squeezing out more accuracy past a certain point costs more memory, computation, and time.
Then it discusses how many kinds of computations have Jacobians that are sparse, that is, matrices that are mostly zeros, entries we don't have to waste computation on if we are clever enough.
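A classic source of sparsity (my own toy illustration, not an example from the paper) is a function where each output depends on only a few inputs, which makes the Jacobian banded:

    import numpy as np

    def stencil(x):
        # Output i depends only on inputs i, i+1, i+2,
        # so each row of the Jacobian has just three nonzeros.
        return x[1:-1] * (x[2:] - x[:-2])

    n = 8
    x = np.random.default_rng(0).standard_normal(n)

    # Dense finite-difference Jacobian, just to look at the pattern.
    eps = 1e-7
    J = np.empty((n - 2, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (stencil(x + e) - stencil(x)) / eps

    print((np.abs(J) > 1e-4).astype(int))  # prints a banded pattern of ones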
Now the important parts afterwards are the different ways to detect and mark those patterns of sparsity. Then they discuss how applying their suggested coloring methods to the large matrices typical in machine learning brings huge savings; the sketch below shows the basic idea.
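In miniature, the classic column-compression idea goes like this (a sketch of the general technique, not the paper's exact algorithms): columns of the Jacobian that never share a nonzero row can be probed together, and a greedy graph coloring finds such groups.

    import numpy as np

    def greedy_column_coloring(pattern):
        # pattern: boolean (m, n) sparsity pattern of the Jacobian.
        # Two columns conflict if they share a nonzero row.
        m, n = pattern.shape
        colors = -np.ones(n, dtype=int)
        for j in range(n):
            forbidden = set()
            for k in range(j):
                if np.any(pattern[:, j] & pattern[:, k]):
                    forbidden.add(colors[k])
            c = 0
            while c in forbidden:
                c += 1
            colors[j] = c
        return colors

    # Tridiagonal pattern: 3 colors suffice no matter how large n gets,
    # so 3 Jacobian-vector products replace n of them.
    n = 9
    pattern = np.eye(n, dtype=bool)
    pattern |= np.eye(n, k=1, dtype=bool) | np.eye(n, k=-1, dtype=bool)
    colors = greedy_column_coloring(pattern)
    print(colors, "->", colors.max() + 1, "colors")

Each color class then becomes one seed vector (the sum of that class's unit vectors), and a single Jacobian-vector product per color recovers every nonzero entry, because no two columns within a class overlap.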
As far as the mathematical heritage goes, I don't know the family tree, but I'd speculate it comes out of a course with matrix theory and linear algebra along with an algorithms component, so the computer-science variant of such things. Function approximation is numerical methods, though I don't know if they cover Jacobians in an introductory book. I'd check out Newton's method, understand how that works, then look up its Jacobian extension (sketched below). For the coloring part, graph theory would be helpful; you can learn the basics of that without too much background, just look up the Seven Bridges of Königsberg problem and the map-coloring problem, five-color version. A lot of these can be turned toy enough to write your own programs for. It won't be MATLAB, but it will cement your understanding.
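For reference, the Jacobian extension of Newton's method replaces division by f'(x) with solving a linear system against Df(x). A minimal sketch (the example system is my own, chosen only for illustration):

    import numpy as np

    def newton(f, Df, x, tol=1e-12, max_iter=50):
        # Multivariate Newton: solve Df(x) step = -f(x) each iteration,
        # the n-dimensional analogue of x -= f(x) / f'(x).
        for _ in range(max_iter):
            step = np.linalg.solve(Df(x), -f(x))
            x = x + step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Toy system: x^2 + y^2 = 1 and y = x^3 (circle meets cubic).
    f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1, v[1] - v[0] ** 3])
    Df = lambda v: np.array([[2 * v[0], 2 * v[1]], [-3 * v[0] ** 2, 1.0]])
    print(newton(f, Df, np.array([1.0, 1.0])))  # converges near (0.826, 0.564)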
I quickly realized it was approximately 20,000 ft over my head, but I still power through these sorts of things to see if anything "sticks".
So far nothing has, but I'll keep trying...