NEAT feels like "the path that wasn't taken" to me. It can't exploit the GPU speedups that matrix methods can, but it offers more understandable and flexible networks (see the sketch below).
It was the first ML algorithm I ever implemented, back in high school when it came out. There was a "game" called NERO (https://nn.cs.utexas.edu/nero/video.php) that was a joy for a teenager to play with and really start to understand, at a gut level, what was going on in the networks.
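To make the topology point concrete, here's a minimal sketch (my own illustration, not Stanley & Miikkulainen's reference implementation) of a NEAT-style genome: the network is an arbitrary sparse graph of connection genes, so a forward pass is a per-node traversal rather than a fixed-shape matrix multiply. That's why it's hard to batch on a GPU, but also why you can read every weight off by hand.

    import math
    from dataclasses import dataclass

    @dataclass
    class Connection:
        src: int         # source node id
        dst: int         # destination node id
        weight: float
        enabled: bool
        innovation: int  # historical marker NEAT uses for crossover/speciation

    def evaluate(inputs, connections, node_order, input_ids, output_ids):
        # node_order must topologically sort the (acyclic) graph; each node
        # sums whatever connections happen to feed it, so there's no fixed
        # layer shape to hand to a GPU, but every weight is inspectable.
        act = dict(zip(input_ids, inputs))
        for nid in node_order:
            if nid in act:
                continue  # input node, activation already set
            total = sum(c.weight * act[c.src] for c in connections
                        if c.enabled and c.dst == nid)
            act[nid] = math.tanh(total)
        return [act[nid] for nid in output_ids]

    # Tiny genome: inputs 0 and 1, hidden node 2, output 3, plus a
    # skip connection 0 -> 3 of the kind NEAT's mutations produce.
    genome = [
        Connection(0, 2, 0.7, True, innovation=1),
        Connection(1, 2, -0.5, True, innovation=2),
        Connection(2, 3, 1.3, True, innovation=3),
        Connection(0, 3, 0.4, True, innovation=4),
    ]
    print(evaluate([1.0, 0.5], genome, [0, 1, 2, 3], [0, 1], [3]))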
/? parameter-free network: https://scholar.google.com/scholar?q=Parameter-free%20networ...
"Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?" (2024) https://news.ycombinator.com/item?id=41794249 :
> Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
Do these GAs for (hyper)parameter estimation converge given different random seeds?
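One way to probe that empirically (a toy sketch of my own, not anything from the linked thread): run the same GA under several seeds on a multimodal objective and compare where each run settles. Different seeds landing in different basins is exactly the non-convergence the question is asking about.

    import math, random

    def fitness(x):
        # Negated 1-D Rastrigin: global max at x = 0, many local optima.
        return -(x * x - 10 * math.cos(2 * math.pi * x) + 10)

    def run_ga(seed, pop_size=30, generations=100):
        rng = random.Random(seed)
        pop = [rng.uniform(-5.12, 5.12) for _ in range(pop_size)]
        for _ in range(generations):
            # Binary tournament selection plus Gaussian mutation.
            pop = [max(rng.sample(pop, 2), key=fitness) + rng.gauss(0, 0.3)
                   for _ in range(pop_size)]
        return max(pop, key=fitness)

    for seed in range(5):
        best = run_ga(seed)
        print(f"seed={seed}: best x = {best:+.3f}, fitness = {fitness(best):+.3f}")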