Some cool optimisations here: MAP-Elites, island models to prevent premature convergence, and fast rejection of bad candidates.
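For anyone unfamiliar with MAP-Elites: the core idea is to keep the best solution found *per behaviour-descriptor cell*, so the archive preserves diversity instead of collapsing onto one optimum. Here's a minimal toy sketch on a 1-D problem (all names and the toy fitness/descriptor are illustrative, not from the article):

```python
import random

# Toy MAP-Elites sketch: archive maps a descriptor cell -> its best (elite) solution.
def map_elites(fitness, descriptor, n_cells=10, iters=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell index -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # usually mutate an existing elite...
            _, parent = rng.choice(list(archive.values()))
            x = parent + rng.gauss(0, 0.1)
        else:
            # ...occasionally sample fresh to keep exploring
            x = rng.uniform(-1, 1)
        # map the behaviour descriptor (in [-1, 1]) to a discrete cell
        cell = min(int((descriptor(x) + 1) / 2 * n_cells), n_cells - 1)
        f = fitness(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)  # replace this cell's elite if better
    return archive

# Toy problem: maximise -(x - 0.5)^2, descriptor = clamped position.
archive = map_elites(fitness=lambda x: -(x - 0.5) ** 2,
                     descriptor=lambda x: max(-1.0, min(1.0, x)))
```

The payoff is that you end up with a whole map of good-but-different solutions, which is exactly what you want when feeding diverse parents back into an LLM-driven search.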
What's particularly interesting is the meta-level insight: the system discovered scipy.optimize's SLSQP solver for circle packing, a completely different algorithmic paradigm than the one it started with. It's genuinely discovering new approaches, not just parameter-tuning.
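For context, the SLSQP reformulation is neat because circle packing becomes a single smooth constrained optimisation: maximise the common radius r subject to non-overlap and wall constraints. A sketch of that formulation (my own illustrative code, not the system's; `pack_circles` and the diagonal initialisation are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def pack_circles(n):
    """Pack n equal circles in the unit square, maximising the common radius r."""
    # variables z = [x1, y1, ..., xn, yn, r]; start centres on the diagonal
    t = (np.arange(n) + 0.5) / n
    z0 = np.concatenate([np.column_stack([t, t]).ravel(), [0.1]])
    cons = []
    for i in range(n):
        for j in range(i + 1, n):
            # circles i and j must not overlap: dist(ci, cj) >= 2r
            cons.append({'type': 'ineq',
                         'fun': lambda z, i=i, j=j:
                             np.hypot(z[2*i] - z[2*j], z[2*i+1] - z[2*j+1]) - 2 * z[-1]})
    for k in range(2 * n):
        # each coordinate must stay at least r away from both walls
        cons.append({'type': 'ineq', 'fun': lambda z, k=k: z[k] - z[-1]})
        cons.append({'type': 'ineq', 'fun': lambda z, k=k: 1 - z[k] - z[-1]})
    # maximise r == minimise -r
    return minimize(lambda z: -z[-1], z0, method='SLSQP', constraints=cons)

res = pack_circles(2)  # known optimum for n=2 is r = 1/(2 + sqrt(2)) ≈ 0.2929
```

Compared with a hand-rolled heuristic, this hands the hard part to a general-purpose SQP solver, which is presumably why it counts as a paradigm shift rather than tuning.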
The article doesn't mention it, but I'm guessing this is based on, or inspired by, AlphaEvolve?
Though I'm not sure the public can access AlphaEvolve yet.
(https://arxiv.org/abs/2506.13131)
If AlphaEvolve is: "a quality-diversity search framework for algorithm discovery" then maybe.
At the moment I'm mildly skeptical and uncertain of whether to twist or stick.
Very interesting that the LLM weights are co-evolved and reasoning skills improve!
Sakana.ai improved on this by homing in on sample efficiency, IIRC, with ShinkaEvolve (which is open source and not an AI-slop project).