Very cool! I am a good Tetris player (in the top 10% of players) and wanted to give brick yeeting against an LLM a spin.
Some feedback:
- Knowing the scoring system is helpful when going 1v1 for high score
- Use a different randomization system; I kept getting starved for pieces like I. True random is fine; throwing a copy of every piece into a bag and then drawing them one by one is better (7-bag); nearly random with some lookbehind to prevent getting a string of ZSZS is solid too (TGM randomizer). There's a rough sketch at the end of this comment.
- Piece rotation feels left-biased, and keeps making me mis-drop, like the T pieces shift to the left if you spin 4 times. Check out https://tetris.wiki/images/thumb/3/3d/SRS-pieces.png/300px-S... or https://tetris.wiki/images/b/b5/Tgm_basic_ars_description.pn... for examples of how other games are doing it.
- Clockwise and counter-clockwise rotation is important for human players; we can only hit so many keys per second
- Re-mappable keys are also appreciated
Nice work, I'm going to keep watching.
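For reference, a 7-bag is only a few lines. A minimal sketch in Python (the piece names are the standard seven tetrominoes; the rest is just illustrative, not code from this project):

    import random

    PIECES = ["I", "O", "T", "S", "Z", "J", "L"]

    def seven_bag():
        # Shuffle one copy of every tetromino, deal the bag out, refill, repeat.
        # Every piece appears exactly once per 7 draws, so you can never be
        # starved of I pieces for long.
        while True:
            bag = PIECES[:]
            random.shuffle(bag)
            yield from bag

    # Example: preview the next ten pieces
    gen = seven_bag()
    print([next(gen) for _ in range(10)])

The TGM-style randomizer is a different trade-off: roll a random piece but re-roll a few times if it matches one of the last few pieces dealt, which stays closer to true random while still preventing S/Z floods.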
I actually grew up playing the Spectrum HoloByte version of Tetris for PC, which only lets you rotate in one direction. As a result, I ended up playing NES Tetris for years as a kid before realizing it lets you rotate clockwise / counterclockwise!
https://en.wikipedia.org/wiki/Tetris_(Spectrum_HoloByte)
The worst thing is that the delayed auto shift is slightly off, and it messes with my finesse. (I used to play competitive Tetris as well, but between getting older (worse reflexes) and vision problems I can't really play anymore. Weirdly, the finesse muscle memory still works.)
I don't think the goal is to make a PvP simulator; it would be too easy to cheese or pull off weird strategies. It's mostly for LLMs to play.
Hello fellow Tetris nerd with a -sort username :)
On the topic of reflexes decaying (I'm getting there, in my late 30s): Have you played Stackflow? It's a number go up roguelite disguised as an arcade brick stacking game, but the gravity is low enough that it is effectively turn based. More about 'deck' building, less about chaining PCs and C-Spins.
Thanks for all the questions! More details on how this works:
- Each model starts with an initial optimization function for evaluating Tetris moves.
- As the game progresses, the model sees the current board state and updates its algorithm—adapting its strategy based on how the game is evolving.
- The model continuously refines its optimizer. It decides when it needs to re-evaluate and when it should implement the next optimization function.
- The model generates updated code, executes it to score all placements, and picks the best move (see the sketch below).
- The reason I reframed this as a coding problem is that Tetris is an optimization game by nature. At first I did try asking LLMs where to place each piece at every turn, but models are just terrible at visual reasoning. What LLMs are great at, though, is coding.
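For a rough idea of what the models start from, here is a minimal sketch of such an evaluation function. The features and weights below are purely illustrative, not taken from any particular model's output:

    def evaluate_board(grid):
        # Score a board state after a hypothetical placement (higher is better).
        # grid is a list of rows, top to bottom; 0 = empty, 1 = filled.
        # Classic hand-rolled features: completed lines, aggregate height,
        # holes (empty cells under a filled cell), and bumpiness between columns.
        width, height = len(grid[0]), len(grid)
        heights, holes = [], 0
        for x in range(width):
            column = [grid[y][x] for y in range(height)]
            if 1 in column:
                top = column.index(1)
                heights.append(height - top)
                holes += sum(1 for cell in column[top:] if cell == 0)
            else:
                heights.append(0)
        complete_lines = sum(1 for row in grid if all(row))
        bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
        # Illustrative weights; the whole point is that the model keeps
        # rewriting this function (and its features) as the game evolves.
        return 3.0 * complete_lines - 0.5 * sum(heights) - 1.0 * holes - 0.3 * bumpiness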
Interesting but frustratingly vague on details. How exactly are the models playing? Is it using some kind of Tetris equivalent of PGN that represents an ongoing game, passing an ASCII representation, encoding the board as a JSON structure, or just directly sending screenshots of the game to the various LLMs?
It has to be turn-based. Even with Flash's speed, the inference latency would kill you in a real-time loop. They're likely pausing the game state after every tick to wait for the API response before resuming.
Answered this in a comment above! It's not turn-based or visual-layout-based, since LLMs are not trained that way. The representation is a JSON structure, but the LLMs plug in algorithms and keep optimizing them as the game state evolves.
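Purely as an illustration of the kind of JSON structure meant here (the field names below are hypothetical, not the project's actual schema):

    import json

    # Hypothetical shape of the state handed to the model on each update.
    state = {
        "board": [[0] * 10 for _ in range(20)],  # 20x10 grid, 0 = empty, 1 = filled
        "current_piece": "T",
        "queue": ["I", "S", "Z", "L", "J"],
        "lines_cleared": 42,
        "score": 18700,
    }
    print(json.dumps(state, indent=2))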
Gemini 3 Flash sits at a very nice point along the price-performance curve. It's a good workhorse model, and you can supplement it with Opus 4.5 / Gemini 3 Pro for more complex tasks.
Guys, I don't know how to tell you but... Tetris can be solved without an LLM...
It's actually 80% against Opus, 66% average against the 5 models it's tested with.
... and what does this prove? What would this tetrisbench help you decide to use one LLM over another for, besides playing Tetris?
I imagine this is because Tetris is visual and the Gemini models are strong visually.
I figure OP would try giving the models pure-text forms of the game?
.....
l....
l....
l.ttt
l..t.
watch link?
I'd like to see a nethackbench.
It would be more interesting to make it build a chess engine and compare it against Stockfish. The chess engine should be a standalone no-dependencies C/C++ program that fits in NNN lines of code.
My back-of-the-envelope guess would be that 99% of LLMs given the task to build a chess engine would probably just end up implementing a flavor of negamax and calling it a day.
https://en.wikipedia.org/wiki/Negamax
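For reference, the "flavor of negamax" in question fits in about a dozen lines. A minimal depth-limited sketch, with the game-specific hooks (evaluate, legal_moves, apply_move) left as parameters the caller supplies:

    def negamax(position, depth, color, evaluate, legal_moves, apply_move):
        # Plain negamax: the value of a position for the side to move is the
        # negation of the best value the opponent can reach from any reply.
        # color is +1 or -1 for the side to move; evaluate scores positions
        # from the first player's point of view.
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return color * evaluate(position)
        best = float("-inf")
        for move in moves:
            child = apply_move(position, move)  # returns a new position
            best = max(best, -negamax(child, depth - 1, -color,
                                      evaluate, legal_moves, apply_move))
        return best

A real engine then adds alpha-beta pruning, move ordering, a transposition table, and so on, which is where the actual work (and the gap to Stockfish) lives.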
Comparing against Stockfish isn't fair. That's comparing against enormous amounts of compute spent experimenting with strategies, training neural nets, etc.
It will lose so badly there will be no point in the comparison.
Besides, you could compare models (and harnesses) directly against each other.
There are some concepts clashing here.
I mean, if you let the LLM build a Tetris bot, it would be 1000x better than what the LLMs are doing here. So yes, it is fun to win against an AI, but against that much processing power you really shouldn't be able to win. It is only possible because LLMs are not built for such tasks.
Fun fact: Humans were not built for playing Tetris either!
Task: play Tetris
Task: write and optimize a Tetris bot
Task: write and safely online-optimize a Tetris bot, with consideration for the cost to converge
openai/baselines (7 years ago) was the leading edge of RL, and then came AlphaZero and self-attention Transformer networks.
LLMs are trained with RL, but they aren't general-purpose game-theoretic RL agents?