Presenter of that talk here, very cool to see it being shared.
Running 1BRC was immensely fun, I learned a ton from it. Had you told me beforehand how far the community would be able to push this, I wouldn't have believed you.
One main take-away for me was that you could improve performance by one order of magnitude over the baseline basically just by doing a good job and avoiding basic mistakes. The resulting code is still very readable and maintainable. In most scenarios, this is where you should stop.
If you want to improve by another order of magnitude (like leaders in the challenge did), code becomes completely non-idiomatic, super-dense, and hard to maintain. You should go there only where it really, really matters, like when building a database kernel for instance. Or well, when trying to win a coding challenge ;)
Some more resources for those interested:
* Blog post with the results: https://www.morling.dev/blog/1brc-results-are-in/
* Show & Tell, featuring implementations in languages other than Java: https://github.com/gunnarmorling/1brc/discussions/categories...
* List of many more blog posts discussing 1BRC in different languages: https://github.com/gunnarmorling/1brc?tab=readme-ov-file#1br...
* 3h deep-dive into the implementation techniques by Thomas Würthinger and Roy van Rijn, two of the top participants of the challenge: https://www.youtube.com/watch?v=_w4-BqeeC0k
> One main take-away for me was that you could improve performance by one order of magnitude over the baseline basically just by doing a good job and avoiding basic mistakes. The resulting code is still very readable and maintainable. In most scenarios, this is where you should stop.
I’ve done a lot of performance work but never heard this expressed so clearly before. Thanks - I’m stealing that.
I’ve found exactly the same thing in my own work optimising text CRDTs. Just writing CRDT code in a straightforward, correct way, without unnecessary allocations (and ideally using some good data structures) will get you very, very far. But there’s another order of magnitude available to anyone interested in using exotic, task-specific data structures.
I suspect the same is true in just about every field of computer science. Write good code in Go / C# / Java / JavaScript / etc. and 99% of the time, performance will be just fine. Well, so long as you don’t do anything silly like pull in immutablejs. But there’s usually 10-100x more performance available if you really try.
If you want some examples, I highly recommend Algorithms for Modern Hardware, available for free online:
https://en.algorithmica.org/hpc/
Amazing! I just asked a few secs ago for more posts discussing 1BRC in different languages, and here we go. Thank you!
As part of learning Zig, I did an implementation in it. It got under 1 second within the contest rules on a 6-year-old desktop machine. I need to write up a blog post when I get the chance.
Please share! It's such a great challenge for mapping each language's internals. Here's an idea for how to write it up: https://r2p.dev/b/2024-03-18-1brc-go/
Besides the standard tricks to speed up the running time, like threads and mmap, the biggest speedup is to use a custom hash table tuned to the data. I implemented 5 different hash tables, including the Swiss table and more recently researched ones. None of them beats a simple linear-probing approach, because there are so few keys, only a couple hundred of them. Tuning a custom hash function to minimize key collisions helps more than anything.
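The shape of that table is roughly as follows (a minimal illustrative sketch in Java, the challenge's language, rather than my actual Zig code; names and sizes are made up). With only a few hundred distinct stations, a flat power-of-two array with wrap-around probing stays entirely in cache:

    // Minimal linear-probing sketch (illustrative Java, not my Zig code).
    final class StationTable {
        static final int SIZE = 1 << 14;          // power of two, far larger than ~500 keys
        final byte[][] keys = new byte[SIZE][];   // station name bytes; null = empty slot
        final long[] sums = new long[SIZE];       // per-station sum of temperatures (tenths)
        final int[] counts = new int[SIZE];

        void add(byte[] name, int hash, int tenths) {
            int i = hash & (SIZE - 1);
            // Linear probe: step to the next slot until we find our key or an empty one.
            while (keys[i] != null && !java.util.Arrays.equals(keys[i], name)) {
                i = (i + 1) & (SIZE - 1);
            }
            if (keys[i] == null) keys[i] = name;  // first occurrence of this station
            sums[i] += tenths;
            counts[i]++;
        }
    }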
There are also lots of SIMD operations, which Zig supports well through its vector programming.
The only novel thing I did was to line up the temperature data from the back instead of from the front for the SIMD operations. That is, for temperatures 12.3, 4.5, 67.8, 0.1, I line them up by the last digit after the decimal point before packing them into a SIMD register. That way I know their exact positions in the register.
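In scalar form, the back-alignment idea looks roughly like this (again an illustrative Java sketch, not my Zig vector code). Since every value ends in digit, dot, digit, each digit's position is fixed relative to the end of the token:

    // Parse a temperature token by walking backwards from its end, where the
    // digit positions are fixed. Returns tenths of a degree: "-12.3" -> -123.
    final class TempParser {
        static int parseTenths(byte[] buf, int start, int end) {
            int value = (buf[end - 1] - '0')        // tenths digit
                      + (buf[end - 3] - '0') * 10;  // units digit (buf[end - 2] is '.')
            int i = end - 4;
            if (i >= start && buf[i] != '-') {      // optional tens digit
                value += (buf[i] - '0') * 100;
                i--;
            }
            return (i >= start && buf[i] == '-') ? -value : value;
        }
    }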
I had similar results exploring hash tables for a nearly identical task here[0].
It seems like Swiss tables are tuned for workloads dominated by lookups of keys that are not present. Otherwise the two-level lookup is an extra cost with little benefit.
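For anyone unfamiliar, the two levels are roughly as follows (a simplified, hypothetical Java sketch of the control-byte idea, leaving out the SIMD group probing real Swiss tables do):

    // Simplified Swiss-table sketch: ctrl[i] holds EMPTY or the low 7 bits of
    // the key's hash, so most absent keys are rejected by a 1-byte compare
    // before the full key in keys[i] is ever touched.
    final class SwissSketch {
        static final byte EMPTY = (byte) 0x80;    // tags are 0..127, never EMPTY
        final byte[] ctrl;
        final long[] keys;

        SwissSketch(int pow2Size) {
            ctrl = new byte[pow2Size];
            keys = new long[pow2Size];
            java.util.Arrays.fill(ctrl, EMPTY);
        }

        int find(long key) {
            long h = Long.hashCode(key) & 0xFFFFFFFFL;
            byte tag = (byte) (h & 0x7F);         // level 1: cheap 1-byte tag
            int mask = ctrl.length - 1;
            for (int i = (int) (h >>> 7) & mask; ; i = (i + 1) & mask) {
                if (ctrl[i] == tag && keys[i] == key) return i;  // level 2: verify key
                if (ctrl[i] == EMPTY) return -1;                 // empty slot: key absent
            }
        }
    }

When nearly every lookup is a hit, the tag check rejects almost nothing and is just extra work in front of the key comparison.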
IIRC Joad Nacer says he is using a hash table for this (again, essentially identical) task on highload.fun[1]. This was sort of surprising to the other competitors, because the top couple of solutions below that one use some variety of bloom filter instead.
0: https://easyperf.net/blog/2022/05/28/Performance-analysis-an...
1: https://highload.fun/tasks/2/leaderboard
When it comes to custom hash tables, I always remember this video [0] by Strager about implementing the perfect hash table and hash function. Fascinating insights and perf hints.
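The core idea, for those who haven't watched it yet: when the key set is fixed and known ahead of time, you can search offline for a hash seed under which every key lands in its own slot, so lookups never probe. A hypothetical minimal sketch (the video goes much further):

    // Brute-force search for a "perfect" seed: every known key gets a distinct
    // slot, so the runtime lookup is a single hash plus one comparison.
    import java.util.List;

    final class PerfectSeed {
        static int findSeed(List<String> keys, int tableSize) {
            for (int seed = 1; seed < 1_000_000; seed++) {
                boolean[] used = new boolean[tableSize];
                boolean collided = false;
                for (String k : keys) {
                    int slot = Math.floorMod(k.hashCode() * seed, tableSize);
                    if (used[slot]) { collided = true; break; }
                    used[slot] = true;
                }
                if (!collided) return seed;   // perfect: one key per slot
            }
            return -1;                         // none found; grow tableSize and retry
        }
    }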
[0]: https://youtu.be/DMQ_HcNSOAI
Gosh, and thanks for sharing. I hope this never becomes mainstream in coding interviews. Imagine: please list the top 10 hash table implementations and their time and space complexity.
The main takeaway is that 1 billion rows is just not that much nowadays.
A corollary is 'postgres is really all you need' unless you have irrefutable proof it isn't.
Modern hardware has really impressive speed. A single core of my not-so-new M2 laptop can decompress data at more than 4 GB/s using lz4. Sometimes I wonder where all that performance goes, eg when waiting for Slack to open ;)
> eg when waiting for Slack to open ;)
Run a Wireshark capture while opening Slack; those ~30 ms round trips add up.
Most people seriously underestimate how much power there is in a modern Intel/ARM server.
Many businesses think "we have a lot of products". Nope: even when you have more than a million products, you can load all of them into RAM. Even at 4 KB of memory per product, a million products is only about 4 GB.
So a million products is a small business from a "load into RAM" perspective, easily handled on a single server.
It's not that much, but it's also not nothing. The input file of the challenge was very narrow, just a relatively short text column and a numeric value. It still is >13 GB uncompressed; an actual table with a few more columns with 1B rows will easily be beyond 100 GB. And chances are it's not the only table in your database, so it won't fit into RAM of most common database machines. I.e. still a non-trivial dataset in most contexts.
> an actual table with a few more columns with 1B rows will easily be beyond 100 GB
Especially if you need to have supplementary indexes to support common queries on that data.
There should be a book about Java perf / hacks derived from this challenge. I'd buy it for sure.
I think most of the optimizations were pretty low-level and had little to do with Java itself. You could implement the same techniques in any other (compiled) language and get similar performance results: things like cache locality, ILP, work stealing, multithreading, SWAR, etc. In fact, the top solutions mostly used Unsafe to work with off-heap memory, avoiding GC and the copying of data.
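To make the SWAR point concrete: a classic trick in the fast solutions is locating the ';' delimiter in 8 bytes at once using plain long arithmetic. A minimal sketch, assuming the 8 bytes were loaded little-endian:

    // SWAR sketch: find the first ';' among 8 bytes packed into a long,
    // using the classic "detect zero byte" bit trick.
    final class Swar {
        static int semicolonIndex(long word) {
            long x = word ^ 0x3B3B3B3B3B3B3B3BL;   // bytes equal to ';' (0x3B) become 0
            long zero = (x - 0x0101010101010101L) & ~x & 0x8080808080808080L;
            if (zero == 0) return -1;               // no ';' in these 8 bytes
            return Long.numberOfTrailingZeros(zero) >>> 3;  // index of first match
        }
    }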
@ThePrimeTimeagen trigger ... Brazil mentioned ...
New Go Billion Row Challenge w/ Great Optimizations | Prime Reacts
- https://www.youtube.com/watch?v=SZ1PDS7iRU8
- https://r2p.dev/b/2024-03-18-1brc-go/
Spoiler:
SwissMap: A smaller, faster Golang Hash Table
- https://www.dolthub.com/blog/2023-03-28-swiss-map/
- https://github.com/dolthub/swiss
Want to see Paul Allen's implementation in Rust, Zig, Mojo ... and even Bun.sh, Deno.com
Twice as slow as the top submissions in C# and C++, which is still pretty good for Go, but a likely indicator that memory has become the major bottleneck.
https://hotforknowledge.com/2024/01/13/1brc-in-dotnet-among-...
Thanks! 2.9 seconds (4.5 GB/s using .NET 8 on my Xeon Mac).
... previously https://github.com/shraddhaag/1brc
Gunnar is brilliant.
If there must be a cookie popup at all, this site is a great example of how to do it.
(Edit: although it kinda then spoils it with the newsletter popup)