This is why the old-fashioned university course on assembly language is still useful. Writing assembly language (preferably for a less-complex architecture, so the student doesn't get bogged down on minutiae) gives one a gut feeling for how machines work. Running the program on a simulator that optionally pays attention to pipeline and cache misses can help a person understand these issues.
It doesn't matter what architecture one studies, or even a hypothetical one. The last significant application I wrote in assembler was for System/370, some 40 years ago. Yet CPU ISAs of today are not really that different, conceptually.
> Yet CPU ISAs of today are not really that different, conceptually.
CPU: true.
GPU: no. It's not even the instructions that are different; I would suggest studying up on GPU loads/stores.
GPUs have fundamentally altered how loads/stores work. Yes, it's a SIMD load (aka a gather operation), which has been around since the 80s. But the routing of that data includes highly optimized broadcast patterns and/or butterfly routing or crossbars (which allow an arbitrary shuffle within log2(n) steps).
Load(same memory location) across GPU threads (or SIMD lanes) compiles to a single broadcast.
Load(consecutive memory locations) across consecutive SIMD lanes is also efficient.
Load(arbitrary) is doable but slower; the crossbar will be taxed.
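A minimal CUDA sketch of those three patterns (my own illustration; the kernel and array names are hypothetical, and the exact broadcast/coalescing behaviour depends on the specific GPU):

    // Three load patterns across the threads of a block/warp.
    __global__ void load_patterns(const float *data, const int *indices,
                                  float *out, int n)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n) return;

        // 1) Broadcast: every thread reads the same address;
        //    typically serviced as a single transaction broadcast to all lanes.
        float a = data[0];

        // 2) Coalesced: consecutive threads read consecutive addresses;
        //    the hardware merges these into a few wide memory transactions.
        float b = data[tid];

        // 3) Arbitrary gather: each thread reads an unpredictable address;
        //    still correct, but may need many transactions and taxes the
        //    crossbar/routing network described above.
        float c = data[indices[tid]];

        out[tid] = a + b + c;
    }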
Do you have any good resources that go into detail on GPU ISAs or GPU architecture? There's certainly a lot available for CPUs, but the resources I’ve found for GPUs mostly focus on how they differ from CPUs and how their ISAs are tailored to the GPU's specific goals.
I assume most people learn microarchitecture for performance reasons.
At which point, the question you are really asking is what aspects of assembly are important for performance.
Answer: there are multiple GPU matrix multiplication examples covering memory channels (especially channel conflicts), load/store alignment, memory movement, and more. That should cover the issue I talked about earlier.
Optimization guides help. I know it's 10+ years old, but I think AMD's OpenCL optimization guide was easy to read and follow, and it's still modern enough to cover most of today's architectures.
Beyond that, you'll have to watch conference talks about DirectX 12's newer instructions (wave intrinsics, ballot/voting, etc.) and their performance implications.
It's a mixed bag: everyone knows one or two ways of optimizing, but learning all of them requires lots of study.
Unfortunately this is a topic that isn't open enough, and architectures change rather quickly so you're always chasing the rabbit. That being said:
The RDNA architecture slides (a few gens old) have some breadcrumbs: https://gpuopen.com/download/RDNA_Architecture_public.pdf
AMD also publishes its ISAs, but I don't think you'll be able to extract much from a reference-style document: https://gpuopen.com/amd-gpu-architecture-programming-documen...
Books on CUDA/HIP also go into some detail of the underlying architecture. Some slides from NV:
https://gfxcourses.stanford.edu/cs149/fall21content/media/gp...
Edit: I should say that Apple also publishes decent stuff. See the link here and the stuff linked at the bottom of the page. But note that now you're in UMA/TBDR territory; discrete GPUs work considerably differently: https://developer.apple.com/videos/play/wwdc2020/10602/
If anyone has more suggestions, please share.
Branch Education apparently decapped and scanned a GA102 (Nvidia 30 series) for the following video: https://www.youtube.com/watch?v=h9Z4oGN89MU. The beginning is very basic, but the content ramps up quickly.
ISAs have not changed, sure. Microarchitectures are completely different and basically no school is going to teach you anything useful for that.
I don't think we had out of order designs with speculative execution 40 years ago? That seems like a pretty huge change.
These are mostly internal implementation details; instructions still appear to resolve in order from the outside (with some subtle exceptions for memory reads/writes, depending on the CPU architecture). It may become important to know such details for performance profiling, though.
What has drastically changed is that you cannot do trivial 'cycle counting' anymore.
Not to step on your toes, but it should be said that instructions in a CPU "retire" in order.
They don't even always do that anymore.
Depends what you mean by "retire" but by the normal definition they always retire in order, even in OoO CPUs. You might be thinking of writeback.
Did you teach the UBC CS systems programming course in 1985?
Intro CompE class does a good bit for mechanical sympathy as well.
Decent intro, though nothing new.
A couple useful points it lacks:
* `switch` statements can be lowered in two different ways: using a jump table (an indirect branch, only possible when values are adjacent; requires a highly-predictable branch to check the range first), or using a binary search (multiple direct branches). Compilers have heuristics to determine which should be used, but I haven't played with them.
* You may be able to turn an indirect branch into a direct branch using code like the following:

      if (function_pointer == expected_function)
          expected_function();
      else
          (*function_pointer)();

* It's generally easy to turn tail recursion into a loop, but it takes effort to design your code to make that possible in the first place. The usual Fibonacci example is a good basic intro (see the sketch after this list); tree-walking is a good piece of homework.
* `cmov` can be harmful (since it has to compute both sides) if the branch is even moderately predictable and/or if the less-likely side has too many instructions. That said, from my tests, compilers are still too hesitant to use `cmov` even for cases where yes, I really know, dammit. OoO CPUs are weird to reason about, but I've found that, due to dependencies between other instructions, there are often some execution ports to spare for the other side of the branch.
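For the tail-recursion bullet above, here's a minimal C-style sketch (my own illustration, not from the article; function names are hypothetical) of the usual Fibonacci rewrite: the naive version does work after its recursive calls return, the accumulator version makes the call the final operation, and a compiler can then lower it to the loop shown last.

    /* Naive version: the addition happens AFTER the recursive calls return,
       so this is not tail recursion and cannot become a loop as-is. */
    unsigned long fib_naive(unsigned n) {
        return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
    }

    /* Accumulator version: the recursive call is the last operation,
       so a compiler can reuse the current frame (i.e. turn the call into a jump).
       Call as fib_tail(n, 0, 1). */
    unsigned long fib_tail(unsigned n, unsigned long a, unsigned long b) {
        return n == 0 ? a : fib_tail(n - 1, b, a + b);
    }

    /* What the tail call effectively compiles down to. */
    unsigned long fib_loop(unsigned n) {
        unsigned long a = 0, b = 1;
        while (n--) { unsigned long t = a + b; a = b; b = t; }
        return a;
    }

The point of the bullet is the middle step: you have to restructure the function (here, by threading accumulators through) before the compiler can do the loop conversion for you.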
Author here. You can only write so much before you start to lose the audience -- do you believe that anything you mentioned in your list is inherently lacking from my post?
Cool trick with the function pointer comparison!
Nice article! Always good to see easy-to-follow explainers on these kinds of concepts!
One minor nit: for the “odd corner case that likely never exists in real code” of taken branches to the next instruction, I can think of at least one example where this is often used: far jumps to the next instruction with a different segment on x86[_64], which are used to reload CS (e.g. on a mode switch).
Aware that’s a very specific case, but it’s one that very much does exist in real code.
Author here. I'll work this in. Thank you.
Good material, targeted at undergraduate or advanced high school level.
I've been slowly reading Agner Fog's resources. The microarchitecture manual is incredible, and, pertinently, I find the section on branch prediction algorithms fascinating:
https://web.archive.org/web/20250611003116/https://www.agner...
> A function always has a single entry point in a program (at least, I don’t know of any exceptions to this rule)
We can consider distinct entry points as distinct functions, but that doesn't mean different functions cannot overlap, sharing code in general and return statements in particular. Feasibility depends on calling conventions, which are outside the scope of the article.
Weird cookie policy on that blog?
What's weird about it? It's the standard Wordpress cookie policy.
I couldn't make a choice like I can on most sites.
I clicked "learn more" and then I got a "disagree" button. Not really the most intuitive flow but it's there...
Weird, I'll investigate tomorrow, thank you.
It's such a fascinating thing that most people just ignore. I too wrote (using AI) an article on branch prediction after I found out that most of my team members had only read about this in college but never understood it.