I remember hearing somebody talk about programming hot loops for either the PS3 or PS2 in Excel, to get a good handle on the concurrency question by putting the assembler in multiple columns next to each other.
That would be the PS2’s VUs, which had an upper and a lower pipe, and it was easier to write instructions for each in separate columns. Then in one SDK we received a program called vcl which took a single list of instructions and did all the pipelining for you, as well as optimizing loops and assigning registers automatically. It was a godsend.
I can't remember the details because we coded the SPUs in C, but the PS3 SPUs had odd and even pipelines that handled different instruction classes too.
Sounds like a Gantt chart with code might fit.
I love those
This is totally strange. I just got interested in the architecture of the PS3 and its emulators (on Android too) and now there is an article on HN...
The PS3 would have done better as a gaming console if the architecture hadn't been so hard to program for and it hadn't been forced to be a trojan-horse Blu-ray player.
I wonder if that architecture was designed to prevent emulation.
Because emulators still work insanely hard to make those games work, even today.
Doubt it. Avoiding jailbreaks, sure, to keep selling games, but no one cares about emulators.
I remember discussion at the time about how the PS3 was a uniquely difficult architecture to emulate. Was that true? Have those difficulties now been overcome? I see RPCS3 exists but I’ve no idea if it has done the difficult parts.
With sufficient thrust, pigs fly just fine. Eventually you can overcome any issues by throwing more CPU at the problem
Depends on your definition of "overcome". RPCS3 does emulate the architecture, and many games are playable on it, but it's still far from being perfect. Many games have stability issues due to timing/synchronization inaccuracies, for example.
I think those timing issues are part of what I imagine the difficult part to be.
I believe the PS3 was designed to make it difficult to emulate.
Why go through the pain of designing such a thing? It makes life difficult for developers, and I don't think it would really have resulted in better performance.
So, I'd have to dig through some older notes I have; however, some of this information seems inaccurate based upon my own interpretation of the specs (and from writing code... specifically, but not limited to, the PowerPC part). A suggestion from me is to provide sources, and also maybe an epub of this.
Please see this: https://github.com/flipacholas/Architecture-of-consoles
> A suggestion from me is to provide sources, and also maybe an epub of this
What do you mean?
It seems they missed this. https://payhip.com/copetti
That was a small fundraiser started to convert all articles into epubs, finished in 2022
I did a bit of dev on the PS3 and I remember there was a small memory on the chip, like 256K, that was accessible to the programmer.
I always found this very appealing, having blazing fast memory under programmer control, so I wonder: why don't we have that on other CPUs?
It's kind of a neat idea for a fixed-target CPU like on a games console, but for a general-purpose CPU range you generally don't want to reveal too much behind the curtain like that. What if you did a new model with a bigger scratchpad? Would existing software just ignore it? Or a budget model with less? Do you get a crash, or just a slow fallback? The system where the CPU magically makes the cache work is better when the CPU and the software aren't a fixed pairing.
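(As an aside on the "software adapts" side of that argument: on a general-purpose system you can at least ask for the cache sizes at runtime and size your working set to fit, which is exactly the graceful scaling a fixed-size scratchpad makes awkward. A minimal sketch, assuming glibc's _SC_LEVEL*_CACHE_SIZE sysconf keys:)

    #include <stdio.h>
    #include <unistd.h>

    /* Pick a blocking size from whatever cache sizes the platform reports.
     * The _SC_LEVEL*_CACHE_SIZE keys are glibc-specific and may return 0 or -1
     * elsewhere, hence the fallbacks. */
    static long cache_size_or(int name, long fallback) {
        long v = sysconf(name);
        return (v > 0) ? v : fallback;
    }

    int main(void) {
        long l1d = cache_size_or(_SC_LEVEL1_DCACHE_SIZE, 32 * 1024);
        long l2  = cache_size_or(_SC_LEVEL2_CACHE_SIZE, 256 * 1024);

        long tile = l1d / 2;   /* half of L1D, so an input and an output block both fit */
        printf("L1D=%ld B, L2=%ld B -> tile size %ld B\n", l1d, l2, tile);
        return 0;
    }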
> why don't we have that on other cpus
Pure speculation from my side, but I'd think that the advantages over traditional big register banks and on-chip caches are not that great, especially when you're writing 'cache-aware code'. You also need to consider that the PS3 was full of design compromises to keep cost down; e.g. there simply might not have been enough die space for a cache controller for each SPU, or the die space was more valuable spent on a few more kilobytes of static scratch memory instead of the cache logic.
Also, AFAIK some GPU architectures have something similar, like per-core static scratch space; that's where restrictions such as "uniform data per shader invocation may be at most 64 KB" come from on some GPU architectures, etc...
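(Those limits are queryable on modern APIs; a minimal sketch using Vulkan, assuming a working loader and at least one device, with error handling mostly omitted:)

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    /* Print the per-workgroup shared-memory and uniform-range limits of the
     * first GPU the loader reports. */
    int main(void) {
        VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                                  .apiVersion = VK_API_VERSION_1_0 };
        VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                    .pApplicationInfo = &app };
        VkInstance instance;
        if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) return 1;

        uint32_t count = 1;              /* just take the first device */
        VkPhysicalDevice dev;
        vkEnumeratePhysicalDevices(instance, &count, &dev);
        if (count == 0) return 1;

        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        printf("%s: %u bytes shared per workgroup, %u bytes max uniform range\n",
               props.deviceName,
               props.limits.maxComputeSharedMemorySize,
               props.limits.maxUniformBufferRange);

        vkDestroyInstance(instance, NULL);
        return 0;
    }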
Sounds a little like the 10MB of EDRAM on the Xbox 360, although I think it was only accessible by the GPU.
https://en.wikipedia.org/wiki/Xbox_360_technical_specificati...
We call it "cache" these days, don't we? And they've become massive: the Apple M series and AMD Strix series have 24/32MB of L3 cache.
This is where a lot of their performance comes from.
> why don't we have that on other cpus?
We do, it's called "cache" or "registers".
The TI-99/4A had 256 BYTES (128 words) of static RAM available to the CPU. All accesses to the 16K of main memory had to be done through the video chip. This made a lot of things on the TI-99/4A slow, but there were occasional bits of brilliance where you see a tiny bit of the system it could've been. Thanks to the fast SRAM and 16-bit CPU, the smooth scrolling in Parsec was done entirely in software, since the TMS9918A video chip lacks scroll registers entirely.
On the PS2 there was a very small memory area, called the scratchpad, that was very quick to access. The rough idea was to DMA data in and out of the scratchpad, and then do work on the data there, without creating contention with everything else going on at the same time.
In general most developers struggled to do much with it; it was just too small (combined with the fiddliness of using it).
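(The general shape of that scratchpad pattern, as a plain C sketch: memcpy stands in for the DMA transfer, so this version doesn't actually overlap anything, it just shows the double-buffered structure the hardware made profitable.)

    #include <stddef.h>
    #include <string.h>

    /* Double-buffered scratchpad pattern. SPR_HALF is half of a 16KB
     * scratchpad; on real hardware the memcpy would be an asynchronous DMA
     * running while process_chunk() executes. */
    #define SPR_HALF (8 * 1024)

    static unsigned char spr[2][SPR_HALF];   /* stand-in for the two scratchpad halves */

    static long process_chunk(const unsigned char *p, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++) sum += p[i];   /* placeholder "work" */
        return sum;
    }

    long process_stream(const unsigned char *src, int nchunks) {
        long total = 0;
        memcpy(spr[0], src, SPR_HALF);                        /* prime buffer 0 ("DMA in") */
        for (int i = 0; i < nchunks; i++) {
            int cur = i & 1, nxt = cur ^ 1;
            if (i + 1 < nchunks)                              /* kick the "DMA" for chunk i+1 */
                memcpy(spr[nxt], src + (size_t)(i + 1) * SPR_HALF, SPR_HALF);
            total += process_chunk(spr[cur], SPR_HALF);       /* work on the resident chunk */
        }
        return total;
    }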
PS2 programmers were very used to thinking in this way, as it's how the rendering had to be done. There are a couple of vector units, and one of them is connected to the GPU, so the general structure most developers followed was to have 4 buffers in the VU memory (I think it only had 16KB of memory or something pretty small), and essentially in parallel you'd have:
1. New data being DMAd in from main memory to VU memory (into say buffer 1/4).
2. Previous data in buffer 3/4 being transformed, lit, coloured, etc and output into buffer 4/4.
3. Data from buffer 2/4 being sent/rendered by the GPU.
Then once the above had finished it would flip, so you'd alternate like:
Data in: B1 (main memory to VU)
Data out: B2 (VU to GPU)
Data process from: B3 (VU processing)
Data process to: B4 (VU processing)

Data in: B3
Data out: B4
Data process from: B1
Data process to: B2
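(A minimal, runnable sketch of that rotation in plain C, just to make the per-batch buffer roles explicit; the kick_* functions are stubs that print what the real DMA/VU/GIF kicks would be doing, not actual SDK calls.)

    #include <stdio.h>

    /* Quad-buffer rotation sketch: each batch, one buffer receives the upload,
     * one pair is the transform input/output, and one is being read by the GPU. */
    enum { NUM_BUFS = 4 };

    static void kick_upload(int b)        { printf("  DMA in:        B%d\n", b + 1); }
    static void kick_vu(int src, int dst) { printf("  VU transform:  B%d -> B%d\n", src + 1, dst + 1); }
    static void kick_gif(int b)           { printf("  send to GPU:   B%d\n", b + 1); }

    int main(void) {
        for (int batch = 0; batch < 4; batch++) {
            int in  = (batch * 2 + 0) % NUM_BUFS;   /* main memory -> VU mem      */
            int out = (batch * 2 + 1) % NUM_BUFS;   /* VU mem -> GPU              */
            int src = (batch * 2 + 2) % NUM_BUFS;   /* VU program reads from here */
            int dst = (batch * 2 + 3) % NUM_BUFS;   /* VU program writes here     */

            printf("batch %d:\n", batch);
            kick_upload(in);
            kick_vu(src, dst);
            kick_gif(out);
            /* on real hardware these three run concurrently, gated by DMA interlocks */
        }
        return 0;
    }

Batch 0 prints DMA in B1, transform B3 -> B4, send B2; batch 1 prints DMA in B3, transform B1 -> B2, send B4; and so on, matching the alternation above.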
The VU has two pipelines running in parallel (float and integer), and every instruction takes an exact number of cycles; if you read a result before it is ready you stall the pipeline, so you had to painstakingly interleave and order your instructions to process three verts at a time and be very clever about register pressure, etc.
There is obviously some clever syncing logic to allow all of this to work, allowing the DMA to wait until the VU kicks off the next GPU batch etc.
It was complex to get your head around, set up all the moving parts, and debug when it went wrong. When it goes wrong it pretty much just hangs, so you had to write a lot of validators. On PS2 you basically spend the frame building up a huge DMA list, and then at the end of the frame you kick it off and it renders everything: the DMA will transfer VU programs to the VU, upload data to the VU, wait for it to process and upload the next batch, upload the next program at the end, upload settings to GPU registers, basically everything. Once that DMA is kicked off no more CPU code is involved in rendering the frame, so you have a MB or so of pure memory transfer instructions firing off, and if any of them are wrong you are in a world of pain.
Then throw in, just to keep things interesting, the fact that anything you write to memory is likely still sitting in the caches, and DMA doesn't see caches, so extra care has to be taken to make sure caches are flushed before using DMA.
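(Roughly what the frame flow looked like, as a hedged sketch; dma_tag_t, append_ref_tag(), writeback_dcache() and dma_kick_chain() are hypothetical placeholders, not the real tag encoding or SDK calls, which I won't reproduce from memory.)

    #include <stddef.h>

    /* Frame-flow sketch: build one big DMA chain during the frame, write dirty
     * cache lines back to RAM, then kick the chain and get out of the way. */
    typedef struct { unsigned long long lo, hi; } dma_tag_t;   /* stand-in for a 128-bit tag */

    extern dma_tag_t *append_ref_tag(dma_tag_t *next_slot, const void *data, int qwords);
    extern void writeback_dcache(const void *addr, size_t bytes);  /* flush dirty lines to RAM */
    extern void dma_kick_chain(const dma_tag_t *list);

    void build_and_kick_frame(dma_tag_t *list, const void *verts, int vert_qwords) {
        dma_tag_t *p = list;
        p = append_ref_tag(p, verts, vert_qwords);   /* plus VU programs, GS register
                                                        settings, ... all as chain tags */

        /* The DMA controller reads RAM directly and never looks in the data
         * cache, so anything still sitting there is invisible to it. */
        writeback_dcache(verts, (size_t)vert_qwords * 16);
        writeback_dcache(list, (size_t)((const char *)p - (const char *)list));

        dma_kick_chain(list);   /* from here on, no CPU code touches the frame */
    }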
It was a magical, horrible, wonderful, painful, joyous, impossible, satisfying, sickening, amazing time.
> The EIB is made of twelve nodes called Ramps, each one connecting one component of Cell... Having said that, instead of recurring to single bus topologies (like the Emotion Engine and its precursor did), ramps are inter-connected following the token ring topology, where data packets must cross through all neighbours until it reaches the destination (there’s no direct path).
I knew IBM was involved in the design of the Cell BE, but I had no idea some successor of IBM's token ring tech (at least the concept of it) lived on in it. I'm sure there's other hardware (probably mainframe hardware) from 2006 and before with similar interconnects.
The EIB has nothing to do with 1980s Token Ring and this is arguably a mistake in the article. It's just a ring topology.
I suspect it’s an attempt at a metaphor that isn’t clearly marked as such.
Can it run deep learning workloads?
The PS3 was used a few times in clusters; some NN work was done on it back in the day. My understanding (somewhat echoed in TFA) is that when programming Cell, you really needed to think about communication patterns to avoid quickly running into memory bandwidth limitations, especially given the memory hierarchy and bus quirks.
https://open.clemson.edu/all_theses/629/
For a while, it was a major player in protein folding. I remember the PS3 was particularly well suited to that sort of work.
For its day, it packed a lot of compute into a cheap package, so long as you could do something useful with a data set that fit into 256KB, the size of the local memory buffer on each SPE. If you overflowed that, the anemic system bandwidth would make it suck. Protein folding was an example of a problem that back then used tons of compute but could fit into a small space.
It was the biggest contributor to Folding@home at one point. The client came bundled with the PS3 and played relaxing music and showed a heat map of the world's PS3 compute nodes as it went. There was also https://en.wikipedia.org/wiki/PlayStation_3_cluster
They've also been used for crypto mining/cracking.
See also QPACE https://en.wikipedia.org/wiki/QPACE
With enough effort you could definitely do it. Just remember it is a device that came out in 2006 with 256MB of system RAM and 256MB of VRAM; at best you're running a quite small model after a lot of work trying to port some inference code to the CELL processor. Honestly it does sound like a cool excuse to write code for the CELL, but don't expect amazing performance or anything.
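For rough scale (back-of-the-envelope arithmetic, not a benchmark): 256MB of system RAM holds at most around 120 million fp16 parameters (2 bytes each, roughly 240MB) before you've budgeted anything for activations, the OS, or your own code, so it's tiny-model territory no matter how well the SPUs are used.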
It's a nearly 20 year old gaming console. Even if you could port a deep learning workload to run efficiently on the Cell architecture, it would be thoroughly outclassed by a modern cell phone (to say nothing of a desktop computer).
Eugh, maybe?
The PS3 only had 256MB of main memory, so you'd be pretty limited there. Memory bandwidth, great at the time, is pretty poor by today's standards (25 GB/s).
https://en.wikipedia.org/wiki/PlayStation_3_cluster