Interesting detail I didn't know about, and different to my initial interpretation too.
It doesn't seem too unreasonable to call the program counter, memory registers, etc. "special registers" since they have specific functions beyond simply storing data. Therefore I could imagine calling the aggregate of such registers in a CPU a "special register group". Perhaps those still using the term are following this line of thought.
In that they're not the GPRs (General Purpose Registers), this kinda sorta makes sense. Mostly the problem is that this terminology isn't actually used this way. Notice how the CPU (which we think of as one small component of e.g. a laptop or phone, but which the originators imagined as a furniture-sized piece of equipment responsible for the actual computing) is supposed to comprise these particular elements but not GPRs, nor an FPU, an MMU, or any of the other elements which are in fact typical today.
So this sort of "quiz" is bogus because it's basically checking whether you've rote memorized some particular words. People who know how a CPU works are confused and likely don't pass; those who've memorized the words but lack all understanding do pass. This is not useful education.
This was my thought as well. Any sufficiently complex modern CPU has some register where setting a bit enables something like a power-saving mode, with an interrupt mask in the lower bits to turn it off. Or something equally esoteric, but purposeful once you consider the application.
My recollection is that even the Atmega 8-bit microcontrollers have tons of special register groups around timers and interrupts.
Given the relative scarcity of "general purpose" registers on x86 32-bit CPUs, you could actually argue those are the special purpose registers.
> My recollection is that even the Atmega 8-bit microcontrollers have tons of special register groups around timers and interrupts.
Not precisely. AVR, like most embedded architectures, has a bunch of I/O registers which control CPU and peripheral behavior - but those are distinct from the CPU's general-purpose registers. The I/O registers exist in the same address space as main memory, and can only be accessed as memory. You can write an instruction like "increment general-purpose register R3", but you can't use that same syntax for e.g. "increment the UART baud rate register"; you have to load a value from that register to a GPR, increment it there, and store it back to the I/O register.
AVR is a bit weird in that the CPU's general-purpose registers are also mapped to main memory and can be accessed as if they were memory - but that functionality is rarely used in practice.
Getting back to your original point, x86 does have special-purpose registers - lots of them, in fact. They're accessed using the privileged rdmsr/wrmsr instructions.
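The AVR pattern described above — no ALU operations on I/O registers, so everything goes through load/modify/store — can be sketched with a toy model. This is purely illustrative: the register numbers and the I/O address here are made up, not real AVR ones.

```python
# Toy model of an AVR-like split between CPU registers and memory-mapped
# I/O registers. Addresses and register assignments are illustrative.
regs = [0] * 32          # general-purpose registers R0..R31
io = {0x29: 51}          # I/O space; 0x29 stands in for a UART baud register

def inc_reg(n):
    # Single-instruction style: the ALU can operate on a GPR directly.
    regs[n] = (regs[n] + 1) & 0xFF

def inc_io(addr, scratch=16):
    # No single instruction for this: it takes load -> modify -> store.
    regs[scratch] = io[addr]      # load I/O register into a GPR
    inc_reg(scratch)              # increment the GPR
    io[addr] = regs[scratch]      # store it back to the I/O register

regs[3] = 7
inc_reg(3)       # "increment R3": one step
inc_io(0x29)     # "increment the baud register": three steps via a GPR
```

The asymmetry is the point: the second operation is not atomic from the program's perspective, which is also why real embedded code often has to disable interrupts around such read-modify-write sequences.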
Author here if anyone has questions. This was discussed on HN a while ago: https://news.ycombinator.com/item?id=21333245
Did register groups evolve into what would become register files, register windows, and register banks?
The Honeywell's "special register groups" were rather unusual, since the processor switched tasks and registers after every instruction. (This is more commonly known as a barrel processor.) I don't see an evolutionary path to register files, windows, banks, etc, although the concept is a bit similar. Rather, I think the concept of having separate sets of registers developed independently several times.
That hardware multiprogramming concept showed up a few times. The CDC 6600 pretended to have 10 peripheral processors for I/O using that approach. National Semiconductor (I think) once made a part which had two BASIC interpreters on one CPU with two sets of registers.
TI had some IC where the registers were in main memory, with a hardware register to point to them. Context switching didn't require a register save, just pointing the register pointer somewhere else. There were stack machines such as the B5500 where the top items on the stack were in hardware registers but the rest of the stack was in memory. Floating point in the early years of floating point co-processor chips had lots of rather special registers. Getting data in and out of them was sometimes quite slow, so it was desirable to get data into the math co-processor and work on it there without bringing it back into memory if at all possible.
Lots of variations on this theme. There's a lot going on with special-purpose registers today inside superscalar processors, but the programmer can't see most of it.
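The TI scheme mentioned above — registers living in main memory, with a context switch reduced to repointing a single hardware pointer — can be sketched like this. It's a toy model in the spirit of the TMS9900's workspace pointer, not an accurate emulation; sizes and addresses are invented.

```python
# Toy model of a "workspace" register scheme: the CPU's registers live in
# main memory, and a workspace pointer (wp) says where they start.
mem = [0] * 256
wp = 0                      # task A's workspace starts at address 0

def reg_read(n):
    return mem[wp + n]

def reg_write(n, v):
    mem[wp + n] = v

reg_write(0, 123)           # task A writes its R0

# "Context switch": nothing is saved or restored; wp is just repointed.
wp = 32                     # task B's workspace
reg_write(0, 456)           # task B's R0 is a different memory cell

wp = 0                      # switching back restores task A's state for free
```

The trade-off, of course, is that every register access is now a memory access, which is why this design aged badly as CPUs got faster than memory.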
Are you thinking of TI's TMS9900 processor, which had registers in main memory? This processor was used in TI's TI-99/4 home computer. (Both the processor and the home computer were big failures.)
https://spectrum.ieee.org/the-inside-story-of-texas-instrume...
Yes. I've used a Marinchip 9900, from John Walker's company before Autodesk.
In case anybody else was curious: https://www.fourmilab.ch/documents/marinchip/boards/
I wonder if your article will revive the popularity of the phrase. https://books.google.com/ngrams/graph?content=special+regist...
Discussed at the time (of the article):
“Special register groups” invaded computer dictionaries for decades - https://news.ycombinator.com/item?id=21333245 - Oct 2019 (60 comments)
I love weird jargon. There's plenty of it in the terms we normally use: despite ROM technically being a type of random-access memory, everyone knows that when you say RAM you're excluding ROM.
One of my favorites is BLDC, which stands for BrushLess Direct Current and is used to describe motors that are actually driven by an alternating current at a variable frequency. Not to be confused with VFD, which in this case stands for Variable Frequency Drive and describes how a BLDC actually functions. The two are on rare occasions used interchangeably, but VFD is most often used for analog or open-loop controllers, while BLDC is used for VFDs that are digitally controlled. BLDCs are Brushless, not Brush-Less, and AC, not DC. I like to think the acronym stands for Bitwise Logic Driven Current, which follows the acronym, correctly describes how it works, and differentiates it from non-BLDC VFDs. Also, while VFD can mean Variable Frequency Drive, it is also used to mean Vacuum Fluorescent Display, and for Lemony Snicket fans it can mean a whole lot more. (https://snicket.fandom.com/wiki/V.F.D._%28disambiguation%29)
Uncommon jargon is even better, especially when it comes out of marketing departments and describes something ordinary. I have a rice cooker that has terms like "micom" and "fuzzy logic" on the labels. They describe microcontrollers and variables, respectively, which are found in all but the simplest electromechanical appliances.
It's normal for companies to trademark their names for common technology, like Nvidia and AMD trademarking their implementations of variable refresh rate as G-Sync and FreeSync, respectively. Some companies are more aggressive with their trademarks than others, such as Intel trademarking their simultaneous multithreading as Hyper-Threading, while AMD just calls it simultaneous multithreading.
Cessna used to have a bunch of random trademarked jargon in their ads in the 70's, from trademarking flaps to back seats.
Your comment reminds me of a couple things. On the topic of RAM, I was recently researching the vintage IBM 650 computer (1953). You could get it with Random Access Memory (RAM), but this RAM was actually a hard disk. I guess IBM considered the disk to be more random access than the rotating drum that provided the 650's main storage. One strange thing is that the disk had three independent read/write arms. Unlike modern disks, the arm could only access one platter at a time and had to physically retract to move to a different platter. Having three independent arms let you overlap reads and seeks.
As far as "fuzzy logic", there's a whole lot of history behind how that ended up in your rice cooker. In the 1990s, fuzzy logic was a big academic research area that was supposed to revolutionize control systems. The basic idea was to extend Boolean logic to support in-between values, and there was a lot of mathematics behind this, with books, journals, conferences, and everything. Fuzzy logic didn't live up to the hype, and mostly disappeared. But fuzzy logic did end up being used in rice cookers, adjusting the temperature based on how cooked the rice was, instead of just being on or off.
[1] https://bitsavers.org/pdf/ibm/650/22-6270-1_RAM.pdf
[2] https://en.wikipedia.org/wiki/Fuzzy_logic
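As a rough illustration of the idea, here's a minimal fuzzy controller in the rice-cooker spirit: triangular membership functions give in-between truth values for states like "cold" and "done", and the heater power is a weighted blend of each state's preferred output rather than a bang-bang on/off. All the numbers are invented for the sketch.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_c):
    # Degree to which the pot is "cold", "warm", or "near done" (made-up shapes).
    cold = tri(temp_c, -40, 20, 70)
    warm = tri(temp_c, 50, 80, 100)
    done = tri(temp_c, 90, 100, 110)
    # Each fuzzy state votes for a power level; defuzzify by weighted average.
    votes = [(cold, 1.0), (warm, 0.5), (done, 0.0)]
    total = sum(w for w, _ in votes)
    if total == 0:
        return 0.0
    return sum(w * p for w, p in votes) / total
```

At 20 °C this commands full power, around 80 °C it tapers to half, and near 100 °C it shuts off smoothly, with gradual transitions in between instead of a thermostat's hard switch.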
On the topic of RAM, I was recently researching the vintage IBM 650 computer (1953) ... but this RAM was actually a hard disk.
Actually it was a drum. No moving arms at all, but some tracks had multiple read/write heads to reduce access time.
As a student at Case, Knuth wrote SOAP III, an assembler for the IBM 650 that optimized the angular placement of instructions on the drum to minimize rotational latency.
The 650 main memory was a drum; but what IBM called Random Access Memory (and RAM) for this machine was a hard drive. As described in the Manual of Operation linked above. Here are a few quotes:
"Records in the IBM Random Access Memory Unit are stored on the faces of magnetic disks."
"The stored data in the Random Access Memory Unit are read and written by access arms."
"The IBM 355 RAM units provide extremely large storage capacity for data... Up to four RAM units can be attached to the 650 to provide 24,000,000 digits of RAM storage."
The main memory on the other hand: "The 20,000 digits of storage, arranged as 2000 words of memory on the magnetic drum..."
Actually, no :-) The 650 had a drum for main storage. The "355 Random Access Memory" was something different, an optional hard drive peripheral, which IBM literally called "RAM" (see the link above). This became the RAMAC system.
Right, you could add a RAMAC disk, but that was a very expensive option. Many universities had IBM 650s, but usually without the RAMAC.
Right. For a few years, 3-phase motors used as controlled servos were called "brushless DC" below about 1 HP, and "variable frequency drive" above 1 HP. Now everybody admits they're really all 3-phase synchronous motors.
Can of worms. My understanding is that VFD refers to any control system capable of varying speed and torque, usually through varying the supply frequency and voltage to the coils of an asynchronous AC induction motor. However, it is important to note that a VFD can also control synchronous motors such as BLDC and Permanent Magnet Synchronous Motors (PMSM), although in practice the term is usually applied to control systems for high power industrial AC asynchronous induction motors. It would therefore be incorrect to state "they're really all 3-phase synchronous motors", although some VFD control systems could be seen to emulate synchronous motors with asynchronous motors.
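For concreteness, the simplest thing a VFD does is scalar "V/f" control: hold the stator voltage roughly proportional to the commanded frequency so the motor's magnetic flux stays approximately constant, with a small voltage boost for starting torque at low speed. This sketch is hedged: the constants are illustrative round numbers, not taken from any real drive.

```python
# Scalar V/f control sketch. All constants are illustrative.
RATED_FREQ_HZ = 60.0    # rated supply frequency
RATED_VOLT = 460.0      # rated stator voltage at rated frequency
MIN_VOLT = 20.0         # low-speed voltage boost so the motor can start

def vf_command(freq_hz):
    """Return (voltage, frequency) a V/f drive would command."""
    freq_hz = max(0.0, min(freq_hz, RATED_FREQ_HZ))   # clamp to rated range
    volts = RATED_VOLT * freq_hz / RATED_FREQ_HZ      # keep V/f constant
    return max(volts, MIN_VOLT), freq_hz
```

Real drives layer a lot on top of this (slip compensation, vector control, field weakening above rated frequency), but the V-proportional-to-f relationship is the core of the "variable frequency" name.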
Since the jargon we've invented in technology derives from natural language, it often repurposes common terms as terms of art. In my opinion this leads to ambiguity, and I sometimes pine for the abstruse but more precise jargon from classical languages you can use in medicine (for example).
For example, how many things does "link" mean? "Process"? "Type"? "Local"? It makes people (e.g., non-technical people) think that they understand what I mean when I talk about these things but sometimes they do and sometimes they don't. Sometimes we use it in a colloquial sense, but sometimes we'd like to use it in a strict technical sense. Sometimes we can invent a new, precise term like "hyperlink" or "codec" but as often as not it fails to gain traction ("hyperlink" is outdated).
That's one reason we get a lot of acronyms, too. They're unconversational, but they can at least signal we're talking about something specific and rigorous rather than loose.