The video is really well done and interesting, and makes you think the right way about sub-pixels. Though it does seem a little amusing to me that the process Japhy used is basically exactly the same process the display is already using for antialiasing fonts, he’s just doing it manually. We already have sub-pixel art, pretty much all the time. ;) I haven’t tried, but in theory the hidden message thing at the end can be done purely by reducing your font size to sub 1-pt.
There was also a period in the 2000s when icon graphics were designed with subpixel effects in mind. This started when LCDs displaced CRTs (previously the design took the blurriness of CRTs into account) and lasted until the mobile revolution and high-DPI displays started favoring a more scalable solution (at the cost of less nice icons).
I highly recommend people click through the rest of this guy's videos. Lots of fun and playful experiments. :)
This reminded me of the first(?) sub-pixel typeface (by Miha as posted on Typophile in 2009) https://adamnorwood.com/notes/typophile-user-miha-is-doing-s...
And the first sub-pixel font Millitext from 2008, as mentioned in another comment.
The Apple II (1977) did an early version of this; it essentially had purple and green addressable colors in each pixel. With both on in a single pixel you got white text, but you could also leverage two adjacent pixels, one purple and one green, to produce a half-pixel offset that gave a smoother diagonal line than the nominal pixel grid allowed.
https://en.wikipedia.org/wiki/Subpixel_rendering
Green is perceived as brighter by humans than red and blue, so perhaps you need to dial down its level when displaying it.
Standard colorspaces already take this into account :)
It's important in rendering to take your colorspaces seriously, from the engine driving them to the artist making the art. There are some crazy optimizations that take your perception of colour into account too; texture compression relies on this to some extent (blue is given fewer bits, for example).
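For concreteness, here is a minimal sketch of how "standard colorspaces take this into account", using the Rec. 709 / sRGB relative-luminance coefficients as the example (the function name is mine; real pipelines apply these weights to linear-light values):

```ts
// Rec. 709 / sRGB luma weights: green dominates, blue contributes least.
// Illustrative sketch; inputs are assumed to be linear RGB in 0..1.
function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

console.log(relativeLuminance(0, 1, 0)); // 0.7152 -- pure green reads as bright
console.log(relativeLuminance(0, 0, 1)); // 0.0722 -- pure blue reads as dim
```

The same weighting is why chroma-heavy, blue-leaning data can be stored with fewer bits without an obvious perceptual cost.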
I've played with this quite a bit over the past few years and want to clarify that the colors of light from the display only combine in your visual system (retina/brain) so that you perceive colors other than red, green, and blue.
The interesting bit here, for me, is that the eyes perceive _as almost the same color_ the yellow of a yellow lemon and the yellow made by the combination of red and green from a screen. I find that convergence fascinating.
That is because we also can't directly perceive the yellow from that lemon, i.e. we don't possess cones that have their peak sensitivity at 570 nm (yellow). Instead, yellow is created in our brain by combining the data from the M and L cones: if both signal at about equal intensity, our brain interprets that as yellow. So the perceived yellow can actually be 570 nm, or 540 nm (yellow-green) plus 600 nm (orange), or similar. This only stops working if the distance between the two wavelengths is too great.
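A minimal sketch of that combining step, using made-up Gaussian cone sensitivities (the peak wavelengths and widths below are illustrative assumptions, not real CIE data): solving a 2x2 system finds a 540 nm + 600 nm mixture that produces the same M/L cone responses as pure 570 nm light.

```ts
// Illustrative Gaussian stand-ins for M ("green") and L ("red") cone sensitivity.
const gauss = (lambda: number, peak: number, width: number) =>
  Math.exp(-((lambda - peak) ** 2) / (2 * width ** 2));
const M = (l: number) => gauss(l, 545, 40);
const L = (l: number) => gauss(l, 570, 45);

// Cone responses to pure 570 nm light at unit intensity.
const target = [M(570), L(570)];

// Find intensities a, b with a*[M(540), L(540)] + b*[M(600), L(600)] = target.
const [m11, m12, m21, m22] = [M(540), M(600), L(540), L(600)];
const det = m11 * m22 - m12 * m21;
const a = (target[0] * m22 - m12 * target[1]) / det;
const b = (m11 * target[1] - target[0] * m21) / det;

// Both stimuli now yield identical M/L responses: a metamer pair.
console.log(`mix ${a.toFixed(2)} x 540 nm + ${b.toFixed(2)} x 600 nm looks like 570 nm`);
```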
These two different yet same yellows have a fancy name too: metamers. [1] I think it’s super interesting too, and you can even create metamers out of non-metamers using the right light sources. As a trichromat subject to the same metamers as most humans, I want to know what it feels like to be a tetrachromat who can see the differences between colors I can’t tell apart. Or a bi/mono-chromat (aka color blind) where metamers really start to stack up.
https://en.wikipedia.org/wiki/Metamerism_(color)
Sure, but that is true at the "macro" scale too of (for instance) discrete RGB LEDs, or any other case where color is simulated by emitting red, green, and blue light.
There is nothing unique in that regard about the sub-pixels except that they're small, right?
Correct. If you ever have the chance, go up close to one of those enormous displays you'd see at a convention center or hanging above a sports arena. They often have LEDs about the size you’d expect when you think of a typical LED.
But with those you’re far enough away that the apparent size of them is similar enough to the sub-pixels in your monitor, so your visual system combines them.
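As a rough worked example (the sizes and distances are illustrative assumptions, not measurements): a ~0.1 mm monitor subpixel viewed from 50 cm and a ~3 mm stadium LED viewed from 15 m subtend about the same visual angle.

```ts
// Small-angle approximation: angular size ~ physical size / viewing distance.
const angularSize = (sizeMeters: number, distanceMeters: number) =>
  sizeMeters / distanceMeters;

console.log(angularSize(0.0001, 0.5)); // monitor subpixel: ~0.0002 rad
console.log(angularSize(0.003, 15));   // stadium LED:      ~0.0002 rad
```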
Yes, I just wanted to emphasize that the r, g, b light does not really 'combine' but is instead perceived by people as the additive or secondary colors.
For example, if a display is emitting red and green light then the light reaching a viewer's retina will be red and green light, not yellow light.
There’s no way to tell the difference.
RGB light does actually ‘combine’, physically, before we ‘perceive’ the color. It’s because we (most of us) have 3 sensors each with specific wavelength response functions. The physical output of those sensors is the same for a red+green combination as it is for a yellow combination, and therefore the color has been combined as part of measuring the color.
>> RGB light does actually ‘combine’, physically, before we ‘perceive’ the color.
We might be arguing semantics but I'm going to say no, they do not physically combine before we perceive the color. This is supported by the link you provided on metamerism in a related comment.
It’s due to having only 3 types of sensors. They combine as a byproduct of capturing the photons, and while this is indeed part of the visual system, it happens in the cones themselves, before the rest of the retina and the brain get hold of the signal. The signal itself inherently represents an already-combined color.
There's a generator for those hidden subpixel message textures:
https://jsbin.com/korotaluso/edit?js,output
You can do very slow yet smooth scrolling for games by using the subpixels - at least on platforms like the Game Boy Color where you know the pixel geometry. Of course you need to use 'whole' pixels (composed of R, G, and B) so the perceived color doesn't change.
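A minimal sketch of the idea, assuming a horizontal RGB stripe layout (the function name is mine): shifting a grayscale row left by one subpixel keeps each pixel's R and G at the old value while B borrows from the neighbor, giving 1/3-pixel scroll steps at the cost of slight color fringing on edges.

```ts
// Hedged sketch: scroll a grayscale row left by one *subpixel* (1/3 pixel)
// on a display with horizontal R,G,B stripes.
type Pixel = [number, number, number]; // [R, G, B], 0..255

function scrollLeftOneSubpixel(gray: number[]): Pixel[] {
  // R and G of pixel i keep gray[i]; B of pixel i takes gray[i + 1].
  return gray.map((v, i): Pixel => [v, v, gray[i + 1] ?? 0]);
}

// Example: a bright dot moves 1/3 of a pixel to the left.
console.log(scrollLeftOneSubpixel([0, 255, 0]));
// [[0, 0, 255], [255, 255, 0], [0, 0, 0]]
```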
In the games I make I always track positions at a subpixel level and then I can choose whether or not to draw things with that accuracy depending on my needs. I might want a shape to move more smoothly at a sub-pixel level, or round things to whole pixels. Both ways are useful for different reasons or in different situations.
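A minimal sketch of that pattern (class and field names are mine, not from any particular engine): positions advance with fractional precision, and each draw call decides whether to keep the subpixel accuracy or snap to the whole-pixel grid.

```ts
// Hedged sketch of subpixel position tracking.
class Sprite {
  x = 0;        // position in pixels; the fractional part is the subpixel offset
  y = 0;
  vx = 0.35;    // velocity in pixels per frame; can be well under 1
  vy = 0;

  update() {
    this.x += this.vx;
    this.y += this.vy;
  }

  // Each draw call chooses: keep subpixel accuracy, or round to whole pixels.
  drawPosition(snapToPixels: boolean) {
    return snapToPixels
      ? { x: Math.round(this.x), y: Math.round(this.y) }
      : { x: this.x, y: this.y };
  }
}
```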
I do think there's a really interesting alternate world in which we never invented the colorpixel, and image formats instead were represented by an array of grayscale pixels with a repeating color mask.
Side note: ClearType was quite a nice improvement in Windows when it came out https://en.wikipedia.org/wiki/ClearType
(also based on the subpixel antialiasing demonstrated in the video, just not mentioned there)
I hated the color fringes of ClearType (yes, even after running the optimizer), until my eyes became bad enough that the fringes stopped being so prominent ;). It’s still worse for colored text, though, as in syntax highlighting.
I don't understand the last bit where he hid a text message in the sub-pixels. How did that work?
The rationale is that you can control the position of the lit sub-elements that make up a letter by choosing the color of the full pixel.
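A minimal sketch of that idea, assuming a standard horizontal RGB stripe layout (the helper name is mine): every three columns of a 1-bit glyph bitmap pack into one pixel's R, G, and B channels, so at normal size the text looks like colored speckles, but each lit channel is one tiny stroke of a letter at subpixel resolution.

```ts
// Hedged sketch: pack one row of a 1-bit glyph into pixels on an RGB-striped display.
// Glyph column 3i drives pixel i's R subpixel, column 3i+1 its G, column 3i+2 its B.
type RGB = [number, number, number];

function packGlyphRow(bits: number[]): RGB[] {
  const pixels: RGB[] = [];
  for (let i = 0; i < bits.length; i += 3) {
    pixels.push([
      bits[i]     ? 255 : 0, // R subpixel
      bits[i + 1] ? 255 : 0, // G subpixel
      bits[i + 2] ? 255 : 0, // B subpixel
    ]);
  }
  return pixels;
}

// A 6-column row "..##.." becomes two oddly colored pixels that, at subpixel
// scale, read as a 2-subpixel-wide vertical stroke.
console.log(packGlyphRow([0, 0, 1, 1, 0, 0])); // [[0, 0, 255], [255, 0, 0]]
```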
http://www.msarnoff.org/millitext/
https://millitext.gk.wtf/
I’m the original millitext creator and I’m very happy to see that someone made a new web-based generator. I wrote my original one in Ruby in 2008 and then stopped writing Ruby soon after that, leaving it to decay as the language’s ecosystem evolved.
It’s what might happen if you render ClearType text at 1px wide.