> who really cares if “æ” is written as “ae”?
Nitpicking, but if you're writing about text rendering you should know:
Yes, ligatures are really about presentation and not semantics. For example ﬁ (U+FB01) means the same thing as fi; it just looks neater in some situations.
æ (U+00E6) is not a ligature; it's a mostly obsolete character, with different semantics (or phonetics) than ae.
For example, for purely typesetting beauty, your word processor might substitute the ligature ﬁ for the two letters fi (which can f* search, and I resent both the ligature and lazy search function developers). It would never substitute æ for ae; that would misspell the word as much as substituting an o.
> æ (U+00E6) is not a ligature; it's a mostly obsolete character, with different semantics (or phonetics) than ae.
Reading that a letter in my alphabet is mostly obsolete feels really weird. No rebuttal, just a comment.
> It would never substitute æ for ae; that would misspell the word as much as substituting an o.
While that is correct, a lot of other systems actually do this exact substitution. If your name contains æ it will be substituted with ae in passports, plane tickets and random other systems throughout your life.
My own username on this website is an example of a similar substitution. The oe should be read as the single character ø.
> _lazy_ search function developers
doing non-ASCII first needs awareness, and then it quickly becomes tricky (encodings, yay).
getting combining characters and/or homoglyphs right is hard.
and if you're still bored: have fun with Unicode's confusables.txt ...
with this in mind I dare to give them lazy bums the benefit of the doubt and rather call them something between naïve and scared.
> mostly obsolete
The Nordic languages beg to differ!
Keep Swedish out of this, you dirty Danes!
Edit: Checked out your profile, correcting myself: "you silly north-Danes!"
A few more, more about editing than just rendering:
The style change mid-ligature has a related problem. While it might be reasonable not to support a style change in the middle of a ligature, you still want to select individual letters within ligatures like "ff", "ffi" and "fl". The problem, just like with the color change, is that neither the text shaper nor the program rendering the text knows where each individual letter within the ligature glyph is positioned. The font simply lacks this information.
From what I have seen, most programs which support it use a similar approximation to what Firefox uses for coloring: split the ligature into equal parts. That works well enough for something like "fi" or "fl", not so much for some of the ligatures in programming fonts that combine >= into ≥.
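A minimal sketch of that equal-split approximation, assuming only the ligature's total advance width and its number of source characters are known (both names below are made up, not any real shaper API):

```rust
/// Approximate caret positions inside a ligature by splitting its advance
/// width into equal parts, one per source character. Good enough for "fi"
/// or "fl"; poor for ligatures whose components have very different widths,
/// like ">=" drawn as "≥".
fn approximate_carets(ligature_advance: f32, char_count: usize) -> Vec<f32> {
    let step = ligature_advance / char_count as f32;
    // Caret offsets after the 1st .. (n-1)th character, relative to the
    // ligature's origin; the caret after the last character is simply the
    // full advance, which the caller already has.
    (1..char_count).map(|i| i as f32 * step).collect()
}

fn main() {
    // An "ffi" ligature 18 units wide: approximate carets at 6.0 and 12.0.
    println!("{:?}", approximate_carets(18.0, 3));
}
```

Coloring half a ligature with this scheme is the same idea: clip the glyph at those offsets and paint each slice with its own style.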
There are even worse edge cases in scripts for other languages. There are ligatures which look roughly like the two characters that formed them, side by side but in reverse order. There are also ligatures in CJK fonts which combine four characters in a square.
Backspace erases characters at a finer granularity than it's possible to select them at.
With regards to LTR/RTL selection weirdness, I recently discovered that some editors display a small flag on the cursor showing the current direction when it's in mixed-direction text.
> some editors display a small flag on the cursor showing the current direction
I was amazed to see IDEA/RustRover doing exactly this [1] when I added BIDI texts to my code to test things out.
[1] https://i.imgur.com/Qqlyqpc.png (image taken from IDEA issue tracker)
I cannot imagine a use case where I would want to do a style change mid ligature. Can someone smarter than I am give a reasonable example of doing so?
The user may not think of the letters as connected. Suppose the user wanted to write "stuffing" and bold the letters "ing". The user may well not realize that the font thinks of "ffi" as anything other than three separate letters.
Ligatures like in "stuffing" isn't the worst case for mid ligature styling. You could introduce a split between stuff and ing preventing the forming of ligature and it would likely still look reasonable. That's actually one of the most straight forward things you can do for text layout, split text into runs with same style and then shape each run separately. That's also how you end up with mess shown in Safari screenshot. In non English scripts where ligatures are less optional things are trickier. Not applying the ligature can significantly affect the look. In some fonts/scripts where ligatures are used for diacritic marks or syllable based combinations of characters into single glyph.
Another aspect of mid-ligature color changes is that if you allow color changes you probably allow any other style change, including font size or the font itself, which in turn can have a completely different size and glyph shape for the corresponding ligature, and even a different set of ligatures. That makes drawing the corresponding characters as a single ligature impossible.
One of the most warranted and also one of the trickiest cases for wanting a mid-ligature style change is language education materials. You might want to highlight individual subcomponents of complex character combinations to explain the rules behind them. For these cases the Firefox splitting hack is not good enough. Although it seems like in the current version of Firefox on Linux न्हृे is handled much better than in the 2019 screenshot. This might be as much an improvement in the font as in Firefox and its underlying libraries. At the end of the day, if the font draws a complex character combination as a single shape, there is nothing the font rendering software can do to correctly split it into logical components.

Instead of ligatures you can draw such characters as multiple overlapping and appropriately placed glyphs (possibly in combination with context-aware substitutions). Kind of like zalgo text: no font has separate glyphs for each letter with every combination of 20 stacked diacritic marks. That way the information about the components isn't lost, making it technically possible to correctly style each of them, but it's still not easy.
Excellent example! Thanks.
> A few more, more about editing than just rendering
Right after TFA was published, someone put together the text editing version: Text editing hates you too: https://lord.io/text-editing-hates-you-too/
(2019) Popular in:
2023 (290 points, 119 comments) https://news.ycombinator.com/item?id=36478892
2022 (399 points, 154 comments) https://news.ycombinator.com/item?id=30330144
2019 (542 points, 170 comments) https://news.ycombinator.com/item?id=21105625
> Text is complicated
So true!
> and english is bad at expressing these nuances.
I think English is a terrible shitpile of grammar and syntax. I'm very impressed that anyone who speaks another language natively can get good at it.
But I'm interested in the notion that it lacks nuance to describe the intricacies of text rendering. Can someone tell me where that would apply?
> I think English is a terrible shitpile of grammar and syntax
Spoken languages are like programming languages: there are the ones people complain about and the ones nobody uses.
And they start with simpler, regular rules and get more complex over time as words are imported and reimported, pronunciations shift, grammatical rules morph and evolve (often to simplify grammatical genders and cases) while leaving their mark, and spellings change.
For example, goose/geese is the result of the plural and singular forms taking different paths through the Great Vowel Shift, resulting in the different vowels in the modern forms.
There's also evidence that Proto-Indo-European had laryngeal consonants that have disappeared in all modern languages derived from it [1], but have left their mark on the descendant languages.
[1] https://en.wikipedia.org/wiki/Laryngeal_theory
Then there are also the lovely instances of deliberate misspellings / insertion of letters into words that never had them in English.
E.g. receipt, which had the p only in Latin but had long lost it by the time Old French brought the word to Britain.
> the notion that it lacks nuance to describe the intricacies of text rendering
I took this to mean that any non-domain-specific language may be bad at describing that domain, e.g. why physicists, mathematicians, chemists, etc. have a common symbology for their disciplines, or why programming languages exist. That is, not so much that English is uniquely bad among written human languages for conveying these topics, but that any non-specialized language may be.
I think the author did a fair job, though I lack the domain experience to guess at where the misconceptions might lie.
I had much the same conclusions. The author did a perfectly good job of explaining the issues.
> I'm very impressed that anyone who speaks another language natively can get good at it.
From my completely anecdotal observations, native speakers are the worst at English. They struggle with homophones, prepositions, tenses and apostrophes, confuse the meanings of words, and I could go on and on.
English grammar is easier to learn by reading and writing than by speaking, which is what most native speakers do.
Its/it's, there/their/they're, who's/whose, expressions like "a lot" and "a while", and confused words like definitely and defiantly are the first that come to mind. See if you are better than a foreigner.
As an example of this, native German speakers are often better at knowing when to use "who" vs "whom" because German grammar rules are in some ways a superset of English grammar rules.
As a native english speaker, i did try to learn german but eventually gave up. A language sprinkled with "learn by wrote" gender prefixes for every item is just not worth learning. I did have an issue with the numbers being back to front once you get to the unit value but then someone pointed out english does that too for the values 13-19... so there ya go.
Learn by ‘rote’.
> A language sprinkled with "learn by wrote" gender prefixes for every item is just not worth learning.
Bantu languages, which cover much of sub-Saharan Africa, have many noun classes ("genders"), sometimes as many as 20. You have to learn all sorts of prefixes for each noun class, depending on their grammatical relationship to the noun.
However, it's really not so bad. Once you get the hang of the noun classes, it actually makes picking up an ear for the language faster. Of course this is more true the more consistent the language is in applying its internal rules.
I've tried to ask this before in various contexts and I've never been able to find an answer but maybe commenters on a post like this would know.
I like the way that CJK fonts render without anti-aliasing on Windows. I want to know why, and how to cause Windows to render a non-CJK font of my choosing in this aliased style. I am not opposed to hex-editing or otherwise modifying the font if that's necessary. I've never been able to find information about the mechanism or how it's triggered.
https://int10h.org/blog/2016/01/windows-cleartype-truetype-f...
http://www.electronicdissonance.com/2010/01/raster-fonts-in-...
Just disable ClearType and all your text will be uniform :)
Well, that's not what I want; I want to specifically prevent some passages of text from being rendered anti-aliased, for art reasons.
At some point, if you're doing it for art reasons, it makes the most sense to just render to an image.
Right, I can solve this okay by rendering an image and then putting transparent text over it to preserve editability, but it's such a pain in the ass, and I know Windows is capable of doing it because it does do it. I'm not looking for a solution; I want to understand a facet of Windows font rendering.
The ligatures part of this article gets me every time I re-read it. I think reading this article may have been the first time I realized that even large, well-funded projects are still done by people who are just regular humans, and sometimes settle for something that's good enough.
"Subpixel offsets break glyph caches"
I once resolved that by keeping a vertically shrunken but really wide glyph around in a cache. Just resample it for a different horizontal offset.
The AGG (“Anti-Grain Geometry”) library does something similar[1], from what I understand.
Also, I had (though never tested) the impression that in the Windows world ClearType uses 3x the horizontal resolution internally (I vaguely remember that being mentioned in the horror novel^W^W Raster Tragedy[2] somewhere?..). Given many font designers’ testing process for their hinting bytecode seems to be to run it through ClearType and check if it looks OK (not unlike firmware programmers...), we all, including Microsoft, are essentially stuck with that choice forever (or at least until people with painfully low-res displays become rare enough that the complaining about blurry text can be disregarded). So I’d expect 1/3 of a pixel to be the natural resolution for a glyph cache, not 1/4? Or have things changed in the transition from GDI to GDI+ to DirectWrite?
[1] https://agg.sourceforge.net/antigrain.com/research/font_rast...
[2] http://rastertragedy.com/
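To make the quantization concrete, here's a sketch of a glyph cache keyed on a quantized horizontal subpixel offset; whether the bucket count is 3 or 4 is exactly the choice discussed above, and every name below is hypothetical rather than any real renderer's API:

```rust
use std::collections::HashMap;

/// Stand-in for a rasterized glyph bitmap.
struct RasterizedGlyph;

/// Cache key: glyph id plus a quantized horizontal subpixel offset.
/// With N buckets each glyph is rasterized at most N times, instead of once
/// per distinct fractional x position it ever lands on.
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
struct GlyphKey {
    glyph_id: u32,
    subpixel_bucket: u8,
}

const SUBPIXEL_BUCKETS: f32 = 4.0; // or 3.0 to match a 3x-wide subpixel grid

fn quantize_subpixel(x: f32) -> u8 {
    // Keep only the fractional part of x and snap it to the nearest bucket;
    // the bucket past the last one wraps to bucket 0 of the next pixel.
    let frac = x - x.floor();
    ((frac * SUBPIXEL_BUCKETS).round() as u8) % SUBPIXEL_BUCKETS as u8
}

struct GlyphCache {
    entries: HashMap<GlyphKey, RasterizedGlyph>,
}

impl GlyphCache {
    fn get_or_rasterize(&mut self, glyph_id: u32, x: f32) -> &RasterizedGlyph {
        let key = GlyphKey { glyph_id, subpixel_bucket: quantize_subpixel(x) };
        // Rasterize the outline shifted by bucket / SUBPIXEL_BUCKETS pixels
        // the first time this (glyph, bucket) pair is seen, then reuse it.
        self.entries.entry(key).or_insert_with(|| RasterizedGlyph)
    }
}

fn main() {
    let mut cache = GlyphCache { entries: HashMap::new() };
    // Two positions that land in the same bucket share one rasterization.
    cache.get_or_rasterize(42, 10.05);
    cache.get_or_rasterize(42, 11.10);
    println!("cached rasterizations: {}", cache.entries.len()); // 1
}
```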
>But if the transform is an animation this will actually look even worse
I wish they provided an example video of this since I can't visualize it. My natural thinking is that subpixel antialiasing should look fine.
>the characters will jiggle as each glyph bounces around between different subpixel snappings and hints on each frame.
This shouldn't be a big issue unless your animation is slow and your subpixels are big.
The issue (I think) is that the animation is done post-rasterizing. So a translate of integer pixels is fine, but scale? Skew? Suddenly you have really visible colour fringing appearing out of nowhere.
>the animation is done post-rasterizing.
The article is talking about "rerasterize the glyphs in their new location", which means it's rasterizing post-animation. I think he's implying that there is something unstable in how each pixel is treated that breaks the illusion.
Hmm I use Firefox and the rendering I see in Firefox looks nothing like the render the author gets in Firefox; in fact the text rendering I get looks very similar to the "Chrome" rendering. Obviously this must depend on the libraries linked during the build process.
The article is from 2019, things might also simply have changed since then.
Depending on your OS Firefox will select from multiple rendering backends based on your GPU, driver etc.
On Windows it may or may not be using DirectWrite for text rasterization as a general thing, and in some cases text might be rasterized using a different fallback path if DirectWrite can't handle the font, I think.
IIRC this was/is true for Chrome as well, where in some cases it software rasterizes text using Skia instead of calling through to the OS's font implementation.
IIRC, Chrome now uses CoreText/DirectWrite for system fonts on macOS/Windows, and Skrifa (FreeType rewritten in Rust) outlines rasterized with Skia for everything else (system fonts on Linux, web fonts on all platforms).
I believe Firefox leans on the system rasterizers a little more heavily (using them for everything they support), and also still uses FreeType on Linux.
> Don’t ask about the code which line-breaks partial ligatures though.
Wondered about this. All the circular dependencies sound like you could feasibly get some style/layout combinations that lead to self-contradictory situations.
E.g. consider a ligature that's wider than the characters' individual glyphs. If the ligature is at the end of the box, it could trigger a line break. But that line break would also break up the ligature and cause the characters to be rendered as individual glyphs, reducing their width - which would undo the line break. But without the line break, the ligature would reconnect, increase the width and restore the line break, etc etc...
Blink's (Chromium's) text layout engine works the following way:
1. Lay out the entire paragraph of text as a single line.
2. If this doesn't fit into the available width, bisect to the nearest line-break opportunity which might fit.
3. Reshape the text up until this line-break opportunity.
4. If it fits, great! If not, go to 2.
This converges as it always steps backwards, and avoids the contradictory situations.
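A sketch of that loop, with a cheap width estimate for the bisection step and a full reshape for the check; `estimated_width` and `shaped_width` are stand-ins here, not Blink's real functions:

```rust
// Stand-ins: a cheap per-character estimate used for the bisection step, and
// a "real" shaping pass for the accurate width (faked as 8px per char here).
fn estimated_width(text: &str, end: usize) -> f32 {
    text[..end].chars().count() as f32 * 7.5
}
fn shaped_width(text: &str, end: usize) -> f32 {
    text[..end].chars().count() as f32 * 8.0
}

/// Returns the byte offset where the first line should end, following the
/// scheme above: shape everything, and on overflow bisect back through the
/// break opportunities, reshaping each candidate prefix until one fits.
fn break_line(text: &str, breaks: &[usize], available: f32) -> usize {
    // 1. Lay out the whole paragraph as a single line.
    if shaped_width(text, text.len()) <= available {
        return text.len();
    }
    // 2. Bisect to the last break opportunity whose estimated width fits.
    let mut hi = breaks.partition_point(|&end| estimated_width(text, end) <= available);
    // 3./4. Reshape up to that opportunity; if it still overflows, step back.
    while hi > 0 {
        let end = breaks[hi - 1];
        if shaped_width(text, end) <= available {
            return end;
        }
        hi -= 1; // only ever moves backwards, so this terminates
    }
    // Nothing fits: overflow past the first opportunity (or the whole text).
    breaks.first().copied().unwrap_or(text.len())
}

fn main() {
    let text = "stiff fluffy waffles";
    let breaks = [6, 13]; // byte offsets just after each space
    println!("line ends at byte {}", break_line(text, &breaks, 110.0));
}
```

The reshape in step 3 is where the HarfBuzz safe-to-reuse points mentioned just below come in, so in practice only the tail of the line actually needs reshaping.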
Harfbuzz also provides points in the text at which it is safe to reuse the shaping result, so reshaping typically involves only a small portion of text at the end of the line, if any. https://github.com/harfbuzz/harfbuzz/issues/224
This approach is different to how many text layout engines approach this problem e.g. by adding "one word at a time" to the line, and checking at each stage if it fits.
> This approach is different to how many text layout engines approach this problem e.g. by adding "one word at a time" to the line, and checking at each stage if it fits.
Do you know why Chrome does it this way?
We found it was roughly on par performance-wise for simple text (Latin), and faster for more complex scripts (Thai, Hindi, etc.). It is also more correct when there is kerning across spaces, hyphenation, etc.
For the word-by-word approach to be performant you need a cache for each word you encounter. The shape-by-paragraph approach we found was faster for cold-start (e.g. the first time you visit a webpage). But this is also more difficult to show in standard benchmarks as benchmarks typically reuse the same renderer process.
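As a toy illustration of the word cache the word-by-word approach depends on (everything here is hypothetical, not Blink or HarfBuzz API): shaping results are memoized per (word, font, size), which helps steady-state layout but does nothing for the cold start described above.

```rust
use std::collections::HashMap;

/// Stand-in for a real shaping result (glyph ids, positions, ...).
#[derive(Clone)]
struct ShapedWord {
    advance: f32,
}

/// Cache key: the word itself plus everything that can change its shaping.
#[derive(PartialEq, Eq, Hash)]
struct WordKey {
    word: String,
    font_id: u32,
    size_px: u32,
}

struct WordCache {
    entries: HashMap<WordKey, ShapedWord>,
}

impl WordCache {
    fn shape(&mut self, word: &str, font_id: u32, size_px: u32) -> ShapedWord {
        let key = WordKey { word: word.to_string(), font_id, size_px };
        self.entries
            .entry(key)
            .or_insert_with(|| {
                // Cold path: this is where the real shaper would be called.
                ShapedWord { advance: word.chars().count() as f32 * size_px as f32 * 0.6 }
            })
            .clone()
    }
}

fn main() {
    let mut cache = WordCache { entries: HashMap::new() };
    let mut total = 0.0;
    for w in ["the", "quick", "the"] {
        total += cache.shape(w, 0, 16).advance;
    }
    // "the" hits the shaper once and the cache once; the paragraph-at-a-time
    // approach avoids needing this cache to be warm in the first place.
    println!("total advance {total}, distinct shaped words {}", cache.entries.len());
}
```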
And the companion article: https://lord.io/text-editing-hates-you-too/
(posted in other threads too)
> So subpixel-AA is a really neat hack that can significantly improve text legibility, great! But, sadly, it’s also a huge pain in the neck!
Especially when you have a monitor with an unusual subpixel layout, which is very common for OLEDs, which don't follow any standard for it. In practice, developers of common font libraries like FreeType simply didn't bother trying to support all that, and that trickles down to toolkits like Qt. It's surprising the article doesn't mention this major problem with modern displays.
> Retina displays really don’t need it
Assuming this means high resolution displays - unfortunately that's not always what you end up using. So subpixel antialiasing can still be useful, if it can work. But as above, it's often just broken on OLEDs.
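To make the layout problem concrete, here is a minimal sketch of the last step of subpixel AA, mapping three horizontal coverage samples per output pixel onto colour channels; the names are made up, and real implementations also apply a fringe-reduction filter on top of this:

```rust
#[derive(Clone, Copy)]
enum SubpixelOrder {
    Rgb,
    Bgr,
}

/// Map three horizontal coverage samples (0.0..=1.0) for one output pixel to
/// per-channel alpha. On an RGB-striped LCD the leftmost sample drives red
/// and the rightmost drives blue; on a BGR panel it's the other way around.
/// Non-striped layouts, as on many OLEDs, don't fit this model at all, which
/// is roughly why subpixel AA tends to just be disabled there.
fn subpixel_alpha(samples: [f32; 3], order: SubpixelOrder) -> [f32; 3] {
    match order {
        SubpixelOrder::Rgb => samples,
        SubpixelOrder::Bgr => [samples[2], samples[1], samples[0]],
    }
}

fn main() {
    // A glyph edge covering only the left third of a pixel: on RGB that
    // lights the red subpixel, on BGR the blue one. Get the order wrong and
    // the colour fringes appear on the wrong side of every stem.
    let edge = [1.0, 0.3, 0.0];
    println!("RGB: {:?}", subpixel_alpha(edge, SubpixelOrder::Rgb));
    println!("BGR: {:?}", subpixel_alpha(edge, SubpixelOrder::Bgr));
}
```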
Arguably, monitors that are not mere TVs ought to allow control of each distinct pixel they drive internally, and communicate their layout (and, if needed, distinct brightness/color coordinates) to the host.
Exceptions can apply if the consumers of the screen can't resolve details finer than "emulated sRGB pixels" anyways.
Something like that should maybe be done in EDIDs, but you would still need to support a ton of different layouts in the end. LCD monitors are a lot more limited in that sense.
How did they get the exact effect they wanted to show in the text here, instead of, say, me seeing the exact same visuals for each browser since I am reading it in a single browser?
You mean in the parts that say "Here's what they look like in Safari" and so on? Those are just .pngs.
I missed some UI improvement in browsers then, as I can copy and paste them as text, and even the italic emoji example carried over the italic information when I tried copying it into various editors.
It's just transparent text over png background.
Good. I hated it first!
The real takeaway from the article is that you can rathole forever on ill-defined problems. Decide upfront whether you care about actual humans and their use cases, or hypothetical humans and their hypothetical use cases.
Or even which subset of humans' use cases you wish to concern yourself with, as you can't always please everyone or tackle everyone's problems. If one only cared about a single language everything becomes much easier.
> If one only cared about a single language everything becomes much easier.
Yes. Let's be thankful that isn't the case for browsers and major GUI toolkits though.