11 comments

  • cheaprentalyeti 2 months ago

    It's an interesting paper, but I think the design decisions made here were less intentional than they seemed. The hardware producers were not making these decisions. They had CRTs, not modern LEDs, and made lemonade. And we were a lot younger in 1988.

    • fuzzfactor 2 months ago

      I get the idea that the green phosphor is easiest to see when it comes to contrast.

      On the old oscilloscopes, when you were getting signals near the limit of the device's capability, the traces could get pretty thin and hard to see sometimes.

      With a less visible phosphor it might not have been possible to see anything at all at that point.

      Green did seem to be the commodity for decades before amber started becoming more common, though amber never did prevail.

      I had two industrial monitors for non-PCs in the '80s that were vector-based and higher resolution than PCs had. Green was standard at launch, amber later became an option, and I ended up with one of each.

      Liked them both :)

      Top oscilloscope CRTs had already advanced way beyond the commodity green by then.

      • CalvinBuild 2 months ago

        Totally. That matches how I’ve been thinking about it too: green was a “most visible per unit beam energy” choice, so you got usable contrast even when the trace/text got thin or you were operating near the limits. With a dimmer phosphor you’d just lose the signal.
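
        As a quick back-of-envelope check (my numbers, not from the paper: a common Gaussian approximation of the CIE photopic luminosity curve, assuming a P1-style green peak near 525 nm vs a typical amber peak near 605 nm):

        ```python
        import math

        def photopic_v(wavelength_nm):
            """Rough Gaussian approximation of the CIE photopic luminosity
            function V(lambda), which peaks near 555 nm. Ballpark only,
            not colorimetry-grade."""
            um = wavelength_nm / 1000.0
            return 1.019 * math.exp(-285.4 * (um - 0.559) ** 2)

        # Assumed peaks: ~525 nm (P1-style green), ~605 nm (typical amber).
        for name, nm in [("green ~525 nm", 525), ("amber ~605 nm", 605)]:
            print(f"{name}: relative visibility ~{photopic_v(nm):.2f}")
        # green ~525 nm: relative visibility ~0.73
        # amber ~605 nm: relative visibility ~0.56
        ```

        Same beam energy, roughly 30% more perceived brightness from the green, which lines up with the "thin trace is still visible" experience.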

        Also agree on the “commodity default” point. Green was the cheap bright workhorse for a long time, and amber felt more like an option when manufacturers could justify it (and when the use case was heavier on text comfort vs max visibility). Your vector monitor anecdote is exactly the kind of real-world constraint story I was trying to get at.

        And yeah, high-end scopes were a different world: once you’re optimizing instrumentation, you start paying for different phosphors/persistence/sharpness tradeoffs instead of the mass-terminal defaults.

    • CalvinBuild 2 months ago

      Fair. The title is a bit clickbaity. The more accurate claim is: a lot of CRT choices were constraint-driven (power, heat, phosphor efficiency, flicker, tube life), and those constraints often produced more readable, lower-fatigue defaults than some modern “max brightness/high contrast” settings. What fascinated me is how often those engineering constraints ended up lining up with human biology. Also yes, being younger in 1988 probably masked a lot of strain that shows up fast now.

  • bell-cot 2 months ago

    First thought: Development of amber & green CRTs was driven by real-world use - not consumer preferences. The military was especially focused on ergonomics in the decades after WWII - and for them, the failure of a fatigued operator to notice and process some data on a crummy display could get everyone killed.

    Second thought: https://en.wikipedia.org/wiki/Photoreceptor_cell#Difference_... And slow reaction helps reduce fatigue for the kinds of information usually viewed on old amber and green CRTs.

    • CalvinBuild 2 months ago

      That’s a really good framing. I agree the driver was real-world operator endurance (military/industrial), not consumer preference. In those contexts, “fatigued operator misses something” is a real failure mode, so readability/comfort gets treated like performance.

      Also +1 on photoreceptors. The rods/cones split and sensitivity shifts in low light are a big part of why certain wavelengths and lower absolute luminance can feel disproportionately readable at night. I’m less confident that “slow reaction” is the main fatigue reducer (persistence trades off smear vs flicker/visibility), but the broader point about temporal characteristics affecting comfort is spot on.
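
      To make that persistence tradeoff concrete, here's a toy model (assuming simple exponential phosphor decay; the time constants are illustrative, not datasheet values):

      ```python
      import math

      FRAME_S = 1 / 60  # one refresh interval at 60 Hz

      def luminance_left_at_frame_end(tau_s):
          """Fraction of peak luminance remaining after one refresh,
          for a phosphor decaying as exp(-t/tau)."""
          return math.exp(-FRAME_S / tau_s)

      # Assumed decay constants: ~1 ms (short persistence) vs ~50 ms (long).
      for name, tau in [("short persistence", 1e-3), ("long persistence", 50e-3)]:
          left = luminance_left_at_frame_end(tau)
          print(f"{name}: {left:.0%} left at frame end "
                f"(flicker depth ~{1 - left:.0%}, carryover/smear ~{left:.0%})")
      ```

      Long persistence mostly kills flicker at 60 Hz but carries ~70% of the previous frame into the next one, which is the smear side of the tradeoff.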

  • apothegm 2 months ago

    Yes. The current obsession with high contrast in LED screens absolutely contributes to fatigue. It's fantastic for sunny locations or for watching films, but absolutely terrible for eye strain when trying to read in the absence of direct sunlight.

    I use an app that lets me pump up the brightness and contrast to see clearly when the sun is out, but decrease brightness and contrast below even what the monitor thinks is its zero point at night, because even that zero point is far too bright.

    • CalvinBuild 2 months ago

      100%. The “high contrast” obsession makes sense for daylight and media, but it’s rough for long-form text at night.

      I think a lot of the fatigue is absolute luminance and black level, not just contrast ratio. Modern panels often can’t get dim enough (and their lowest backlight still isn’t “dark”), so you end up fighting the display. Your approach of boosting for sun and going below the monitor’s nominal minimum at night is basically what an ergonomic default should do.
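
      For anyone wondering how "below the monitor's zero point" usually works (a minimal sketch of the general idea, not any specific app's code): the backlight stays at its hardware minimum and software scales pixel values down before they reach the panel, trading gray levels for darkness. xrandr --brightness on Linux does the same thing through the gamma ramp.

      ```python
      def software_dim(rgb, factor):
          """Scale 8-bit RGB channels by factor in (0.0, 1.0]. With the
          backlight already at hardware minimum, this dims further in
          software at the cost of fewer distinct levels (banding)."""
          return tuple(round(c * factor) for c in rgb)

      # Full white, dimmed to 40% of the panel's darkest native setting:
      print(software_dim((255, 255, 255), 0.4))  # -> (102, 102, 102)
      ```

      The cost is banding: at 40% you only have ~102 usable levels per channel, which is why these tools look worse on gradients.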

      Out of curiosity, what app are you using to go below the monitor’s stated minimum?

  • mtmail 2 months ago

    The reference links have 'utm_source=chatgpt'. I don't trust that the whole article wasn't written by an LLM.

    • CalvinBuild 2 months ago

      Good catch. The utm_source=chatgpt params were accidental from a copied link format, and I’m removing them. For transparency: I used an LLM for proofreading and phrasing/formatting only. The research and argument are mine.

  • hollerith 2 months ago

    I dispute the premise that CRTs were easier to read.

    The flicker in particular was problematic.