2 comments

  • cyanydeez 3 hours ago

    So I'm curious, what's the actual quality control?

    Like, do these guys actually dogfood the real user experience, or are they all admins with a fast lane to the real model while everyone outside the org has to go through the 10 layers of model shedding, caching, and other means and methods of saving money?

    We all know these models are expensive as fuck to run, and these companies are degrading service, A/B testing, and the rest. Do they actually ponder these things directly?

    It always seems like people are on drugs when they talk about the capabilities, and, like, the drugs could be pure shit (good) or ditch weed. We all just act like the pipeline for drugs is a consistent thing, but it's really not, not at this stage where they're all burning cash on infrastructure. And, like drug dealers, you know they're cutting the good stuff with low-cost cached gibberish.
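
    If anyone wanted to actually measure this instead of vibing, a minimal sketch (assuming an OpenAI-compatible endpoint; the key and model name below are placeholders) is to replay the identical request at temperature 0 and count how many distinct answers come back:

        import collections
        import hashlib

        import requests

        URL = "https://api.openai.com/v1/chat/completions"  # or any compatible gateway
        HEADERS = {"Authorization": "Bearer YOUR_KEY"}       # placeholder key
        BODY = {
            "model": "gpt-4o",   # placeholder model name
            "temperature": 0,
            "seed": 1234,        # best-effort determinism; not guaranteed
            "messages": [{"role": "user", "content": "List the first 10 primes."}],
        }

        # Replay the identical request; lots of distinct outputs at temperature 0
        # hints at routing/quantization/caching churn, though batched inference is
        # nondeterministic enough to explain some drift on its own.
        counts = collections.Counter()
        for _ in range(20):
            resp = requests.post(URL, headers=HEADERS, json=BODY, timeout=60)
            text = resp.json()["choices"][0]["message"]["content"]
            counts[hashlib.sha256(text.encode()).hexdigest()[:8]] += 1

        print(counts.most_common())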

    • bigwheels an hour ago

      If you access a model through an OpenRouter provider it might be quantized (akin to being "cut with trash"), but when you go directly to Anthropic or OpenAI you're getting the same APIs as everyone else. Even top-brass folks within Microsoft use Anthropic and OpenAI proper (it's not worth the red-tape trouble to go through Azure). Also, the creator of Claude Code, Boris Cherny, was a bit of an oddball but one of the comparatively nicer people at Anthropic, and he indicated he primarily uses the same Anthropic APIs as everyone else (which makes sense from a product-development perspective).
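
      For what it's worth, OpenRouter lets you constrain which hosts serve you. A rough sketch below; note that the `provider` routing object (including the `quantizations` filter) is my reading of their docs, so treat the exact field names as assumptions to verify, and the key and model name are placeholders:

          import requests

          # Hedged sketch: OpenRouter's chat endpoint is OpenAI-compatible; the
          # `provider` preferences below should be checked against the current
          # OpenRouter docs before relying on them.
          resp = requests.post(
              "https://openrouter.ai/api/v1/chat/completions",
              headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},
              json={
                  "model": "anthropic/claude-3.5-sonnet",
                  "messages": [{"role": "user", "content": "hi"}],
                  "provider": {
                      "quantizations": ["fp16", "bf16"],  # skip int4/int8 hosts
                      "allow_fallbacks": False,           # fail rather than reroute
                  },
              },
              timeout=60,
          )
          print(resp.json()["choices"][0]["message"]["content"])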

      The underlying models are actually fairly undifferentiated under the covers, apart from post-training and base prompts. Strip out the base prompts and they behave nearly identically.
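
      Easy enough to eyeball yourself: hit both vendors with the same bare user prompt and no system prompt. A minimal sketch with the official Python SDKs (model names are placeholders; keys come from the usual env vars):

          import anthropic
          import openai

          prompt = "Explain backpressure in one short paragraph."

          # Anthropic: omitting `system` means we add no system prompt of our own.
          a_resp = anthropic.Anthropic().messages.create(  # reads ANTHROPIC_API_KEY
              model="claude-sonnet-4-20250514",  # placeholder model name
              max_tokens=512,
              messages=[{"role": "user", "content": prompt}],
          )

          # OpenAI: likewise, no "system" message in the list.
          o_resp = openai.OpenAI().chat.completions.create(  # reads OPENAI_API_KEY
              model="gpt-4o",  # placeholder model name
              messages=[{"role": "user", "content": prompt}],
          )

          print(a_resp.content[0].text)
          print(o_resp.choices[0].message.content)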

      A conspiracy would be a helluva lot more interesting and fun, but I've spoken to these folks firsthand and it seems they already have enough challenges keeping the beast running.