8 comments

  • 2001zhaozhao 14 hours ago

    If they actually put GPT-5.2 on Cerebras, I'm switching to an OpenAI subscription instantly

  • Alifatisk 2 days ago

    > By keeping computation and memory on a single wafer-scale processor, we eliminate the data-movement penalties that dominate GPU systems. The result is up to 15× faster inference, without sacrificing model size or accuracy.

    https://xcancel.com/andrewdfeldman/status/201154226777402186...
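
    The quoted claim follows from single-stream decoding being memory-bandwidth-bound: every weight must be read once per token, so token rate is roughly bandwidth divided by model size. A back-of-envelope sketch (all numbers are illustrative assumptions, not vendor specs; `tokens_per_second` is a made-up helper):

    ```python
    # Rough upper bound on single-stream decode speed when the model is
    # memory-bandwidth-bound: each token requires streaming all weights once.
    def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
        return bandwidth_bytes_per_s / model_bytes

    model_bytes = 70e9    # hypothetical 70B-parameter model at 1 byte/weight
    hbm_bw      = 3.35e12 # ~3.35 TB/s, roughly one H100's HBM3 bandwidth
    sram_bw     = 50e12   # hypothetical aggregate on-wafer SRAM bandwidth

    gpu_tps   = tokens_per_second(model_bytes, hbm_bw)
    wafer_tps = tokens_per_second(model_bytes, sram_bw)
    print(f"HBM-bound:  {gpu_tps:.1f} tok/s")
    print(f"SRAM-bound: {wafer_tps:.1f} tok/s ({wafer_tps / gpu_tps:.1f}x)")
    ```

    With these particular assumed numbers the ratio happens to land near the quoted 15×, but that is an artifact of the chosen bandwidths, not a derivation of Cerebras's figure.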

  • kristianp a day ago

    Will be interesting to see how it is integrated. I assume it will be only a small fraction of OpenAI's total inference.

  • kingstnap a day ago

    > real-time AI

    Guessing the plan might be for voice AI. That stuff needs to be real snappy.

    • aitchnyu a day ago

      I hope all AI will reach 300ms response times, including 200-line diffs. Querying a million rows or informing the user that a codebase is wrong used to take minutes but now happens instantly.

  • e40 a day ago

    And the data centers created from this initiative will be net positive for the environment and communities, right? Right?

    • whateverboat a day ago

      yes, because climate change simulations will become cheaper.