2 comments

  • al_borland 12 hours ago

    Cal Newport covered this topic in his latest AI reality check. [0]

    I think LLMs have done a passable job mimicking what thinking looks like, without actually thinking. I'm still constantly having to correct them when their reasoning takes a left turn. I can't think of any human I'm still willing to speak to who needs to be corrected as much as AI.

    There might be a Venn diagram of those various reasoning modes with AGI sitting in the middle, but I don't think the current technology is going to get us there.

    [0] https://youtu.be/sS3C_i7gkI8?si=7yGNLVtOTnM6RMB7&t=61