Codex is now in the ChatGPT mobile app

(openai.com)

116 points | by mikeevans 4 hours ago

43 comments

  • ahmadyan a few seconds ago

    I'm not sure if I'm hallucinating, but I swear I had Codex in the ChatGPT app a long time ago (like the original Codex on the web).

    They added some new stuff, like remote control of wherever the desktop Codex app is running, but these companies need to work much harder on their press releases.

  • Alifatisk 2 hours ago

    What's crazier is that Codex is free. I thought I'd have to pay to even try it out, but nope: you can use the desktop app or the CLI for free; it's apparently included in the free plan. You just have to sign in to your ChatGPT account.

    Of course, I'm aware the caveat here is that all my interactions become training data, but I'm fine with that. Even Qwen CLI discontinued its free plan.

    • thorum 14 minutes ago

      I was really unimpressed by the free Codex (for Node.js/React dev). I think it must be using a less powerful model, or they're limiting it in some other way.

      • jwilliams 5 minutes ago

        Are you specifically pointing at a different experience between free + paid? Or just that the free version is unimpressive?

        I'm using paid on TypeScript and it's genuinely terrific. Subjectively I think it has the edge over Opus.

        I'd be surprised if OpenAI is hamstringing the free version. That would seem crazy from a GTM PoV. If anything the labs seem to throttle the heavy paid users.

    • Rover222 2 hours ago

      I think it's free for about 2 useful requests and then you have to upgrade or wait?

      • osiris970 43 minutes ago

        So basically a $20 Claude plan lmao

        • replwoacause 29 minutes ago

          I stopped using my Claude subscription because the limits became so prohibitive. I'm back to ChatGPT and Codex full time and have been pretty happy. I miss the tone/writing style of Claude, but I don't miss the frustration of being told I've reached my plan limits in a comically short amount of time.

          • Razengan 7 minutes ago

            On Codex I've run into limits maybe 2 times in 3 months, after doing several "upgrade this experimental game to my latest shared framework" passes on 5.5 Extra High.

  • reassess_blind 41 minutes ago

    Is there a native way to work remotely with Claude/Codex on a local folder or git repo on your main machine, without having to connect it to GitHub? When playing around creating apps for personal use, I'd rather just keep the files local.

    • barrkel 24 minutes ago

      This is what /remote-control does in Claude Code, once it's running on your main machine. You can open it up in the phone app.

    • Salgat 4 minutes ago

      I wish Codex supported this; I use it all the time with Claude.

    • iamjs 39 minutes ago

      I think the `/remote-control` feature does this, if I understand you correctly.

      • maille 16 minutes ago

        Does it work on Windows? And how do you then remote in?

      • DonsDiscountGas 34 minutes ago

        It's supposed to. I've always found it buggy and unreliable, but maybe that's just me. (This command exists in Claude, btw; not sure about Codex.)

  • jumploops an hour ago

    I’ve been using Codex from my phone for the past couple of months (through a tunnel, not this app).

    I was initially quite excited, but I’ve found the results are less than great compared to being at a keyboard.

    Something about the smaller screen size and/or lack of keyboard causes me to direct the agent less, which in turn creates more tech debt/code churn/etc.

    Maybe I’m just showing my age, and I should practice voice dictation or something more, but my thoughts flow faster and more clearly on a keyboard (less ums).

    • aiscoming 32 minutes ago

      The ums are exactly the sign that you speak much faster than you type, so you need a pause for your thoughts to catch up.

    • fowlie an hour ago

      I've been trying voxtype (using whisper models) lately, and to my surprise all my ums are filtered out. It's really good now actually!

  • vohk 2 hours ago

    Dang, I thought this was going to be integration for Codex Cloud, not the (still not available for Linux) Codex App. Not even Codex CLI, alas. You can still access the Cloud option from a mobile browser well enough, but I prefer an app UI for poking at things on the go.

    • tekacs 2 hours ago

      You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

      They might just not have cut a new build yet. It 'works' on master, but the mobile app thinks your build is outdated (v0.0.0) if you build from master without overriding the version, so it's probably easiest to wait until they cut a build if they haven't.

      • embedding-shape an hour ago

        > You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

        Woah, hadn't seen this before!

        Off-topic: what kind of compile times do people see for codex-rs in openai/codex? Even my very beefy machine takes around 30 minutes to compile in release mode, which makes me wonder why it's so slow and how this TUI got so large. But then I remember: agents like to write a lot of code, and compilers get slower when they have to compile a lot of code :)

        • tekacs 24 minutes ago

          Try turning off LTO. Their default codex-rs/Cargo.toml uses `lto = "fat"`, which is expensive and slow, and you really don't need it for a local build that you're not distributing.

          In my experience, although the build itself is a little slow, it's the LTO step that takes a million years.
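          As a sketch, the override for a local build could look like this (check the repo's actual [profile.release] section first; the codegen-units value is just an example):

```toml
# codex-rs/Cargo.toml -- local-build profile tweaks (sketch)
[profile.release]
lto = false         # skip the expensive fat-LTO link step
codegen-units = 16  # trade a little runtime speed for parallel codegen
```

          Alternatively, cargo can override it per-invocation without editing the file: `cargo build --release --config profile.release.lto=false`.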

      • vohk an hour ago

        Oh, that's promising, thanks! I've just been using the npm version.

      • asadm 2 hours ago

        Thanks. I don't use the app, so this is cool.

  • iridione an hour ago

    This is neat! Now I'm curious: what's left to innovate in the coding-agent space? Sure, there are the usual suspects (maintenance, security, reliability, and other scalability improvements), and it looks like those will be addressed in the next year or two.

    • thornewolf an hour ago

      There is something "wrong" with the UX that is hard to pin down. These things generate even their text summaries more rapidly than I can read them. I need a better method for dumping info into my brain, plus dynamic control (if necessary).

      • ssl-3 34 minutes ago

        When I take time to read all of the output, I often find that it's mostly noise. I don't like noise so I usually don't bother.

        But a person can use subagents, if they want, to filter that down. This burns tokens in a big hurry, but I think subagents can be arbitrary local commands (e.g., a local LLM).

        Or, you know: Just slow down. :) It doesn't always have to be a race, does it?

  • Razengan 9 minutes ago

    Codex has been great over the last 3-4 months I've been using it, almost exclusively to review existing GDScript code, and this was the feature I wanted most, because with gamedev you get the best ideas when you're out and about or in bed :)

    Claude, on the other hand, has been so janky all around, from the UX to the UI to the AI itself, that it's baffling how it's more popular here on HN: https://i.imgur.com/jYawPDY.png

    Sadly, this remote-control feature doesn't seem to support Mac-to-Mac yet? I love the MacBook Neo as a "thin client" for AI, keeping the MacBook Pro at home/hotel, and it would be nice to share Codex desktop sessions (without SSH → resume link).

  • asadm 2 hours ago

    I use Termius on my phone to remote in and have the agent do stuff while I chill or am on the road. This seems useful too.

  • impulser_ 29 minutes ago

    Say what you want about OpenAI, but their software is actually pretty damn good, especially compared to Anthropic's and Google's. Anthropic is just sloppy, and Google just doesn't live on this planet.

    Both of the Codex apps are very good.

    I tried this out, and it works significantly better than Claude's remote control. In fact, the first few times I tried Claude's remote control it didn't even work, and to this day it's very buggy.

  • schnitzelstoat 2 hours ago

    This is really useful for when you just need to approve plans or make small decisions.

  • cyanydeez 14 minutes ago

    opencode behind an nginx proxy with standard user/password auth is sufficiently powerful. You can also upgrade to https://docs.linuxserver.io/images/docker-code-server/ and run any VS Code plugins; opencode's plugin is pretty rudimentary, but Cline has been making a lot of strides.

    You can run your local LLM and just connect the Docker containers. I'm paranoid about being disconnected from the LLM, so I never run any of this on the same machine, which is why orchestrating a docker-compose file that provides the necessary services is important.
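    A rough sketch of that setup, assuming the linuxserver code-server image and an Ollama container for the local LLM (service names, volumes, ports, and the htpasswd file are illustrative, not from the comment):

```yaml
# docker-compose.yml (sketch): code-server plus a local model server,
# with nginx in front doing basic auth. Adjust images and ports to taste.
services:
  code-server:
    image: lscr.io/linuxserver/code-server:latest
    volumes:
      - ./workspace:/config/workspace
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ./models:/root/.ollama
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./htpasswd:/etc/nginx/htpasswd:ro   # created with: htpasswd -c htpasswd youruser
    depends_on:
      - code-server
```

    The nginx config would then carry `auth_basic` / `auth_basic_user_file /etc/nginx/htpasswd;` and a `proxy_pass http://code-server:8443;` (8443 is the linuxserver image's default port).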

    I'm still trying to find a good remote file system to loop into the setup for improved switching between cli and these web containers.

  • tekacs 2 hours ago

    It's refreshing that unlike Anthropic's Remote Control, this actually... works.

    Feels like a testament to the value in taking time and doing it properly.

    Now if only codex got its 1M token context window back.

    ---

    Edit: Hmmm. Maybe I spoke too soon. Sigh. Definitely _more_ reliable by far overall, but I still have queued messages with responses on my phone that don't show up on my computer, and responses that don't show up on my phone.

    Edit 2: New threads created from my phone seem to stall out a little, but ones that are already underway are behaving reasonably well.

    • 20kleagues 2 hours ago

      Out of curiosity, what issues did you face with remote control on Claude? I use it daily and it seems to work pretty well (bar the issues when my Mac would sleep and the session would disconnect, but that's an issue on my end).

      • tekacs 2 hours ago

        Myriad, to be honest. I find it to just constantly be in a 'torn' state, the UI is very mushy on mobile with a lot of the affordances from desktop missing, and... it's distinctly less useful when you can't... edit, rewind, start a new thread, etc.

      • RayVR 2 hours ago

        My own experience has been that it works for about five minutes before it just disconnects or hangs. I’ve never been able to use it successfully.

  • fHr an hour ago

    rust and opensource W

  • stavros 2 hours ago

    The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).

    It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket and can set up fixtures, etc. I can work on multiple items at a time, which is fantastic, because otherwise you spend a lot of time waiting on the LLMs.

    [0] https://github.com/skorokithakis/symphony

  • Squab an hour ago

    Friends, you don't always have to be productive. Leave the agent on the computer and take care of yourself.

    • jorl17 an hour ago

      For many people, that's exactly why this is useful: less time on the computer, more time doing other things and occasionally checking in.

      In those scenarios, the goal is not "work at any time" but to "be anywhere at any time", or, rather, to "be able to work from anywhere, doing anything".

      Sort of... I guess.

  • mv4 2 hours ago

    Can someone recommend an IDE that can be used with a self-hosted model (via an OpenAI-compatible API or similar)?

    • aiscoming 24 minutes ago

      VS Code supports local models (bring your own key/model).

      You need a model server: Ollama, llama.cpp, or LM Studio.