2 comments

  • hrudolph 7 hours ago

    # Roo Code 3.43.0 Release Updates

    @everyone This release updates Intelligent Context Condensation, removes deprecated settings, and fixes export and settings issues.

    ## Intelligent Context Condensation v2

    Intelligent Context Condensation runs when the conversation is near the model’s context limit. It summarizes earlier messages instead of dropping them. After a condense, Roo continues from a single summary, not a mix of summary plus a long tail of older messages. If your task starts with a slash command, Roo preserves those slash-command-driven directives across condenses. Roo is less likely to break tool-heavy chats during a condense, which reduces failed requests and missing tool results.
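    The condense step described above can be sketched roughly as follows. This is a hedged illustration, not Roo Code's actual implementation: the `Message` shape, `condense`, `summarize`, and the 90%-of-limit trigger are all assumptions made for the example, and the real summarizer calls the active conversation model.

    ```typescript
    type Message = { role: "system" | "user" | "assistant"; text: string };

    // Placeholder summarizer: a real implementation would ask the active
    // conversation model for a summary. Here we just join truncated snippets.
    function summarize(messages: Message[]): string {
      return messages.map(m => `${m.role}: ${m.text.slice(0, 40)}`).join("\n");
    }

    function estimateTokens(messages: Message[]): number {
      // Rough heuristic: ~4 characters per token.
      return Math.ceil(messages.reduce((n, m) => n + m.text.length, 0) / 4);
    }

    function condense(messages: Message[], contextLimit: number, keepRecent = 4): Message[] {
      // Only condense when the conversation is near the context limit.
      if (estimateTokens(messages) < contextLimit * 0.9) return messages;
      // Preserve directives (assumed here to live in system messages,
      // e.g. slash-command-driven instructions) across the condense.
      const directives = messages.filter(m => m.role === "system");
      const recent = messages.slice(-keepRecent);
      const older = messages.slice(0, -keepRecent).filter(m => m.role !== "system");
      const summary: Message = { role: "assistant", text: "Summary:\n" + summarize(older) };
      // Continue from directives + one summary + a short recent tail,
      // not a mix of summary plus a long tail of older messages.
      return [...directives, summary, ...recent];
    }
    ```

    The key design point is that the summary replaces the older messages wholesale, so the model never sees a half-condensed history.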

    Settings changes: the Condense prompt editor now lives under *Context Management*, and *Reset* clears your override. Condensing uses the active conversation model and provider; there is no separate model/provider selector for condensing.

    ## QOL Improvements

    - Removes the unused “Enable concurrent file edits” experimental toggle to reduce settings clutter.
    - Removes the experimental *Power Steering* setting (a deprecated experiment that no longer improves results).
    - Removes obsolete diff/match-precision provider settings that no longer affect behavior.
    - Adds a `pnpm install:vsix:nightly` command to make installing nightly VSIX builds easier.

    ## Bug Fixes

    - Fixes an issue where MCP config files saved via the UI could be rewritten as a single minified line. Files are now pretty-printed. (thanks Michaelzag!)
    - Fixes an issue where exporting tasks to Markdown could include `[Unexpected content type: thoughtSignature]` lines for some models. Exports are now clean. (thanks rossdonald!)
    - Fixes an issue where the *Model* section could appear twice in the OpenAI Codex provider settings.
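    The minified-config class of bug usually comes down to how the JSON is serialized on save. A minimal sketch of the pretty-printing fix (the path and config shape here are hypothetical, not Roo Code's actual file layout):

    ```typescript
    import { writeFileSync } from "node:fs";

    function savePrettyConfig(path: string, config: unknown): void {
      // JSON.stringify's third argument controls indentation: passing 2
      // pretty-prints with two-space indents, while omitting it (the
      // default) emits everything as one minified line.
      writeFileSync(path, JSON.stringify(config, null, 2) + "\n", "utf8");
    }
    ```

    Pretty-printed output also keeps the file diff-friendly when it is checked into version control.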

    ## Misc Improvements

    - Removes legacy XML tool-calling code paths that are no longer used, reducing maintenance surface area.

    ## Provider Updates

    - Updates Z.AI models with new variants and pricing metadata. (thanks ErdemGKSL!)
    - Corrects Gemini 3 pricing for Flash and Pro models to match published pricing. (thanks rossdonald!)

    • dexdal 7 hours ago

      Context condensation only stays safe when it behaves like a controlled artefact. Preserve the active directives, freeze a small set of must-keep facts, and treat the summary as versioned output with a stop rule for when it drops constraints. That turns “near the limit” from random truncation into a repeatable workflow.
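      The versioned-artefact idea above can be sketched as a small guard around the summarizer. Everything here is illustrative (`CondensedSummary`, the verbatim-containment check) and not part of Roo Code; a real stop rule might match constraints more loosely than exact substrings.

      ```typescript
      type SummaryVersion = { version: number; text: string };

      class CondensedSummary {
        private history: SummaryVersion[] = [];
        // The must-keep facts are frozen at construction time.
        constructor(private mustKeep: readonly string[]) {}

        // Stop rule: accept a new summary version only if every
        // must-keep fact survives; otherwise refuse the condense.
        propose(text: string): SummaryVersion {
          const dropped = this.mustKeep.filter(fact => !text.includes(fact));
          if (dropped.length > 0) {
            throw new Error(`Stop rule: summary dropped constraints: ${dropped.join(", ")}`);
          }
          const next = { version: this.history.length + 1, text };
          this.history.push(next);
          return next;
        }

        latest(): SummaryVersion | undefined {
          return this.history[this.history.length - 1];
        }
      }
      ```

      Because every accepted summary is versioned, a rejected condense leaves the last good version in place instead of silently losing constraints.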