News
Enterprise
Artificial Intelligence
Americas
NewDecoded
3 min read

Image by Claude
Anthropic released a research preview of Auto mode for its agentic coding assistant, Claude Code, on March 24, 2026. This feature introduces a middle path for developers who need to run long-running tasks without constant manual approvals. It is currently available for users on the Team plan, with rollouts to Enterprise and API users expected soon.
The update addresses a common friction point: the assistant's default conservative permissions require explicit consent for every file write or bash command. While secure, this often leads to approval fatigue, prompting some users to resort to unsafe bypass flags. Auto mode instead uses an AI classifier to distinguish benign operations from dangerous ones automatically.
When active, the system reviews every tool call before execution to prevent actions like mass file deletions or unauthorized data exfiltration. If a task is deemed safe, it proceeds without interruption. If risky, the classifier blocks the action and instructs Claude to find a different, safer path to completion.
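The flow described above can be sketched as a simple gate in front of each tool call. This is an illustrative mock-up, not Anthropic's implementation: the function names, the keyword-based stand-in classifier, and the tool-call shape are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a model-moderated permission gate.
# In the real system the classifier is an AI model; here a keyword
# check stands in for it purely to illustrate the control flow.

def classify(tool_call: dict) -> str:
    """Stand-in for the AI classifier: label a tool call 'safe' or 'risky'."""
    risky_markers = ("rm -rf", "curl ", "DROP TABLE")  # illustrative only
    text = tool_call.get("command", "")
    return "risky" if any(m in text for m in risky_markers) else "safe"

def gate(tool_call: dict) -> bool:
    """Return True to execute the call, False to block it so the model replans."""
    if classify(tool_call) == "safe":
        return True   # safe call proceeds without interrupting the developer
    return False      # risky call is blocked; the agent must find a safer path

# A benign file listing passes; a mass deletion is blocked.
print(gate({"command": "ls -la src/"}))   # True
print(gate({"command": "rm -rf /"}))      # False
```

The key design point is that the gate sits between the model's intent and execution, so a blocked call never runs; the model only learns that it must try a different approach.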
To ensure human oversight, the system includes an escalation trigger. If the model attempts blocked actions three times in a row or twenty times total in a single session, it halts and prompts the developer for manual review. This safeguard prevents the AI from getting stuck in loops or persistently pursuing dangerous outcomes.
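The two thresholds reported above (three consecutive blocks, or twenty in a session) amount to a pair of counters. The following sketch is a hypothetical illustration of that bookkeeping; the class and method names are invented for this example, with only the limits taken from the article.

```python
# Hypothetical sketch of the escalation safeguard: halt for manual review
# after 3 consecutive blocked attempts or 20 blocked attempts per session.

class EscalationGuard:
    CONSECUTIVE_LIMIT = 3   # blocked attempts in a row
    SESSION_LIMIT = 20      # blocked attempts in the whole session

    def __init__(self) -> None:
        self.consecutive_blocks = 0
        self.total_blocks = 0

    def record(self, blocked: bool) -> bool:
        """Record one tool-call verdict; return True if the session should halt."""
        if blocked:
            self.consecutive_blocks += 1
            self.total_blocks += 1
        else:
            self.consecutive_blocks = 0  # an allowed call resets the streak
        return (self.consecutive_blocks >= self.CONSECUTIVE_LIMIT
                or self.total_blocks >= self.SESSION_LIMIT)

guard = EscalationGuard()
print(any(guard.record(b) for b in [True, True, True]))  # halts on the third block
```

Resetting the streak on every allowed call is what keeps the guard aimed at loops and persistent risky behavior rather than at occasional, isolated blocks.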
Developers can enable the feature by running a command in the terminal or through settings in the desktop app and VS Code extension. Auto mode is compatible with both the Claude Sonnet 4.6 and Opus 4.6 models, and administrators can disable it globally for their organizations. While it reduces the risks associated with fully autonomous execution, Anthropic still recommends using it in isolated environments such as sandboxes, and the background classifier may add minor token cost and latency. Detailed setup instructions are available in the official documentation.
The introduction of Auto mode signals a significant shift in the AI industry from static, rule-based permission systems toward dynamic, model-moderated authorization. By moving away from the binary choice of asking for everything or allowing everything, Anthropic is setting a new standard for how agentic AI can operate autonomously in sensitive environments like software development. This move addresses the growing problem of prompt injection and accidental data loss, which has plagued early autonomous agents. As other providers iterate on their own coding tools, the focus will likely shift from purely functional capabilities to the invisible governance layers that make AI reliable at scale.