Key takeaways
- Claude's auto mode reduces permission prompts for developers.
- AI classifier blocks risky commands, such as mass file deletion.
- It's a middle ground between safety controls and full autonomy.
Anthropic today is announcing a new "auto mode" for Claude Code that enables the large language model to make permission-level decisions with AI safeguards. The company says this will be a safer option than the "dangerously skip permissions" option that developers use to drive long coding sessions.
Claude permissions
Claude Code is astonishingly powerful. That's because it doesn't just write code. It can also run the shell commands that coders need to produce results. Those commands include creating directories, moving files, pushing changes to GitHub, and more -- including deleting files and directories.
Also: How to install and configure Claude Code, step by step
Because letting an AI run amok on your computer is a terrifying prospect, Claude implements a variety of permission systems. One such protection limits Claude to working in a designated folder hierarchy. In my case, that means working on my Xcode projects, but it has no access to my main Documents folder or other files.
That protection prevents a system-wide disaster, but it doesn't prevent Claude from ruining an entire codebase. And yes, it's done that to me. I love my backups.
Another protection mechanism has Claude ask permission for anything that might prove problematic, especially all those shell commands. While this is good for safety, it's brutal for productivity. Instead of setting Claude loose to write code and coming back after lunch, you have to approve each command one by one. Tedious.
Claude Code provides permission tiers, and you can set the level you're comfortable with. Because coders will be coders, there's even a nuclear option, called "dangerously-skip-permissions," that skips all permission checks and -- surprise! -- can be dangerous. For one dev's take on how to use this responsibly, here's a good blog post.
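To make those tiers concrete, here's a hedged sketch. The `--dangerously-skip-permissions` flag is the nuclear option described above; the `.claude/settings.json` permission rules come from Claude Code's settings format, though the specific allow/deny patterns below are illustrative examples rather than recommendations -- check Anthropic's documentation for the current syntax.

```shell
# The nuclear option: one flag that skips every permission check.
# (Shown as a comment so you don't run it by accident.)
#   claude --dangerously-skip-permissions

# A gentler tier: pre-approve or block specific tools per project in
# .claude/settings.json. The rules below are illustrative examples.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(npm test)"
    ],
    "deny": [
      "Bash(rm:*)"
    ]
  }
}
EOF
```

With rules like these in place, pre-approved commands run without a prompt, denied ones are refused outright, and everything else still asks.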
Also: I used Claude Code to vibe code a Mac app in 8 hours
As you might imagine, there's a tough trade-off here. Either Claude stops work and interrupts you every few minutes, or you set it loose to do its thing, which could mean building something amazing or destroying months of work.
Enter 'auto mode'
This is where Claude Code's new auto mode comes in. Don't get too excited yet. Right now, it's a research preview only available to Team plan users. The company says it's coming to Enterprise plan and API users in "the coming days."
"Auto mode is a middle path that lets you run longer tasks with fewer interruptions while introducing less risk than skipping all permissions," according to the company. "Before each tool call runs, a classifier reviews it to check for potentially destructive actions like mass deleting files, sensitive data exfiltration, or malicious code execution."
Also: 10 things I wish I knew before trusting Claude Code to build my iPhone app
The company says that actions the classifier deems safe proceed automatically, while risky ones are blocked, prompting Claude to take a different approach. If Claude insists on taking actions that are continually blocked, it will eventually trigger a permission prompt to the user.
Safeguards and limitations
The new auto mode classifier looks for potentially risky commands, such as mass file deletion, sensitive data exfiltration, and malicious code execution. The company says that risk is reduced, but not eliminated, and it still strongly advises working in isolated environments.
Also: Claude Code made an astonishing $1B in 6 months
As with all AI activities, auto mode can get confused. Some risky actions might be allowed to execute if the AI doesn't properly understand the context, and benign actions might be blocked from time to time.
This isn't exactly a fox-guarding-the-henhouse situation, and adding additional guardrails makes sense. Still, auto mode feels a bit like taking down the guardrails and putting up a sign along the edge of the road that says "steep cliff."
Would I use this?
Right now, I can't. I'm on the Claude $100/month Max plan, which doesn't have access to this feature. But I'll admit I've been very frustrated by Claude's insistence on permission reviews when I just want it to do its job.
I do regularly back up my machine, so if auto mode or dangerously-skip-permissions decides to carpet-bomb my code, I can recover. If I used either of these features regularly, I think I'd get into the habit of doing directory zips and extra backup runs before letting the AI loose on anything.
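That pre-session backup habit can be as simple as a small shell helper. This is a sketch with illustrative names and paths, not an official tool:

```shell
# snapshot_project: tar up a project directory into a timestamped
# archive before letting the AI loose on it. Names are illustrative.
snapshot_project() {
  local project="$1" backup_dir="$2"
  local stamp
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$backup_dir"
  # -C changes into the parent dir so the archive holds just the
  # project folder, not its full absolute path.
  tar -czf "$backup_dir/$(basename "$project")-$stamp.tar.gz" \
      -C "$(dirname "$project")" "$(basename "$project")"
}

# Usage (paths are illustrative):
#   snapshot_project ~/Projects/MyApp ~/backups
```

If the project is already under git, a plain `git add -A && git commit` before the run serves the same purpose with less ceremony.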
Also: I used Claude Code to vibe code an Apple Watch app in just 12 hours
As I write this, I'm thinking I'd probably prefer to run auto mode than the completely open "dangerously skip permissions" option. I'd like the added productivity, but I'd also prefer some guardrails in place. So, when it becomes available to my plan, I will most likely give it a try.
Controlled autonomy for coding workflows
Right now, the auto mode feature is compatible with the Sonnet 4.6 and Opus 4.6 models. Again, it's available only to Team plan users at launch. Anthropic says, "Auto mode may have a small impact on token consumption, cost, and latency for tool calls."
I think we'll see this approach improve over time. After all, Claude Code is barely a year old, and in that time it has both changed the coding world tremendously and improved by leaps and bounds.
So while auto mode asks coders to trade a little computational overhead for convenience, it will likely be part of the overall development stack once it matures.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.