Cisco AI Defense: Explorer Edition Brings Agentic AI Red Teaming to Builders
Publish Time: 23 Mar, 2026

When we launched Cisco AI Defense early last year, it marked a major milestone in our greater mission to enable secure AI adoption. It was the industry's first comprehensive AI security solution, offering centralized visibility into AI assets, robust algorithmic red teaming for models, and runtime protections for AI applications. 

More recently, the rapid proliferation of agents has sparked significant conversation around the numerous associated risks with their deployment. Last month, we announced updates to AI Defense to combat agentic risk with capabilities like MCP scanning, agentic red teaming, and purpose-built guardrails. 

Still, the broader AI developer community is left grappling with the massive challenge of proactively managing agentic risk. Predicting how an agent will behave is difficult, especially as adversaries employ multi-prompt, multilingual attacks or look to exploit tools and other connected resources. 

To meet this challenge head on, we're introducing Cisco AI Defense: Explorer Edition, a self-service solution offering the same algorithmic red teaming capabilities as Cisco AI Defense Enterprise edition-at no upfront cost to our users. 

Let's take a closer look at what the AI Defense: Explorer Edition can do. 

Test the security alignment of any model

Whether you're building your own model or (more likely) sourcing one from the millions of open-source options available online, red teaming is critical to measure the baseline of its safety and security alignment. 

Cisco AI Defense: Explorer Edition uses algorithmic red teaming to accomplish this in as few as twenty minutes, evaluating model performance in over 200 risk subcategories including intellectual property theft, toxicity, and sensitive data extraction.

Simulate real-world interactions with your agents

From the frameworks and underlying models used to build them to their connected tools and permission scopes, it seems like no two agents are exactly alike. These complexities make it more difficult to predict agentic behavior-and give adversaries a broad surface to attack. 

Fortunately, AI Defense: Explorer Edition offers full support for all major agentic frameworks, model providers, and MCP-connected systems. Our single and adaptive multi-turn tests span a multitude of risk areas, meaning you'll get an incredibly deep understanding of your agent's behaviors. 

For users concerned about specific threats unique to their agent or application, AI Defense supports custom objectives. You just provide a simple, natural language description of the test you want to perform, and our red team agent will handle the rest. 

Understand and personalize AI risk assessments 

Whether you're performing a quick safety and security assessment of an open-source model or a deep analysis of the models and applications behind your agentic workflow, AI Defense: Explorer Edition makes red team test results easy to understand and share. 

At the highest level, comprehensive risk scores give users an idea of how their model or agent performed across different content categories and adversarial techniques. Results are mapped to Cisco's Integrated AI Security and Safety Framework, one of the industry's most comprehensive taxonomies of AI threats. These reports make it easy to measure risk, communicate across AI stakeholders, and understand exactly what guardrails are needed to secure an agentic AI application. 

Get started with Cisco AI Defense: Explorer Edition 

With the launch of Cisco AI Defense: Explorer Edition, we're putting agentic AI red teaming in the hands of builders. Starting today, anyone can use the same algorithmic red teaming capabilities that power our enterprise solutions to test alignment, uncover susceptibility to attacks, and simplify reporting for their own models and agents. 

Ready to break your AI agents before attackers do? Get started with Cisco AI Defense: Explorer Edition here.

I’d like Alerts: