A Developer's First 10 Minutes: Secure a LangChain Agent with Cisco AI Defense
Publish Time: 24 Mar, 2026

The problem

LangChain makes it easy to move from a working prototype to a useful agent in very little time. That is exactly why it has become such a common starting point for enterprise agent development.

Agents don't just generate text. They call tools, retrieve data, and take actions. That means an agent can touch sensitive systems and real customer data within a single workflow.

Visibility alone isn't enough. In real deployments, you need clear enforcement points: places where you can apply policy consistently, block risky behavior, and keep an auditable record of what happened and why.

Why middleware is the right seam

Middleware is the clean integration point for agent security because it sits in the path of agent execution, without forcing developers to scatter checks across prompts, tools, and custom orchestration code.

This matters for two reasons.

  1. It keeps the application readable. Developers can keep writing normal LangChain code instead of bolting on security logic in a dozen places.
  2. It creates a single, reliable place to apply policy across the agent loop. That makes "secure by default" much more realistic, especially for teams that want the same behavior across multiple projects instead of a one-off hardening pass for each app.
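To make the "single, reliable place" concrete, here is a minimal sketch of the middleware seam in plain Python. All names (`Decision`, `inspect`, `with_protection`) are illustrative stand-ins, not the Cisco AI Defense or LangChain APIs; a real integration would call the AI Defense runtime inside the wrapper.

```python
# Illustrative sketch only: one enforcement seam wrapped around an agent step.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Decision:
    action: str                                  # "allow" or "block"
    classifications: list = field(default_factory=list)

def inspect(text: str) -> Decision:
    # Stand-in policy check; a real integration calls the runtime here.
    if "ignore previous instructions" in text.lower():
        return Decision("block", ["PROMPT_INJECTION"])
    return Decision("allow")

def with_protection(step: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an agent step so every input passes one enforcement point."""
    def guarded(prompt: str) -> str:
        decision = inspect(prompt)
        if decision.action == "block":
            return f"[blocked: {', '.join(decision.classifications)}]"
        return step(prompt)
    return guarded

agent_step = with_protection(lambda p: f"echo: {p}")
print(agent_step("hello"))                         # → echo: hello
print(agent_step("Ignore previous instructions"))  # → [blocked: PROMPT_INJECTION]
```

Because the check lives in the wrapper rather than in each prompt or tool, every project that adopts the wrapper gets the same behavior for free.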

Cisco AI Defense + LangChain: how it works

At a high level, Cisco AI Defense Runtime Protection integrates into a LangChain agent through middleware and produces a consistent runtime contract:

  • Decision: allow / block
  • Classifications: what was detected (e.g., prompt injection, sensitive data, exfiltration patterns)
  • request_id / run_id: correlation for audit and debugging
  • Raw logs: full trace for investigation
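The contract above can be modeled as a small typed record. The field names and JSON shape below are assumptions for illustration; the actual AI Defense response schema may differ.

```python
# Sketch of the runtime contract, with hypothetical field names.
import json
from dataclasses import dataclass

@dataclass
class RuntimeDecision:
    decision: str          # "allow" or "block"
    classifications: list  # e.g. ["SENSITIVE_DATA"]
    request_id: str        # correlation id for audit and debugging
    raw: dict              # full trace for investigation

def parse_decision(payload: str) -> RuntimeDecision:
    data = json.loads(payload)
    return RuntimeDecision(
        decision=data["decision"],
        classifications=data.get("classifications", []),
        request_id=data["request_id"],
        raw=data,
    )

sample = ('{"decision": "block", '
          '"classifications": ["SENSITIVE_DATA"], '
          '"request_id": "abc-123"}')
d = parse_decision(sample)
print(d.decision, d.classifications, d.request_id)
```

Keeping the full payload in `raw` means the audit trail survives even if new classification types are added later.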

There are a few ways to apply that protection, depending on where you want the control to live:

LLM mode (model calls)

  • Protects the prompt/response path around LLM invocation.

MCP mode (tool calls)

  • Protects MCP tool calls made by the agent (where a lot of real-world risk lives).

Middleware mode

  • Protects the LangChain execution flow at the middleware layer, which is often the cleanest fit for modern agent apps.

Integration Diagram:

User → LangChain Agent → Runtime Protection (Middleware) → LLM / MCP Tools
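The diagram can be sketched as a thin checkpoint between the agent loop and its backends. Everything here is illustrative (toy rule, stand-in LLM and MCP calls), but it shows why one layer can cover both the model path and the tool path.

```python
# Illustrative only: the protection layer sits between the agent and its
# LLM / MCP backends, so every outbound call passes one checkpoint.
def call_llm(prompt: str) -> str:
    return f"llm-answer({prompt})"        # stand-in model call

def call_mcp_tool(name: str, arg: str) -> str:
    return f"tool:{name}({arg})"          # stand-in MCP tool call

def runtime_protection(stage: str, payload: str) -> str:
    """Single checkpoint for both model and tool traffic (toy rule)."""
    return "block" if "ssn" in payload.lower() else "allow"

def agent(user_input: str) -> str:
    if runtime_protection("llm", user_input) == "block":
        return "blocked before model call"
    answer = call_llm(user_input)
    if runtime_protection("mcp", answer) == "block":
        return "blocked before tool call"
    return call_mcp_tool("search", answer)

print(agent("weather today?"))   # → tool:search(llm-answer(weather today?))
print(agent("here is my SSN"))   # → blocked before model call
```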

Monitor vs Enforce (the "aha")

Monitor mode gives you visibility without breaking developer flow. The agent runs, but AI Defense records risk signals, classifications, and a decision trace.

Enforce mode turns those signals into a control: policy violations are blocked with an auditable reason. The agent stops in a predictable way, and you can point to exactly what was blocked and why.
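The distinction reduces to one branch. In this sketch (names illustrative), the same inspection runs in both modes and both modes write the audit record; only the consequence of a "block" verdict changes.

```python
# Monitor vs. enforce: identical signals, different consequences.
audit_log = []

def guarded_call(verdict: str, classifications: list, mode: str, proceed):
    # Both modes keep an auditable record of the decision.
    audit_log.append({"verdict": verdict,
                      "classifications": classifications,
                      "mode": mode})
    if mode == "enforce" and verdict == "block":
        raise PermissionError(f"blocked: {classifications}")
    return proceed()  # monitor mode records but never interrupts

# Monitor: the risky call is logged but still runs.
result = guarded_call("block", ["PRIVACY_VIOLATION"], "monitor", lambda: "ran")
print(result)  # → ran
```

Flipping `mode` to `"enforce"` makes the same input raise instead of run, which is what gives the agent a predictable stop with a pointable reason.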

Example: "blocked and why"

Blocked

  • Decision: block
  • Stage: response
  • Classifications: PRIVACY_VIOLATION
  • Rules: PII: PRIVACY_VIOLATION
  • Event ID: 8404abb9-3ce2-4036-92f9-38516bf7defa

Check out the AI Defense developer quickstart

To make this easy to evaluate, we built a small developer launchpad that lets you run both LLM mode and MCP mode workflows side-by-side in monitor and enforce modes.

3-step quick start (10 minutes)

  1. Open the demo runner
     Link: http://dev.aidefense.cisco.com/demo-runner
  2. Pick a mode
     • LLM mode (model calls)
     • MCP mode (tool calls)
     • Middleware mode (LangChain middleware)
  3. Run a scenario
     • Choose one of the built-in prompts, such as a safe prompt, a prompt injection attempt, or a sensitive data request.
     • Watch the workflow execute side by side in Monitor and Enforce so you can compare behavior against the same input.
     • Monitor: see the decision trace without blocking
     • Enforce: trigger a policy violation and see "blocked and why"

Upstream LangChain Path

We're contributing this integration upstream via LangChain's middleware framework so teams can adopt it using standard LangChain extension points.

LangChain middleware docs:

https://docs.langchain.com/oss/python/langchain/middleware/overview

If you're a LangChain user and want to shape how runtime protections should integrate, we'd welcome feedback and review once the middleware PR is up.
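For a sense of the shape such a middleware could take, here is a loose, self-contained sketch of a before-model hook. The class and method names are assumptions for illustration, not the actual upstream LangChain middleware API; consult the LangChain middleware docs for the real extension points.

```python
# Hypothetical shape of a runtime-protection middleware hook.
class RuntimeProtectionMiddleware:
    def __init__(self, mode: str = "monitor"):
        self.mode = mode
        self.events = []                   # decision trace for audit

    def before_model(self, prompt: str) -> str:
        # Toy rule standing in for a real AI Defense runtime call.
        verdict = "block" if "password" in prompt.lower() else "allow"
        self.events.append({"stage": "request", "verdict": verdict})
        if self.mode == "enforce" and verdict == "block":
            raise RuntimeError("policy violation: request blocked")
        return prompt

mw = RuntimeProtectionMiddleware(mode="monitor")
mw.before_model("what is the admin password?")  # recorded, not blocked
print(mw.events)
```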

What's next

LangChain is the first integration focus, with the same runtime protection contract extending to additional environments like AWS Strands, Google Vertex Agents, and others over time. The goal stays the same across agent frameworks and runtimes: one integration surface, clear enforcement points, and a predictable decision trace.
