Complexity: Intermediate
Type: Safety

Emergency Stop

A pre-filter layer that intercepts high-risk, malformed, or unsafe inputs before they reach the AI, or unsafe outputs before they reach the user. It works as a kill switch: halting the interaction instantly, showing a safe fallback, and preventing the model from even attempting to process dangerous content.

1. Trust Challenge

What is the core risk to user trust, and when does it matter most?

AI systems can unintentionally respond to harmful or destabilizing inputs if they aren't screened beforehand. Without a guard that can stop critical failures early, the model may attempt to answer:

  • Emergency or crisis content
  • Self-harm or harm-to-others disclosures
  • Illegal, abusive, or violent intent
  • Highly malformed, non-parsable, or corrupted input
  • Requests that would lead to system errors or unpredictable behavior

If the AI tries to "be helpful" in any of these scenarios, the result can be both unsafe and trust-breaking.

Critical moments where this pattern matters most:

  • Crisis Detection: A user expresses harm to self or others and the system must stop responding conversationally and redirect to human or emergency resources.

  • Severe Policy Violations: Inputs that demand illegal actions or breach platform rules.

  • Structural Corruption: Inputs that are too malformed or ambiguous to interpret (system-level gibberish, broken tool calls, malformed JSON payloads).

  • System Faults: Model timeouts, corrupted responses, or upstream failures where continuing interaction would confuse or mislead users.

Without an Emergency Stop, the system continues "trying to answer" when it should decisively halt, opening users to potential harm.

2. Desired Outcome

What does 'trust done right' look like for this pattern?

A well-implemented Emergency Stop ensures the AI never proceeds when the safest action is to pause.

Immediate Interception

Unsafe input never reaches the AI model; the flow is stopped at the gate.

Safe Fallback

Users see a calm, directive message explaining that the system cannot continue.

Clear Next Steps

The user is redirected to safe alternatives: human support, crisis resources, compliant pathways, or a restart.

Predictable Behavior

The system stops in the same way every time, reducing ambiguity in sensitive moments.

Success State

Users feel reassured that the system won't say something harmful or misleading. When a request crosses a line, it clearly stops and guides them to safer options instead of trying to guess.

3. Implementation Constraints

What limitations or requirements shape how this pattern can be applied?

To apply Emergency Stop effectively, you need:

Requirements

  • High-risk Classifier: A deterministic or ML-based filter that can detect crisis language, illegal intent, abusive content, or malformed requests.
  • Immediate Short-circuiting: A mechanism that halts the pipeline before generation and before any tools or reasoning steps execute.
  • Fallback Templates: Pre-defined, vetted messages appropriate for each failure mode (crisis, illegal request, corruption, unknown error).
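The fallback-template requirement above can be sketched as a fixed mapping from failure mode to a pre-vetted message, so the stop response is never free-form model output. This is a minimal illustration; the category names and message copy are hypothetical, not taken from any specific product.

```python
from enum import Enum


class RiskCategory(Enum):
    """Hypothetical failure modes an Emergency Stop might distinguish."""
    CRISIS = "crisis"
    ILLEGAL = "illegal"
    MALFORMED = "malformed"
    SYSTEM_ERROR = "system_error"


# Pre-defined, vetted messages: one per failure mode, reviewed ahead of time.
FALLBACK_TEMPLATES = {
    RiskCategory.CRISIS: (
        "We can't continue with this request. If you or someone else is in "
        "danger, please contact local emergency services or a crisis hotline."
    ),
    RiskCategory.ILLEGAL: (
        "We can't continue with this request because it violates our usage "
        "policies. Contact support if you believe this is a mistake."
    ),
    RiskCategory.MALFORMED: "We couldn't process this request. Please try again.",
    RiskCategory.SYSTEM_ERROR: (
        "Something went wrong on our side. Please try again in a moment."
    ),
}


def fallback_for(category: RiskCategory) -> str:
    """Look up the vetted message for a failure mode; never generate it."""
    return FALLBACK_TEMPLATES[category]
```

Keeping the templates in a static table (rather than asking the model to apologize) is what makes the stop predictable: the same failure mode always produces the same words.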

Constraints / Limitations

  • Over-triggering: Too sensitive a filter can block normal requests and frustrate users.
  • Localization & Sensitivity: Crisis or high-risk phrasing varies by language and culture; classifiers must be validated on relevant linguistic contexts.
  • Human Pathways: The redirect must be appropriate: crisis flows need real resources, not vague suggestions.

4. Pattern in Practice

What specific mechanism or behavior will address the risk in the product?

Core mechanism:

The Emergency Stop sits as a first-pass gate before any AI reasoning or tool use. The flow:

  1. Receive Input: User input enters the system.
  2. Screen for Risk: A high-risk classifier scans for crisis language, illegal intent, malformed input, or corruption patterns.
  3. If Risk Detected: Do not send the prompt to the LLM. Do not call any tools. Immediately return a safe fallback response with clear next steps.
  4. If Safe: Pass the input forward into the normal LLM pipeline.
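The flow above can be sketched as a single gate function that short-circuits before any model call. This is a minimal sketch, not a production implementation: the classifier and LLM are injected as plain callables, and all names (`emergency_stop_gate`, `StopResult`, and the risk labels) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class StopResult:
    halted: bool   # True when the Emergency Stop fired
    message: str   # either a vetted fallback or the model's answer


def emergency_stop_gate(
    user_input: str,
    classify_risk: Callable[[str], Optional[str]],  # returns a risk label or None
    fallback_messages: dict[str, str],              # must include an "unknown" entry
    call_llm: Callable[[str], str],
) -> StopResult:
    """First-pass gate: screen the input, short-circuit on risk, else forward."""
    risk = classify_risk(user_input)                 # step 2: screen for risk
    if risk is not None:                             # step 3: risk detected
        # The prompt never reaches the LLM and no tools are called.
        msg = fallback_messages.get(risk, fallback_messages["unknown"])
        return StopResult(halted=True, message=msg)
    # Step 4: safe input continues into the normal pipeline.
    return StopResult(halted=False, message=call_llm(user_input))
```

The key property is that `call_llm` sits strictly after the risk check, so a detected risk makes model invocation impossible rather than merely discouraged.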

Behavior in the UI / conversation:

From the user's perspective, Emergency Stop shows up as a single, clear interruption instead of a normal AI answer:

  • A short message that explains, at a high level, why the system can't continue with this request.
  • A pointer to what the user can do next (e.g., contact a human, use crisis resources, retry with a different kind of question).
  • No extra conversational back-and-forth from the AI after the stop—the safeguard is the last word for that input.

The system decisively halts instead of trying to "be helpful" in harmful ways.

Use these components to visualize the effect of the emergency stop.

1. Emergency Alert Card

Purpose: Main visible surface when a hard stop is triggered.

Structure: Prominent card or modal that replaces expected response.

Key Elements:

  • Icon + title: "Safety Notice", "We can't continue with this request"
  • 1–3 lines of explanation
  • Primary action: e.g., "Get help", "Contact support", "Try again"

Style: Prominent but calm; supportive tone, high contrast, limited text.

2. Soft Halt Toast

Purpose: For non-critical stops (e.g., malformed input, transient system error).

Structure: Small toast or banner at top/bottom of viewport.

Key Elements:

  • Short text: "We couldn't process this request. Please try again."
  • Optional "Retry" button

Style: Subtle banner or toast, non-alarming.

These components provide decisive, calm feedback that guides users to safe next steps without creating panic or confusion.

5. Best Used When

In which contexts does this pattern create the greatest trust value?

Emergency Stop is especially valuable when:

Crisis or Distress Messages

Your assistant may receive messages about self-harm, harm to others, or emergency situations requiring immediate human intervention.

Real-World Actions

The AI can control or influence transactions, devices, critical systems, or other high-stakes operations.

Structured Commands

Inputs can include tool calls or structured commands where malformed requests might cause unpredictable or unsafe behavior.

Regulatory Risk

There are regulatory or reputational risks if the AI responds inappropriately to certain topics or illegal requests.

In these settings, knowing the system will decisively stop—and not improvise—adds a meaningful layer of safety.

6. Use With Caution

When could applying this pattern create friction or unintended effects?

Risks and Anti-Patterns:

Over-protection

Blocking too frequently will frustrate users and make them feel censored or that the system is broken.

Vague Warnings

A generic "I can't help with that" without context or next steps creates confusion and erodes trust.

Insensitive Crisis Messaging

Tone must be non-judgmental and supportive; poorly phrased alerts can worsen user distress in crisis situations.

To use this pattern safely:

  • Tune filters based on real usage data: Monitor false positive rates and adjust thresholds.
  • Match crisis copy to local norms and verified resources: Ensure appropriate cultural sensitivity and real help pathways.
  • Support human review/override processes: Avoid over-triggering by having humans audit and adjust stop logic.
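The tuning advice above can be made concrete with a small sketch: given audited traffic where each sample has a risk score and a reviewer verdict, pick the most sensitive threshold that still keeps the false positive rate under a budget. The function name and data shape are assumptions for illustration.

```python
def pick_threshold(
    scores_labels: list[tuple[float, bool]],  # (risk_score, is_truly_risky)
    max_fpr: float = 0.02,                    # false-positive budget on safe traffic
) -> float:
    """Return the lowest score threshold whose false positive rate on
    audited traffic stays within max_fpr (lower threshold = more sensitive)."""
    candidates = sorted({score for score, _ in scores_labels})
    safe_scores = [score for score, risky in scores_labels if not risky]
    for t in candidates:
        false_positives = sum(1 for s in safe_scores if s >= t)
        fpr = false_positives / len(safe_scores) if safe_scores else 0.0
        if fpr <= max_fpr:
            return t
    return 1.0  # fall back to blocking only the most extreme scores
```

Re-running this on fresh audit data as usage patterns shift is one way to keep the filter sensitive without drifting into over-triggering.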

7. How to Measure Success

How will we know this pattern is strengthening trust?

North Star Metric

Emergency Stop Activation Accuracy

The percentage of Emergency Stop activations that were later confirmed, via audit reviews, to be appropriate.

False Positive Rate

Percentage of stops that blocked legitimate, safe requests—indicates over-sensitivity.

Crisis Redirect Success Rate

Percentage of users who engaged with provided crisis resources after a stop.

Stop-to-Recovery Time

Average time from Emergency Stop to user re-engagement with safe request.

User Feedback on Stops

Qualitative feedback specifically on Emergency Stop messages—tone, clarity, and helpfulness.
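The two rate metrics above (activation accuracy and false positive rate) can be computed directly from audit-reviewed stop records. A minimal sketch, assuming a hypothetical record shape where each audited interaction notes whether a stop fired and whether the reviewer judged it appropriate:

```python
def stop_metrics(audits: list[dict]) -> dict:
    """Compute Emergency Stop Activation Accuracy and False Positive Rate.

    Each record is a hypothetical audit entry:
      {"stopped": bool, "appropriate": bool}  # "appropriate" = reviewer verdict
    """
    stops = [a for a in audits if a["stopped"]]
    if not stops:
        # No activations in this window; rates are undefined, not zero.
        return {"activation_accuracy": None, "false_positive_rate": None}
    appropriate = sum(1 for a in stops if a["appropriate"])
    return {
        "activation_accuracy": appropriate / len(stops),
        "false_positive_rate": (len(stops) - appropriate) / len(stops),
    }
```

Note that both rates are computed over activations only; missed stops (risky inputs that slipped through) need a separate recall-style measure on the full audited sample.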