Frequently Asked Questions
Overview
What is ThirdLaw?
ThirdLaw is a runtime enforcement and investigation platform for AI-enabled applications and agents. It captures AI interactions, evaluates them against enterprise policy (“Laws”), and can take actions such as monitoring, blocking, redacting, or escalating when policy is violated. It also preserves investigation-ready records so security and IT teams can reconstruct what happened, what controls ran, and what actions were taken.
Who is ThirdLaw for?
ThirdLaw is designed for Security and IT teams that need production controls for LLM applications and agentic workflows. After connecting ThirdLaw to applications and infrastructure, Security teams define and manage policies. The goal is consistent policy enforcement across AI systems and faster incident response when AI behavior becomes risky.
What problems does ThirdLaw address?
ThirdLaw helps reduce operational AI risk such as prompt injection and jailbreak attempts, sensitive data exposure, unsafe or unauthorized tool use by agents, policy-violating outputs, and missing audit evidence for investigations. It is built for real-time decisions about whether a specific prompt, output, or action should be allowed.
Does ThirdLaw support both workforce AI and custom AI applications?
ThirdLaw is designed to support both, depending on where your organization can integrate and observe AI interactions. Most teams start with the highest-risk applications or workflows first, then expand coverage as policies and operational processes mature.
Is ThirdLaw an AI firewall, GenAI gateway, LLM observability tool, AI-SPM, or AI governance platform?
ThirdLaw is closest to AI runtime enforcement (sometimes described as an AI firewall / guardrails layer) with investigation and SecOps workflow integration. It can complement:
- LLM observability tools which emphasize tracing, latency, token usage, and debugging, typically aimed at Developers.
- AI-SPM tools which emphasize posture management and visibility across the AI estate.
- AI governance tools which emphasize governance program workflows and risk management.
Getting Started
What’s the quickest first use case?
Start with a single policy (“Law”) that answers one measurable question, scoped to one application route or one agent workflow. This makes it easy to validate detection quality, understand hit rate, and tune exceptions before enabling runtime actions more broadly.
Can I start in monitor-only mode?
Yes. Most teams start monitor-only to see what would have triggered, measure false positives, and tune policies. When ready, enable runtime actions for a narrow scope first (one route, one role, one workflow) and expand over time.
How do I start: gateway, SDK, agent framework, or OpenTelemetry?
ThirdLaw can be integrated at different points depending on where you want to observe and control AI behavior. Many teams start at the boundary where AI traffic is already centralized, then expand coverage as they add agent/tool visibility.
- Gateway/proxy: enforce policy where requests and responses pass through a single control point.
- SDK instrumentation: enforce inside application code paths with fine-grained context (route, user role, workflow step).
- Agent/tool boundary: control tool calls, parameters, and high-risk agent actions.
- Telemetry ingestion (e.g., OpenTelemetry): collect interaction signals for investigation and monitoring when inline enforcement is not required.
For the current integration options and setup steps, see the documentation.
Deployment, data, and operations
What deployment options are available?
ThirdLaw supports multiple deployment models, including managed single-tenant deployments and customer-managed or self-managed deployments (for example, in your Kubernetes environment), depending on requirements. The same core workflow applies across models: capture interactions, evaluate policy, and take action when policy is violated.
Where does ThirdLaw run and where is data stored?
Where ThirdLaw runs and where data is stored depends on the deployment model you choose. ThirdLaw is designed for clear data boundaries and strong data control through deployment choice and policy scoping, so you can match enforcement and evidence capture to your security requirements.
Can ThirdLaw run in my private network with no public internet exposure?
ThirdLaw can support private networking patterns depending on deployment and configuration. The common approach is to keep enforcement close to your apps or gateways and set clear data boundaries through deployment choice and scoped collection.
What does ThirdLaw collect: events, transactions, and sessions?
ThirdLaw organizes AI activity at multiple levels. An event is an atomic interaction (prompt, output, tool call). A transaction groups related steps of work. A session ties related transactions across a longer interaction. This structure supports both runtime controls and forensic reconstruction.
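The event/transaction/session hierarchy described above can be pictured with a minimal sketch. This is an illustrative data model only, not ThirdLaw's actual schema; the class and field names here are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    """An atomic interaction: a prompt, an output, or a tool call."""
    event_id: str
    kind: str          # e.g. "prompt" | "output" | "tool_call"
    content: str

@dataclass
class Transaction:
    """Groups the related steps of one unit of work."""
    transaction_id: str
    events: List[Event] = field(default_factory=list)

@dataclass
class Session:
    """Ties related transactions together across a longer interaction."""
    session_id: str
    transactions: List[Transaction] = field(default_factory=list)

    def timeline(self) -> List[Event]:
        """Flatten the session into an ordered event list for forensic review."""
        return [e for t in self.transactions for e in t.events]
```

Because every event belongs to a transaction and every transaction to a session, a full timeline can be reconstructed by walking the hierarchy, which is the property that supports both runtime decisions and post-incident investigation.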
Does ThirdLaw support data residency (region-specific storage/processing)?
Data location depends on the deployment model and region you choose. If you have residency requirements, select a deployment approach and region that meet your requirements and scope what you capture to the minimum needed for enforcement and investigations.
What latency impact should I expect?
Latency depends on what you evaluate inline, where you integrate, and how you chain fast checks versus deeper checks. ThirdLaw supports both inline evaluation (for strict runtime enforcement) and asynchronous evaluation (for deeper checks with lower user-facing latency impact), so teams can balance rigor and performance.
Can I mask or redact sensitive fields?
ThirdLaw supports scoping and data minimization controls so teams can reduce exposure of sensitive content. Depending on configuration and integration, you can apply redaction and limit what is stored and what is forwarded downstream.
Does ThirdLaw support agent approvals for high-risk actions?
ThirdLaw supports interventions that can include gating high-risk actions, depending on configuration and integration. This pattern is commonly used to require review for sensitive tool calls or actions while preserving a complete evidence trail for audits and investigations.
Policy and enforcement
Does ThirdLaw support multimodal (text, documents, images, audio/video)?
ThirdLaw supports AI workflows that include more than plain text, such as interactions that reference documents or other attached artifacts. What ThirdLaw can evaluate and enforce depends on your integration surface and which evaluations you enable. Many teams start by enforcing policy on text prompts and outputs, then extend coverage to additional artifacts and agent actions as needed.
For the current supported modalities and evaluation options, see the documentation.
How do you roll out policy changes safely (monitor-only, staged rollout, canary)?
Most teams start in monitor-only mode to measure what would have triggered, review false positives, and tune thresholds before enabling enforcement. Teams typically roll out enforcement by narrowing scope first (one route, one role, one agent workflow), then expanding coverage.
ThirdLaw policies are managed as explicit objects with lifecycle status and audit change records, so you can track what changed and what was active at a point in time.
How do you test a Law before production?
Teams typically test Laws in a staged workflow so they can measure impact and reduce false positives before enforcing actions in production.
- Author and scope the Law (app, route, environment, role, model, agent, tool) and choose the evaluations it will run.
- Test on examples (historical interactions or a curated test set) to validate logic and thresholds.
- Run monitor-only on real traffic to measure hit rate, review findings, and tune exceptions.
- Roll out enforcement gradually: start with one high-risk path or workflow, then expand scope as confidence increases.
What does ThirdLaw evaluate: prompts, outputs, context, and tool calls?
ThirdLaw evaluates the parts of an AI interaction that are captured by your integration surface. Most teams evaluate prompts and outputs first, then extend policies to tool calls and retrieval flows for agents and RAG systems.
- Prompts and outputs (text and attached artifacts where enabled)
- Tool calls and parameters (agent actions)
- Retrieved context (RAG inputs/outputs) where integrated
Can I sample or exclude certain traffic from evaluation to control cost and exposure?
Yes. Evaluations run only on traffic that matches an active Law's Scope. Use Scope to exclude routes, apps, or environments, or keep evaluations inactive for out-of-scope traffic, so that traffic is never evaluated.
How do you handle false positives?
Teams tune policies iteratively by adjusting thresholds, adding conditions, and applying scoped exceptions. A common strategy is to detect broadly but enforce narrowly, then expand enforcement as you improve precision and operational confidence.
We use multiple LLM providers. Can we enforce one policy everywhere?
ThirdLaw is designed to enforce the same Laws across multi-model environments when the interaction is observable at your integration point.
How do I prevent an AI agent from calling dangerous tools or endpoints?
Capture tool calls and enforce policies on allowed tools, conditions, and parameters. Scope policies by app/agent/role/environment and add interventions like block, require approval, or escalate when a tool call is high risk.
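The allow-list pattern above can be sketched in a few lines. This is a hypothetical illustration of the technique, not ThirdLaw's actual configuration format; the tool names, policy shape, and `gate_tool_call` function are assumptions made for the example.

```python
# Hypothetical tool allow-list: tool name -> set of permitted parameters.
ALLOWED_TOOLS = {
    "search_docs": {"query", "max_results"},
    "send_email": {"to", "subject", "body"},
}

# Tools that require human approval even when allowed.
HIGH_RISK_TOOLS = {"send_email"}

def gate_tool_call(tool: str, params: dict) -> str:
    """Return the intervention for a proposed agent tool call."""
    if tool not in ALLOWED_TOOLS:
        return "block"                      # unknown tool: deny outright
    if set(params) - ALLOWED_TOOLS[tool]:
        return "block"                      # unexpected parameter: deny
    if tool in HIGH_RISK_TOOLS:
        return "require_approval"           # gate high-risk actions for review
    return "allow"
```

The key design choice is default-deny: anything not explicitly allowed is blocked, and high-risk actions pass through an approval gate rather than executing automatically.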
Where does enforcement happen?
Enforcement happens at the point ThirdLaw is integrated, such as a gateway/proxy, within application code, or at an agent/tool boundary. This makes enforcement tied to the real interaction and supports auditability of what was evaluated and what action occurred.
How do I detect and block prompt injection in production?
Use a Law that evaluates prompts/outputs (and agent tool behavior) for injection risk, tune in monitor-only, then enable block/redact/escalate on high-risk flows. ThirdLaw includes starter Laws and tested detection models for common categories to accelerate rollout, and teams can customize scope and actions.
What runtime actions can ThirdLaw take?
ThirdLaw supports configurable actions at runtime, such as monitoring, blocking, redacting, rerouting, or escalating to review. Which actions are available depends on integration surface and configuration. Actions are designed to occur before the interaction causes harm, not after the fact.
What evaluation methods are supported?
ThirdLaw supports multiple evaluation methods so teams can match technique to policy, latency, and accuracy needs. Common approaches include patterns/rules, similarity, classifiers, and model-based validation. Teams can chain methods, running fast checks first and deeper checks only when needed.
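The chaining idea, fast checks first, deeper checks only on hits, can be sketched as follows. This is an illustrative pattern under assumed names, not ThirdLaw's actual API; the regex and the `deep_check` placeholder stand in for whatever rule-based and model-based evaluations a team configures.

```python
import re

# Fast rule-based check: a cheap pattern match that runs on every interaction.
SECRET_PATTERN = re.compile(r"(?i)\b(api[_-]?key|password)\b")

def fast_rule_check(text: str) -> bool:
    return bool(SECRET_PATTERN.search(text))

def deep_check(text: str) -> bool:
    """Placeholder for a slower classifier or model-based validator.

    In a real pipeline this would call a classifier; here it simply
    confirms anything the fast check flagged.
    """
    return True

def evaluate(text: str) -> str:
    # Run the cheap check first; pay for the deep check only on hits.
    if not fast_rule_check(text):
        return "allow"
    return "violation" if deep_check(text) else "allow"
```

Because most traffic fails the fast check, the expensive evaluation runs on only a small fraction of interactions, which is how chaining balances rigor against latency.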
How do you scope policies safely?
Policies are designed to be scoped so enforcement is precise and predictable. Teams typically scope by application and environment and may also scope by route, role, model, agent, or tool depending on integration. Start narrow to reduce blast radius, then expand as confidence increases.
What is an Evaluation?
An Evaluation is a reusable detection module that answers a specific question (for example, “Does this contain sensitive data?”) using a chosen method such as patterns/rules, semantic similarity, classifiers, or model-based validation. Each Evaluation produces a structured finding used by Laws to make enforcement decisions.
What is a “Law”?
A Law is a policy object that defines scope, conditions, and what should happen when policy is met or violated. A Law can apply one or more detection modules (“Evaluations”) and then determine whether to monitor, block, redact, escalate, or take other configured actions.
Integrations and workflows
Does ThirdLaw replace a SIEM, SOAR, or ITSM platform?
No. ThirdLaw is designed to generate AI-specific policy signals and supporting context, then route them into existing response workflows. Use SIEM/SOAR/ITSM platforms for alert consolidation, case management, and automation; use ThirdLaw for AI runtime enforcement and investigation-ready evidence.
What does ThirdLaw send downstream to security workflows?
ThirdLaw can send policy violations and supporting context needed for triage, such as what triggered, which policy was involved, severity, and correlation identifiers. The goal is fast triage in existing tools with the ability to drill into a complete investigation record when needed.
Is ThirdLaw model-provider agnostic?
Yes. ThirdLaw is designed to work across multi-model, multi-provider environments. If ThirdLaw can observe the interaction at your integration point, it can evaluate and enforce policy independent of the underlying model vendor.
Security and trust
What security controls does ThirdLaw support?
ThirdLaw is designed for enterprise environments and supports common security expectations such as access controls, auditability, and encryption, depending on deployment and configuration. It is intended to fit into security operations workflows and provide evidence suitable for investigations and audits.
What evidence does ThirdLaw preserve for audits and investigations?
ThirdLaw preserves interaction records (prompts, outputs, tool calls, agent actions) and policy records (what evaluated, what was decided, what action occurred). This is designed to support incident response, audit evidence, and post-incident improvements to policies.
Can I reconstruct a full incident timeline for an AI session?
Yes. ThirdLaw records timelines across prompts, outputs, and tool calls and organizes activity into events, transactions, and sessions to support forensic reconstruction and triage.
How long is data retained, and what export options exist for audits or investigations?
Retention and export options depend on your deployment model and configuration. ThirdLaw is designed to preserve investigation-ready records of AI interactions and policy decisions, and to support exporting findings and supporting context into downstream workflows used for reporting, incident response, or audit evidence. For current retention behavior and export destinations/formats, see the documentation.
What security controls does ThirdLaw support (SSO/SAML, SCIM, audit logs, encryption)?
Security controls vary by deployment model and configuration. ThirdLaw is designed to support enterprise identity, auditability, and data protection for security operations. If enabled for your deployment, controls may include:
- Single sign-on (SSO) and role-based access controls
- Provisioning support (e.g., SCIM)
- Audit logs for administrative and policy changes
- Encryption controls (in transit and at rest, depending on deployment)
For the current control set by deployment option, see the security documentation.
Standards and governance
How does ThirdLaw relate to the OWASP Top 10 for LLM Applications?
ThirdLaw can help address several common LLM application risks by enforcing policy at runtime and preserving investigation-ready evidence across prompts, outputs, and (where integrated) tool calls and retrieval flows. Coverage depends on what you instrument and which evaluations and actions you enable.
Teams commonly use ThirdLaw to reduce risks such as prompt injection, sensitive data exposure, unsafe tool use, and gaps in audit evidence for incidents.
How does ThirdLaw support NIST-style AI risk management?
ThirdLaw supports AI risk management by operationalizing policy in production. It can help teams implement controls (monitoring and runtime interventions) and produce evidence of what happened and what controls were active at the time. ThirdLaw is not a compliance program by itself; it supports programs by providing enforceable controls and auditable records.
Coverage depends on deployment, integration points, and which policies you enable.
How are policies versioned, and can I prove what was enforced at a point in time?
ThirdLaw policies are managed as explicit objects with lifecycle status, runtime states, and audit change records, so teams can track changes over time and understand which policy logic was active when an interaction occurred. This supports audits, incident reviews, and controlled rollouts.
Pricing and billing
What is an App Under Control (pricing unit)?
An App Under Control is one AI-enabled application or agent workflow that sends content to or receives content from an LLM and is covered by ThirdLaw Laws. Plans are packaged by the number of Apps Under Control.
What is Scope?
Scope defines where a Law applies, for example specific users, applications, LLM models/providers, agents, or business contexts. Scope can be global or restricted to specific contexts.
See “Policy and enforcement” for definitions of Laws and Evaluations.
What is an Inspection (pricing metric)?
An Inspection measures how much AI interaction text is evaluated under active policy. Inspections scale with both the volume of text evaluated (tokens) and the number of Evaluations applied.
How is App Under Control different from Scope?
Apps Under Control is the commercial packaging unit. Scope is how you target a Law to specific contexts within or across those apps (for example “Customer Service app using OpenAI” vs “Customer Service app using Anthropic” vs “all models used by the Customer Service app”).
Are Custom Laws metered?
Custom Laws are included. Pricing is driven by Apps Under Control and Inspections consumed when Evaluations run on interactions that match active Law Scope.
How are Inspections calculated?
Inspections are calculated as: (tokens processed ÷ 1,000) × active evaluations. An active evaluation is any Evaluation that actually runs on an interaction based on Scope and execution mode.
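The formula above can be checked with a short worked example (the function name is just for illustration):

```python
def inspections(tokens_processed: int, active_evaluations: int) -> float:
    """Inspections = (tokens processed / 1,000) x active evaluations."""
    return (tokens_processed / 1000) * active_evaluations

# Example: a 2,500-token interaction matched by a Law that runs
# 3 Evaluations consumes (2500 / 1000) * 3 = 7.5 Inspections.
print(inspections(2500, 3))  # 7.5
```

Note that only Evaluations that actually run count: an interaction outside every active Law's Scope consumes zero Inspections regardless of its token count.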
What is a token (for pricing)?
A token is a unit of text processed by language models and evaluators. For pricing, “tokens processed” refers to the amount of interaction content evaluated by active policy (for example prompts and outputs, and tool-call content if captured and evaluated).
How do Laws and Evaluations affect pricing?
Inspections increase when more Evaluations run on more interaction content. Laws decide which Evaluations to run and when to run them (using Scopes).
What counts toward tokens and Inspections?
Only evaluated content counts. Prompts and outputs count when evaluated by an active Law. Tool calls and tool outputs count if captured and evaluated. Interactions that do not match any active Law Scope (and are not evaluated) do not count.
How much does ThirdLaw cost?
Trial is a 30-day evaluation. Scale and Enterprise are annual subscriptions packaged by Apps Under Control with an included Inspections allowance. If evaluation volume is unusually high, additional capacity is scoped during the engagement.
How ThirdLaw compares
How is ThirdLaw different from model-provider safety filters and built-in guardrails?
Built-in guardrails typically apply within a provider boundary and focus on safety filtering for prompts and outputs. ThirdLaw is an application-layer runtime enforcement and evidence platform that evaluates interactions at your integration points and can enforce enterprise policies across prompts, outputs, and agent/tool behavior. It also preserves investigation-ready records to support security workflows.
How is ThirdLaw different from an AI gateway or proxy?
Gateways and proxies often focus on traffic management, routing, and standardization across model providers. ThirdLaw can be deployed at the gateway layer, but its primary focus is policy evaluation, runtime intervention, and evidence across the interaction lifecycle, including tool calls and agent actions where integrated. Many teams use both: a gateway for traffic management and ThirdLaw for policy enforcement and security workflows.
How is ThirdLaw different from content moderation?
Content moderation typically labels content against safety categories and returns allow/deny or classifications. ThirdLaw can use similar detection methods, but it applies enterprise-specific policies in context (workflow step, route, role, tool), takes runtime actions, and preserves evidence for investigations. This makes it suitable for operational and security policy enforcement, not only content safety.
How is ThirdLaw different from AI governance platforms?
Governance platforms often manage program workflows like approvals, documentation, and inventory. ThirdLaw operationalizes policies in production by evaluating interactions and applying runtime interventions when needed, while preserving audit-ready evidence. Governance defines policy and accountability; ThirdLaw implements those policies as runtime controls.
How is ThirdLaw different from observability and tracing for LLM apps?
Observability and tracing help engineering teams debug and optimize systems (latency, errors, traces, cost). ThirdLaw is designed for security and operational control: it evaluates interactions against policy, can intervene at runtime, and routes violations with context into security workflows. Observability explains behavior; ThirdLaw decides whether behavior is allowed and preserves evidence when it is not.
How is ThirdLaw different from traditional DLP (Data Loss Prevention)?
Traditional DLP focuses on controlling sensitive data movement across broad enterprise channels. ThirdLaw applies AI-specific policy controls inside LLM applications and agent workflows, such as detecting sensitive data in prompts, outputs, and tool calls and enforcing actions like redaction or escalation in the AI interaction itself. Many teams use DLP for broad channels and ThirdLaw for AI runtime enforcement.
How is ThirdLaw different from AI posture management tools?
Posture tools generally focus on discovery and configuration risk across AI assets. ThirdLaw focuses on runtime enforcement and investigations for real AI interactions in production. If posture tools answer “what do we have and how is it configured,” ThirdLaw answers “should this prompt, output, or agent action be allowed right now” and provides evidence when policy is violated.
What does ThirdLaw not do (and where should I use other tools)?
ThirdLaw focuses on runtime policy enforcement, investigation, and evidence. It does not replace developer debugging tools, posture management tools, or governance program platforms.
- Use LLM observability tools for developer tracing/debugging and performance analytics
- Use AI-SPM for discovery and posture visibility across the AI estate
- Use governance platforms for program workflows and risk management documentation
- Use ThirdLaw when you need runtime controls + evidence + SecOps workflows
Glossary
What is a Finding vs a Violation?
A Finding is the structured output of an Evaluation (label, score, or result). A Violation is a policy decision made by a Law based on findings and conditions and may trigger a runtime action.
What is an Intervention?
An Intervention is a runtime action taken when policy requires it, such as blocking, redacting, rerouting, or escalating for review.
What is policy scope?
Scope defines where a Law applies so enforcement is precise and predictable. Teams use scoping to reduce blast radius and control rollout.
What is inline vs asynchronous evaluation?
Inline evaluation runs in the request/response path to enable strict runtime enforcement. Asynchronous evaluation runs out of band for deeper checks and lower user-facing latency impact.
Common AI security questions
What is AI runtime enforcement?
A control layer that evaluates AI interactions in real time and can block/redact/escalate before harm occurs.
What is prompt injection?
Crafted input intended to hijack instructions or cause unauthorized behavior; mitigate with detection plus runtime controls.
How do you prevent AI data leakage?
Detect PII/secrets in prompts/outputs/tool calls and enforce redaction or blocking with scoped policies.
Can I start with monitoring only?
Yes. ThirdLaw allows any Evaluation to be run in monitor-only to tune false positives before enabling enforcement.
Where does policy enforcement happen?
At your integration point (gateway, SDK, or agent/tool boundary).
What is an Inspection?
A usage unit combining evaluated tokens and how many evaluations ran.
What evidence do I get for audits?
Interaction, policy, decision, and action records with timestamps and identifiers.
What is AI TRiSM?
Gartner’s term for AI trust, risk, and security management covering governance, robustness, and data protection.
What is NIST AI 600-1?
NIST’s Generative AI Profile companion to the AI RMF, describing risks and actions for GAI.
How do I stop prompt injection attacks in production without breaking my app?
Start monitor-only on your highest-risk routes, tune thresholds and exceptions, then enforce a Law that blocks/redacts/escalates injection-like behavior at the gateway/SDK boundary.
How can I keep agents from exfiltrating data through tool calls?
Capture tool calls and parameters, restrict allowed tools by role/environment, and require approval for high-risk actions; evaluate retrieved context where integrated.
What do I need to show auditors about AI usage and controls?
Preserve an evidence chain: prompts/outputs/tool calls, which policies/evaluations ran, decisions made, and interventions taken with timestamps and identifiers.
Should AI Be Allowed to Do That?
See how ThirdLaw helps Security and IT teams make enterprise AI safer to run.
