Frequently Asked Questions
Overview
Is ThirdLaw an AI firewall, GenAI gateway, LLM observability tool, AI-SPM, or AI governance platform?
ThirdLaw is closest to AI runtime enforcement (sometimes described as an AI firewall / guardrails layer) with investigation and SecOps workflow integration. It can complement:
- LLM observability tools, which emphasize tracing, latency, token usage, and debugging, and are typically aimed at developers.
- AI-SPM tools, which emphasize posture management and visibility across the AI estate.
- AI governance tools, which emphasize governance program workflows and risk management.
Does ThirdLaw support both workforce AI and custom AI applications?
ThirdLaw is designed to support both, depending on where your organization can integrate and observe AI interactions. Most teams start with the highest-risk applications or workflows first, then expand coverage as policies and operational processes mature.
What problems does ThirdLaw address?
ThirdLaw helps reduce operational AI risk such as prompt injection and jailbreak attempts, sensitive data exposure, unsafe or unauthorized tool use by agents, policy-violating outputs, and missing audit evidence for investigations. It is built for real-time decisions about whether a specific prompt, output, or action should be allowed.
Who is ThirdLaw for?
ThirdLaw is designed for Security and IT teams that need production controls for LLM applications and agentic workflows. After connecting ThirdLaw to applications and infrastructure, Security teams define and manage policies. The goal is consistent policy enforcement across AI systems and faster incident response when AI behavior becomes risky.
What is ThirdLaw?
ThirdLaw is a runtime enforcement and investigation platform for AI-enabled applications and agents. It captures AI interactions, evaluates them against enterprise policy (“Laws”), and can take actions such as monitoring, blocking, redacting, or escalating when policy is violated. It also preserves investigation-ready records so security and IT teams can reconstruct what happened, what controls ran, and what actions were taken.
What is an AI Control System?
An AI Control System includes mechanisms, frameworks, and processes designed to monitor, regulate, and constrain artificial intelligence (AI) systems to ensure they operate within predefined boundaries and do not cause unintended or harmful consequences. These systems include both technical methods (e.g., algorithms, guardrails) and governance practices (e.g., policies, audits) to align AI behavior with human objectives and safety requirements.
What challenges does the ThirdLaw platform address?
ThirdLaw tackles risks associated with LLMs, such as compliance violations, data security gaps, operational unpredictability, and lack of control over how models are used.
How is ThirdLaw different from other LLM security, observability or governance tools?
ThirdLaw combines observability, governance, and security for LLMs in one platform. It stands out by:
- Offering real-time enforcement of AI safety rules (vs. static compliance checks).
- Bridging observability with actionable governance for enterprise IT workflows (integrating with tools like Splunk and Datadog).
- Providing no-code guardrails that combine semantic analysis and pattern detection.
What can I do with the ThirdLaw platform?
Use ThirdLaw to monitor, govern, and enforce compliance on LLM usage in real-time, protecting your organization from risks like misuse, data leaks, and unsafe outputs.
What are the services offered by ThirdLaw?
ThirdLaw offers four core services:
- Collect: Captures LLM activity logs, providing full visibility.
- Evaluate: Analyzes LLM inputs and outputs for compliance, safety, and performance.
- Intervene: Enforces real-time controls to block risky behavior.
- Investigate: Reconstructs full LLM exchanges and sessions to enable root cause analysis, audit trails, and incident response.
Getting Started
How do I start: gateway, SDK, agent framework, or OpenTelemetry?
ThirdLaw can be integrated at different points depending on where you want to observe and control AI behavior. Many teams start at the boundary where AI traffic is already centralized, then expand coverage as they add agent/tool visibility.
- Gateway/proxy: enforce policy where requests and responses pass through a single control point.
- SDK instrumentation: enforce inside application code paths with fine-grained context (route, user role, workflow step).
- Agent/tool boundary: control tool calls, parameters, and high-risk agent actions.
- Telemetry ingestion (e.g., OpenTelemetry): collect interaction signals for investigation and monitoring when inline enforcement is not required.
For the current integration options and setup steps, see the documentation.
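To make the SDK option concrete, the control flow might look like the sketch below. Every name here (`evaluate_interaction`, `guarded_completion`, the "block"/"allow" verdicts, the blocked-term check) is a hypothetical stand-in, not the actual ThirdLaw SDK API; consult the documentation for real integration code.

```python
# Hypothetical sketch of SDK-style inline enforcement.
# The real ThirdLaw SDK API differs; this only illustrates the control flow.

def call_llm(prompt: str) -> str:
    # Stub standing in for your existing model call.
    return f"echo: {prompt}"

def evaluate_interaction(text: str, scope: str) -> str:
    # Stand-in for a policy check that returns a verdict.
    blocked_terms = {"confidential"}  # illustrative policy, not a real Law
    return "block" if any(t in text.lower() for t in blocked_terms) else "allow"

def guarded_completion(prompt: str, scope: str = "support-chat") -> str:
    # Evaluate the prompt before it reaches the model.
    if evaluate_interaction(prompt, scope) == "block":
        return "Request blocked by policy."
    response = call_llm(prompt)
    # Evaluate the model output before it reaches the user.
    if evaluate_interaction(response, scope) == "block":
        return "Response withheld by policy."
    return response
```

The same shape applies at a gateway: the checks simply run in the proxy rather than in application code.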
Can I start in monitor-only mode?
Yes. Most teams start monitor-only to see what would have triggered, measure false positives, and tune policies. When ready, enable runtime actions for a narrow scope first (one route, one role, one workflow) and expand over time.
What’s the quickest first use case?
Start with a single policy (“Law”) that answers one measurable question, scoped to one application route or one agent workflow. This makes it easy to validate detection quality, understand hit rate, and tune exceptions before enabling runtime actions more broadly.
How do I get started with ThirdLaw?
Once installed in your VPC, you can get started with ThirdLaw in under 5 minutes:
- Choose your collection point. Options include ThirdLaw SDKs, plug-ins for existing gateways, ThirdLaw APIs for ingestion, and OTEL-based Collectors.
- Log in to the ThirdLaw Console and create a new Scope.
- Select an existing Law, or create a new one, to apply within this Scope.
- Test your Laws in the Trial Environment.
- Set the status of your Scope to Active.
By following these steps, you can set up ThirdLaw in your LLM application within minutes, ensuring robust governance and control over your AI outputs.
Platform
Does ThirdLaw offer an on-premise deployment option?
Not at this time. ThirdLaw is designed to be deployed in customer managed VPCs within AWS.
Which 3rd party data storage platforms does ThirdLaw support?
ThirdLaw supports AWS S3 and Azure Blob Storage.
What Data Types Does ThirdLaw Support?
ThirdLaw processes a wide range of data types:
- Textual data: Plain text and structured documents (e.g., chatbot logs, user queries).
- Images: AI-generated images or content moderation tasks (e.g., screenshots).
- Audio: Speech-to-text transcription (e.g., call recordings).
- Video: Video transcription and summarization (e.g., live-stream monitoring).
- Code: Programming scripts and files used for development or analysis.
- Numerical data: Structured numbers used for calculations, predictions, or analysis.
Can I use ThirdLaw Collect without using ThirdLaw Evaluate or ThirdLaw Intervene?
Yes, ThirdLaw Collect can function independently, allowing you to gather and monitor LLM interactions without necessarily employing the Evaluate or Intervene modules.
What are my deployment options for ThirdLaw?
ThirdLaw is deployed within your VPC, provisioned and run as a service. This allows you to maintain and protect your data within your own environment while retaining the benefits of a managed service. Some might call this configuration "Bring Your Own Cloud" or "Customer Managed Infrastructure".
How does ThirdLaw monitor LLM behavior in real-time without impacting application performance?
ThirdLaw uses lightweight monitoring agents that capture and process data in configurable ways, including asynchronously, so latency impact can be tuned while still providing actionable insights or interventions.
Does ThirdLaw support auditable data collection?
ThirdLaw provides a detailed log of all LLM interactions, capturing inputs, outputs, and context. You can use the platform's intuitive interface to search, filter, and review these interactions for compliance, performance, and risk analysis, ensuring full auditability. ThirdLaw records all edits and activities performed by ThirdLaw administrators as well.
Can ThirdLaw be used alongside traditional observability tools like Datadog or Splunk?
Yes, ThirdLaw complements traditional observability tools by focusing on LLM-specific risks, offering integrations to export insights and alerts for broader system monitoring.
How does ThirdLaw proactively prevent risks compared to reactive observability platforms?
ThirdLaw enforces real-time interventions, such as blocking unsafe outputs or alerting on policy breaches, rather than relying solely on post-event analysis.
Integrations
Which security tool integrations does ThirdLaw support?
ThirdLaw integrates with Splunk Enterprise, Splunk Enterprise Security (SIEM), and Splunk SOAR. ThirdLaw's Roadmap includes similar work with other market leaders in the XDR and Observability markets.
Which LLM models does ThirdLaw collect data from?
ThirdLaw collects data from OpenAI and Azure OpenAI, Anthropic, Mistral AI, Google Generative AI (Gemini), AWS Bedrock, and Google Vertex AI.
Which application interfaces and programming languages can ThirdLaw collect data from?
ThirdLaw has support for Python and TypeScript SDKs with additional programming languages on the roadmap.
Which existing Gateways does ThirdLaw support?
ThirdLaw currently supports a variety of versions of Kong API Gateway, Envoy, LiteLLM, and NGINX.
Using The Product
How does ThirdLaw handle multi-modal LLMs (e.g., text, images, or code)?
ThirdLaw supports multi-modal LLMs by applying tailored guardrails and monitoring logic for each modality, ensuring consistent risk mitigation across text, images, and code.
Does ThirdLaw require retraining or modifying my current LLMs to implement its guardrails?
No, ThirdLaw operates as a layer on top of your existing LLMs, enabling guardrails and monitoring without retraining or modifying the underlying models.
What kind of analytics and reporting does ThirdLaw provide on LLM interactions?
ThirdLaw offers dashboards and reports with metrics like input-output patterns, alert condition reports, and trend analyses for performance and safety.
What level of customization is possible with ThirdLaw's monitoring and guardrails?
ThirdLaw allows deep customization, including bespoke rules, semantic similarity thresholds, and integration with enterprise-specific compliance workflows.
Pricing and Billing
How much does ThirdLaw cost?
ThirdLaw is sold as an annual subscription that includes a monthly allowance of Inspections. Pricing scales with usage based on Inspections consumed.
What counts toward tokens and Inspections?
Only evaluated text counts. Prompts and outputs count when evaluated by active policy. Tool calls and tool outputs count if they are captured and evaluated under active policy. Out-of-scope traffic that is not evaluated does not count.
How are Inspections calculated?
Inspections are calculated as: (tokens processed ÷ 1,000) × active evaluations. An active evaluation is any evaluation that actually runs on an interaction based on scope and execution mode.
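As a quick sanity check, the formula can be expressed in a few lines of Python (the token and evaluation counts below are illustrative, not actual pricing figures):

```python
def inspections(tokens_processed: int, active_evaluations: int) -> float:
    """Inspections = (tokens processed / 1,000) x active evaluations."""
    return (tokens_processed / 1000) * active_evaluations

# A prompt/response pair totaling 2,500 evaluated tokens,
# checked by 4 active Evaluations, consumes 10 Inspections:
print(inspections(2500, 4))  # 10.0
```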
What is a token (for pricing)?
A token is a unit of text processed by language models and evaluators. For pricing, “tokens processed” refers to the amount of text from AI interactions that is evaluated under active policy (for example, prompts and outputs, and tool-call content if it is captured and evaluated).
What is an Evaluation?
An Evaluation is a self-contained logic module that answers a specific analytical question (e.g., “Is this hate speech?”) using a designated Analytic Engine (e.g., regex, semantic similarity, LLM-based classification). Evaluations may also be referred to as “evaluators” or “detections.” Each Evaluation consists of:
- Configuration: Parameters that define execution settings.
- Logic: The analytical method used to process input data.
- Evaluation Finding: The result of an Evaluation, which could be a binary decision (e.g., “Yes/No”), a confidence score, or another structured output.
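The structure above can be sketched in Python. This is a hypothetical illustration of the concept, not the actual ThirdLaw schema or SDK; the class, field names, and the regex check are all invented for the example:

```python
import re
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of an Evaluation: configuration, logic, and a
# structured finding. Names are illustrative, not the ThirdLaw schema.
@dataclass
class Evaluation:
    name: str
    config: dict                   # Configuration: execution settings
    logic: Callable[[str], bool]   # Logic: the analytical method

    def run(self, text: str) -> dict:
        # Evaluation Finding: here, a simple binary decision.
        return {"evaluation": self.name, "finding": self.logic(text)}

# A regex-based detection, one of the Analytic Engines mentioned above.
ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
contains_ssn = Evaluation(
    name="contains_us_ssn",
    config={"engine": "regex"},
    logic=lambda text: bool(ssn_pattern.search(text)),
)

print(contains_ssn.run("My SSN is 123-45-6789"))
# {'evaluation': 'contains_us_ssn', 'finding': True}
```

A semantic-similarity or LLM-based Evaluation would swap in different logic but return the same kind of structured finding.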
What is an Inspection?
An Inspection represents the application of your defined Evaluations (monitors, guardrails, or policies) to your data. It is calculated by multiplying the number of Tokens processed (in thousands) by the number of active Evaluations applied during that interaction: (Tokens Processed ÷ 1,000) × Active Evaluations = Number of Inspections. This approach ensures pricing scales transparently based on both the complexity of your monitoring needs (number of Evaluations) and the volume of data (number of Tokens).
Glossary of Concepts
What is Responsible AI?
"Responsible AI involves creating and deploying AI systems that uphold principles of beneficence, non-maleficence, autonomy, justice, and explicability, ensuring that AI operates for the good of society and avoids causing harm." Floridi, L., & Cowls, J. (2019). "A Unified Framework of Five Principles for AI in Society." Harvard Data Science Review
What is AI Safety?
"AI safety addresses the need to design and deploy AI systems that function safely and reliably under all conditions, including scenarios where systems may encounter unexpected or adversarial inputs." Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th Edition):
Your AI. Your Rules.
Take command of your LLM-connected applications and AI agents with tools designed to simplify oversight and enforce your policies.
