2ndlaw: epistemic governance for LLMs

Integrating with 2ndlaw

Integration with 2ndlaw is intentionally simple. You send raw text and optional evidence. 2ndlaw performs a governed inference step inside its own execution environment and returns a structured output. There are no deployable components, no prompts to maintain, and no embedded runtime logic on your side.

Your systems decide when a governed inference call should run. 2ndlaw decides how the inference behaves. Governance is always server-side.

Request shape

The request schema is fixed and versioned. You provide:

  • task: the natural-language instruction
  • input: user-provided text or data
  • evidence: optional additional documents
  • metadata: optional identifying fields

You populate the schema. 2ndlaw performs the governance. The structure of the request does not change per customer.
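The fixed schema above can be sketched as a plain payload builder. The field names follow the documented shape; the helper function itself is illustrative, not part of any official SDK.

```python
# Sketch of assembling a request for 2ndlaw's fixed, versioned schema.
# Field names (task, input, evidence, metadata) follow the documented
# shape; the build_request helper is illustrative, not an official API.

def build_request(task, input_text, evidence=None, metadata=None):
    """Assemble a request payload; only task and input are required."""
    if not task or not input_text:
        raise ValueError("task and input are required")
    payload = {"task": task, "input": input_text}
    if evidence is not None:
        payload["evidence"] = evidence    # optional additional documents
    if metadata is not None:
        payload["metadata"] = metadata    # optional identifying fields
    return payload

req = build_request(
    task="Summarize the termination clause",
    input_text="The agreement may be terminated by either party...",
    evidence=["full contract text here"],
)
```

Because the structure does not change per customer, this builder can live in shared tooling rather than per-integration code.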

The governed step

Each call runs as a single governed inference step. 2ndlaw:

  • applies admissibility and evidence-class rules,
  • determines a sufficiency state,
  • executes a single-pass non-agentic inference,
  • returns one of three structured outputs.

Your orchestrator is responsible only for deciding when a governed inference step should occur. It does not influence, modify, or host the governance logic.

Output

The runtime returns one of three mutually exclusive outputs:

  • answer — the governed result under EVIDENCE_AVAILABLE
  • data-request — a structured request for specific external evidence
  • void — a structured explanation when the task is not answerable

Outputs include the runtime version to support auditing and comparison across model or contract upgrades.
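Because the three outputs are mutually exclusive, client code can branch on a single discriminator. The field names below (`type`, `runtime_version`, `requested_evidence`, `explanation`) are assumptions for illustration; the actual response schema is authoritative.

```python
# Hypothetical dispatcher over the three mutually exclusive outputs.
# The key names ("type", "runtime_version", etc.) are assumed for this
# sketch and may differ from the real response schema.

def handle_output(output):
    kind = output["type"]
    version = output.get("runtime_version")  # keep for auditing/comparison
    if kind == "answer":
        # governed result produced under EVIDENCE_AVAILABLE
        return ("use", output["answer"], version)
    if kind == "data-request":
        # fetch the requested external evidence, then re-issue the call
        return ("fetch", output["requested_evidence"], version)
    if kind == "void":
        # the task was judged not answerable; surface the explanation
        return ("stop", output["explanation"], version)
    raise ValueError(f"unknown output type: {kind}")

action, detail, ver = handle_output(
    {"type": "void",
     "explanation": "insufficient evidence for this task",
     "runtime_version": "v3"}
)
```

Logging the returned runtime version alongside each decision makes audits across model or contract upgrades straightforward.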

No SDK lock-in

You can call 2ndlaw with:

  • HTTP requests,
  • your own internal tools,
  • or a lightweight wrapper if you prefer.

There are no client-side packages or prompts that require updates. Governance stays on the server, versioned and stable.
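A lightweight wrapper needs nothing beyond the standard library. The endpoint URL and authorization header below are placeholders, not documented values; the transport is injectable so the wrapper can be exercised without a live call.

```python
import json
from urllib import request as urlrequest

# Minimal wrapper sketch. ENDPOINT and the Bearer auth scheme are
# placeholders, not documented values. `transport` is injectable so the
# wrapper can be tested without touching the network.

ENDPOINT = "https://api.example.com/v1/governed-inference"  # placeholder

def _http_transport(url, body, headers):
    req = urlrequest.Request(url, data=body, headers=headers, method="POST")
    with urlrequest.urlopen(req) as resp:
        return json.loads(resp.read())

def governed_call(payload, api_key, transport=_http_transport):
    """Send one governed inference request and return the parsed output."""
    body = json.dumps(payload).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
    }
    return transport(ENDPOINT, body, headers)

# Exercised with a fake transport; no network involved.
fake = lambda url, body, headers: {"type": "answer", "answer": "ok"}
result = governed_call({"task": "t", "input": "i"}, "key", transport=fake)
```

Keeping the wrapper this thin matches the point above: there is nothing client-side that needs updating when governance changes on the server.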