
@luxoai-dev/agent-core · API Reference

Agent Core.

Stable · v0.11.0

Every public export of @luxoai-dev/agent-core: action registry, agent runtime, LLM gateway with SSE streaming and reasoning-delta chunks, judge, audit sinks (with optional auditMode: "required"), human-feedback validation and route factory, and the four provider adapters. The signatures here are the source of truth for what to import, what to pass, and what to expect back.

At a glance
Package
@luxoai-dev/agent-core
Version
v0.11.0
Modules
. · /llm · /feedback · /adapters/[provider]
Runtime
Node 20+, browser-incompatible

Core module

Imported directly from @luxoai-dev/agent-core. Action registry, agent runtime, directive parsing, and context helpers.

import { ... } from "@luxoai-dev/agent-core";
createActionRegistry · defineAction · ActionRegistry · defaultActionPolicy · createAgent · createStaticModel · createSequenceModel · parseAssistantDirectives · renderContextBlocks · ActionDefinition · ActorContext · AgentRuntimeOptions · AgentRespondInput · AgentRespondResult
Function

createActionRegistry

since v0.6 · Stable
function createActionRegistry(
  actions?: ActionDefinition[],
  policy?: ActionPolicy,
): ActionRegistry
Build an ActionRegistry from a list of action definitions. Optionally override the policy that decides which actor can use which action; defaults to defaultActionPolicy, which honors the action’s availability + actor roles.
Parameters
- actions (ActionDefinition[], optional): Initial action definitions. Each id (and any aliases) must be unique.
- policy (ActionPolicy, optional): Custom policy function. Defaults to the built-in role/availability policy.

Returns

ActionRegistry

Mutable registry instance with list / register / validate methods.
Example (server/actions.ts):
import {
  createActionRegistry,
  defineAction,
} from "@luxoai-dev/agent-core";

export const actions = createActionRegistry([
  defineAction({
    id: "ticket.summarize",
    description: "Summarize a customer ticket.",
    risk: "safe",
    availability: "authenticated",
  }),
  defineAction({
    id: "ticket.issue_refund",
    description: "Issue a refund. Admin only.",
    risk: "restricted",
    availability: "admin",
  }),
]);
Function

defineAction

Stable
function defineAction<TParams = Record<string, unknown>>(
  definition: ActionDefinition<TParams>,
): ActionDefinition<TParams>
Identity helper that gives full TypeScript inference on action params. Definitions still flow through createActionRegistry; never register two actions with the same id.
Parameters
- definition (ActionDefinition&lt;TParams&gt;, required): The action shape: id, description, risk class, availability, optional schema and aliases.

Returns

ActionDefinition<TParams>

The same definition, typed for downstream inference.
Class

ActionRegistry

Stable
class ActionRegistry {
  list(): ActionDefinition[]
  register(action: ActionDefinition): void
  unregister(id: string): boolean
  has(id: string): boolean
  resolve(id: string): ActionDefinition | undefined
  validate(proposal: ActionProposal, actor?: ActorContext): ValidatedAction
}
The mutable registry produced by createActionRegistry. validate() is the policy gate: it returns a ValidatedAction with status allowed, requires_confirmation, or blocked.
Example:
const result = actions.validate(
  { id: "ticket.draft_reply", params: { ticketId: "TKT-3042" } },
  { userId: "u_123", roles: ["support_agent"] },
);

if (result.status === "blocked") {
  throw new Error(result.reason);
}
// "requires_confirmation" → surface to operator before executing
// "allowed"                → execute now
Function

defaultActionPolicy

Stable
function defaultActionPolicy(
  context: ActionPolicyContext,
): { reason?: string; status: ValidatedActionStatus }
The built-in policy. Blocks restricted risk by default, honors availability tiers (public / authenticated / pro / admin), and returns requires_confirmation for confirm risk. Replace it via the second arg of createActionRegistry when you need to inject feature flags or tenant rules.
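Since the policy can be swapped via the second argument of createActionRegistry, a tenant-gating replacement can be sketched as follows. The context fields, the tenant-flag source, and the fall-through behavior are illustrative assumptions, not the library's actual ActionPolicyContext; in real code, delegate anything you don't handle to defaultActionPolicy:

```typescript
// Sketch of a custom policy. Shapes below are local assumptions that
// mirror the documented statuses, not the library's exported types.
type ValidatedActionStatus = "allowed" | "requires_confirmation" | "blocked";

interface PolicyInput {
  action: { id: string; risk: "safe" | "confirm" | "restricted" };
  actor?: { tenantId?: string };
}

// Hypothetical feature flag: which tenants may issue refunds.
const refundEnabledTenants = new Set(["acme"]);

function tenantAwarePolicy(
  context: PolicyInput,
): { status: ValidatedActionStatus; reason?: string } {
  if (
    context.action.id === "ticket.issue_refund" &&
    !refundEnabledTenants.has(context.actor?.tenantId ?? "")
  ) {
    return { status: "blocked", reason: "Refunds are not enabled for this tenant." };
  }
  // Everything else keeps confirm-risk semantics; delegate to
  // defaultActionPolicy here in real code.
  return context.action.risk === "confirm"
    ? { status: "requires_confirmation" }
    : { status: "allowed" };
}
```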
Function

createAgent

Stable
function createAgent(options: AgentRuntimeOptions): AgentRuntime
Wire a ModelAdapter, an ActionRegistry, and an optional trace sink into a runtime that can take user messages and return validated actions + follow-ups.
Parameters
- options.name (string, required): Logical agent name (used in trace events).
- options.model (ModelAdapter, required): Adapter that implements the model call. Use createStaticModel / createSequenceModel for tests.
- options.actions (ActionRegistry, optional): Action gate. Defaults to an empty registry, which means no actions are validated.
- options.systemPrompt (string | (input) => string, optional): Static or dynamic system instruction.
- options.maxActions (number, optional): Cap on how many actions may be proposed per turn. Defaults to 3.
- options.trace (TraceSink, optional): Async function called for each AgentTraceEvent (started, model.requested, action.proposed, etc.).

Returns

AgentRuntime

Object with respond(input): Promise<AgentRespondResult>.
Example:
import {
  createAgent,
  createActionRegistry,
  createStaticModel,
  defineAction,
} from "@luxoai-dev/agent-core";

const agent = createAgent({
  name: "support-assistant",
  model: createStaticModel(
    "Got it.\n!> ticket.summarize {\"ticketId\":\"TKT-3042\"}",
  ),
  // Register the proposed action so the registry can validate it.
  actions: createActionRegistry([
    defineAction({
      id: "ticket.summarize",
      description: "Summarize a customer ticket.",
      risk: "safe",
      availability: "authenticated",
    }),
  ]),
});

const result = await agent.respond({
  actor: { userId: "u_123", roles: ["support_agent"] },
  messages: [{ role: "user", content: "Triage this ticket." }],
});

console.log(result.actions);     // [ ValidatedAction ]
console.log(result.followUps);   // string[]
console.log(result.traceId);     // for downstream correlation
Function

createStaticModel

Stable
function createStaticModel(text: string, name?: string): ModelAdapter
Returns a deterministic model adapter that always replies with the same text. Useful for tests, snapshots, and offline development.
Parameters
- text (string, required): Fixed assistant text.
- name (string, optional): Model name reported back. Defaults to 'static'.

Returns

ModelAdapter

Function

createSequenceModel

Stable
function createSequenceModel(messages: string[], name?: string): ModelAdapter
Like createStaticModel, but cycles through a sequence of replies. Useful for multi-turn integration tests.
Parameters
- messages (string[], required): Replies in call order. Wraps when exhausted.
- name (string, optional): Model name reported back.
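The documented wrap-around behavior can be sketched in isolation. This mirrors the stated contract, not the library source:

```typescript
// Behavior sketch of the documented wrap-around (not the library source):
// replies are returned in call order and cycle when the list is exhausted.
function makeReplySequence(messages: string[]): () => string {
  let calls = 0;
  return () => messages[calls++ % messages.length];
}

const next = makeReplySequence(["First reply.", "Second reply."]);
next(); // "First reply."
next(); // "Second reply."
next(); // wraps: "First reply."
```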
Function

parseAssistantDirectives

Stable
function parseAssistantDirectives(
  text: string,
  options?: ParseDirectiveOptions,
): ParsedDirectives
Strip !> action and ?> follow-up directives from a model response. Returns the cleaned visible text plus structured action proposals and follow-up suggestions.
Example:
const parsed = parseAssistantDirectives(
  "Here is your summary.\n!> ticket.summarize {\"id\":\"TKT-3042\"}\n?> Draft a reply",
);
parsed.text;       // "Here is your summary."
parsed.actions;    // [{ id: "ticket.summarize", params: { id: "TKT-3042" } }]
parsed.followUps;  // ["Draft a reply"]
Function

renderContextBlocks

Stable
function renderContextBlocks(blocks: ContextBlock[]): string
Render an array of named context blocks into the canonical text format the model and tests expect (XML-style block delimiters).
Example:
renderContextBlocks([
  { name: "ticket", content: JSON.stringify(ticket, null, 2) },
  { name: "policy", content: "Refunds require manager approval." },
]);
// → "<ticket>...</ticket>\n<policy>Refunds require manager approval.</policy>"
Interface

ActionDefinition

interface ActionDefinition<TParams = Record<string, unknown>> {
  id: string;
  description: string;
  risk: ActionRisk;                 // "safe" | "confirm" | "restricted"
  availability?: ActionAvailability; // "public" | "authenticated" | "pro" | "admin"
  schema?: ParseableSchema<TParams>; // e.g. zod schema with .parse()
  aliases?: string[];
}
Fields
- id (string, required): Globally unique action id, dot-separated by convention.
- description (string, required): Human-readable purpose. The model sees this.
- risk (ActionRisk, required): Policy class: "safe" runs on proposal, "confirm" requires acceptance, "restricted" is blocked unless an explicit policy opens it.
- availability (ActionAvailability, optional): Who may invoke this action. Defaults to public.
- schema (ParseableSchema&lt;TParams&gt;, optional): Validation hook for params. zod, valibot, or any { parse } / { safeParse } shape works.
- aliases (string[], optional): Alternate ids that resolve to this action.
Interface

ActorContext

interface ActorContext {
  userId?: string;
  tenantId?: string;
  roles?: string[];
  plan?: "free" | "pro";
  attributes?: Record<string, unknown>;
}
The authenticated subject of a request. Populated server-side; never trusted from the client.
Fields
- userId (string, optional): Stable user id. Required for the authenticated/pro/admin availability tiers.
- tenantId (string, optional): Multi-tenant scope. Audit sinks use this for partitioning.
- roles (string[], optional): Free-form roles consulted by custom policies.
- plan ('free' | 'pro', optional): Plan tier. Used by the default policy for the 'pro' availability tier.
- attributes (Record&lt;string, unknown&gt;, optional): Bag for product-specific actor data (org id, region, feature flags).
Interface

AgentRespondInput

interface AgentRespondInput {
  messages: AgentMessage[];
  actor?: ActorContext;
  context?: Record<string, unknown>;
  metadata?: Record<string, unknown>;
  signal?: AbortSignal;
  traceId?: string;
}
Fields
- messages (AgentMessage[], required): Conversation history (system messages are injected by the runtime).
- actor (ActorContext, optional): Authenticated subject for policy decisions.
- context (Record&lt;string, unknown&gt;, optional): Free-form payload available to systemPrompt(input).
- metadata (Record&lt;string, unknown&gt;, optional): Pass-through metadata recorded on trace events.
- signal (AbortSignal, optional): Abort the current model call.
- traceId (string, optional): Use a known trace id; otherwise a random one is generated.
Interface

AgentRespondResult

interface AgentRespondResult {
  message: string;          // visible assistant text (directives stripped)
  actions: ValidatedAction[];
  followUps: string[];
  traceId: string;
  model: string;
  parseErrors: string[];
  usage?: ModelUsage;
}
Fields
- message (string, required): Visible text after directives have been stripped.
- actions (ValidatedAction[], required): Each proposal already passed through registry policy.
- followUps (string[], required): Suggested next-turn prompts (?> directives).
- traceId (string, required): Correlates downstream telemetry and audit.
- model (string, required): Model name reported by the adapter.
- parseErrors (string[], required): Soft errors from directive parsing; empty array on clean responses.
- usage (ModelUsage, optional): Token counts when the adapter reports them.

LLM gateway · /llm

The execution layer for all model calls. Resolves features → providers, runs a policy chain, calls an adapter, writes to an audit sink, and emits structured events you can pipe to observability.

import { ... } from "@luxoai-dev/agent-core/llm";
createLuxoGateway · createLLMGateway · GatewayError · createLLMJudge · ModelRouter · defaultPolicy · featureRegistered · requireActor · toolAllowlist · runPolicyChain · noopAuditSink · createSupabaseAuditSink · reportDroppedContent · consumeStream · hashCanonical · summarize · classifyLlmError · LLMRequest · LLMResponse · LLMMessage · LLMStreamChunk · LLMStreamFinal · LLMStreamResult · LuxoGatewayOptions · LLMGatewayEvent
Function

createLuxoGateway

since v0.7 · Stable
function createLuxoGateway(options: LuxoGatewayOptions): LLMGateway
The recommended entry point. Wraps createLLMGateway with sensible defaults: enforces actor presence (enforceActor: true), applies defaultPolicy, and merges the supplied defaultActor into every request. Use this unless you have a specific reason to bypass policy.
Parameters
- options.adapters (Record&lt;string, ProviderAdapter&gt;, required): Map of provider name → adapter (e.g. { openai, anthropic, deepseek }).
- options.routes (ModelRoute[], required): Maps a feature string to provider + model. The router rejects unknown features.
- options.policy (PolicyRule[], optional): Override the policy chain. Defaults to defaultPolicy when enforceActor is true, [featureRegistered] otherwise.
- options.audit (AuditSink, optional): Persist every call. Defaults to noopAuditSink.
- options.onEvent ((event) => void | Promise&lt;void&gt;, optional): Lifecycle hook for observability. Receives llm.call.{started,succeeded,failed} and llm.policy.blocked events.
- options.onAuditError ((error, entry) => void, optional): Called when the audit sink throws. Defaults to console.error.
- options.auditMode ('best-effort' | 'required', optional): Added in v0.9.0. 'best-effort' (default) preserves log-and-continue behavior; 'required' re-throws when the audit sink fails after onAuditError fires. Use 'required' for compliance-gated features.
- options.defaultActor (ActorContext, optional): Merged onto every request when no actor is supplied. Useful for system-initiated calls.
- options.enforceActor (boolean, optional): When true (the default), the policy includes requireActor; when false, only featureRegistered runs.
- options.now (() => Date, optional): Override the clock for deterministic tests.

Returns

LLMGateway

Object with complete(request) and completeStream(request) — both run the same policy + audit + event pipeline.
Example:
import { createLuxoGateway } from "@luxoai-dev/agent-core/llm";
import { openaiAdapter } from "@luxoai-dev/agent-core/adapters/openai";

const gateway = createLuxoGateway({
  adapters: { openai: openaiAdapter({ apiKey: process.env.OPENAI_API_KEY! }) },
  routes: [
    { feature: "support.triage", provider: "openai", model: "gpt-4.1-mini" },
  ],
  defaultActor: { tenantId: "demo" },
  onEvent: async (event) => console.log(event.type, event.feature, event.latencyMs),
});

const response = await gateway.complete({
  feature: "support.triage",
  actor: { userId: "u_1", roles: ["support_agent"] },
  messages: [{ role: "user", content: "Triage this ticket." }],
});
Function

createLLMGateway

Stable
function createLLMGateway(options: LLMGatewayOptions): LLMGateway
Lower-level constructor. Use createLuxoGateway unless you need full control over the policy chain (e.g. swapping in a custom feature-allowlist or per-tenant gating).
Parameters
- options.adapters (Record&lt;string, ProviderAdapter&gt;, required): Provider adapters keyed by name.
- options.routes (ModelRoute[], required): feature → provider + model bindings.
- options.policy (PolicyRule[], optional): Policy chain. Defaults to defaultPolicy.
- options.audit (AuditSink, optional): Persistence sink. Defaults to noopAuditSink.
- options.onEvent ((event) => void | Promise&lt;void&gt;, optional): Lifecycle event hook.
- options.onAuditError ((err, entry) => void, optional): Called when the audit sink throws.
- options.auditMode ('best-effort' | 'required', optional): Added in v0.9.0. Defaults to 'best-effort'. Set to 'required' to re-throw when the audit sink fails.
- options.now (() => Date, optional): Inject a clock.

Returns

LLMGateway

Object exposing complete(request) and completeStream(request).
Function

gateway.completeStream

since v0.10 · Stable
gateway.completeStream(request: LLMRequest): Promise<LLMStreamResult>
Streaming sibling of gateway.complete. Runs the same policy chain + audit + event pipeline, then returns an LLMStreamResult whose stream field is an async iterable of LLMStreamChunk values. The audit row is written once at stream end with the assembled text; llm.call.started fires before the first chunk, llm.call.succeeded (with toolsUsed) after stop, llm.call.failed if the stream throws partway. Same auditMode behavior as complete. Adapters that don’t implement generateStream get a single-chunk fallback wrapping generate — consumers never see a missing method.
Example:
const { stream, requestId, providerUsed, modelUsed } =
  await gateway.completeStream({
    feature: "support.triage",
    actor,
    messages,
  });

let assembled = "";
for await (const chunk of stream) {
  if (chunk.type === "text-delta") {
    assembled += chunk.text;
    writeSseEvent("text", { text: chunk.text });
  } else if (chunk.type === "reasoning-delta") {
    // Reasoning traces (e.g. deepseek-reasoner) — separate from text
    writeSseEvent("reasoning", { text: chunk.text });
  } else if (chunk.type === "tool-input-complete") {
    writeSseEvent("tool", { name: chunk.toolName, input: chunk.input });
  } else if (chunk.type === "stop") {
    writeSseEvent("done", { requestId, text: assembled });
  }
}
Function

consumeStream

since v0.10 · Stable
function consumeStream(
  stream: AsyncIterable<LLMStreamChunk>,
): Promise<LLMStreamFinal>
Drain an LLMStreamChunk iterable into an LLMStreamFinal — assembled text, reasoningText, recorded tool calls, and stop reason. Useful for non-streaming clients of streaming adapters (e.g. wiring a buffered API on top of a streaming provider) and for tests. reasoning-delta chunks are not appended to text — reasoning stays metadata so audit summaries reflect the model’s reply, not its scratchpad.
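The documented contract can be sketched as a standalone drain function. The chunk types below are simplified stand-ins for LLMStreamChunk; the real consumeStream operates on values from the gateway:

```typescript
// Reference sketch of the documented consumeStream contract (not the
// library source): text-deltas assemble into text, reasoning-deltas stay
// separate metadata, completed tool inputs and the stop reason are recorded.
type Chunk =
  | { type: "text-delta"; text: string }
  | { type: "reasoning-delta"; text: string }
  | { type: "tool-input-delta"; toolCallId: string; toolName: string; argsDelta: string }
  | { type: "tool-input-complete"; toolCallId: string; toolName: string; input: unknown }
  | { type: "stop"; reason?: string };

async function drain(stream: AsyncIterable<Chunk>) {
  let text = "";
  let reasoning = "";
  const toolCalls: Array<{ toolCallId: string; toolName: string; input: unknown }> = [];
  let stopReason: string | undefined;
  for await (const chunk of stream) {
    if (chunk.type === "text-delta") text += chunk.text;
    else if (chunk.type === "reasoning-delta") reasoning += chunk.text; // never merged into text
    else if (chunk.type === "tool-input-complete")
      toolCalls.push({ toolCallId: chunk.toolCallId, toolName: chunk.toolName, input: chunk.input });
    else if (chunk.type === "stop") stopReason = chunk.reason;
  }
  return { text, reasoningText: reasoning || undefined, toolCalls, stopReason };
}
```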
Function

reportDroppedContent

since v0.9 · Stable
function reportDroppedContent(
  messages: LLMMessage[],
  tools?: LLMTool[],
): DroppedContentReport
Inventory non-text content (images, tool_use, tool_result, declared tools[]) across a message array. Used by the deepseek and gemini adapters before stripping multi-modal content; exported so consumers can run the same audit on their input ahead of a call. Pairs with the strictTextOnly / onDroppedContent adapter options.
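The inventory idea can be sketched standalone. The message and block shapes below are simplified assumptions, not the library's LLMMessage/LLMTool types:

```typescript
// Standalone sketch: count non-text content blocks before a text-only
// adapter strips them. Block shapes are simplified assumptions.
type Block = { type: string };
type Message = { role: string; content: string | Block[] };

function inventoryNonText(messages: Message[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const message of messages) {
    if (typeof message.content === "string") continue; // plain text: nothing dropped
    for (const block of message.content) {
      if (block.type === "text") continue;
      counts[block.type] = (counts[block.type] ?? 0) + 1;
    }
  }
  return counts;
}
```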
Class

GatewayError

Stable
class GatewayError extends Error {
  decisions: PolicyDecision[];
  constructor(message: string, decisions?: PolicyDecision[]);
}
Thrown when the policy chain denies a call. The decisions array tells you exactly which rule failed and why — surface this back to the operator, never swallow it.
Example:
try {
  await gateway.complete(request);
} catch (error) {
  if (error instanceof GatewayError) {
    res.status(403).json({ error: error.message, policyDecisions: error.decisions });
    return;
  }
  throw error;
}
Function

classifyLlmError

Stable
function classifyLlmError(error: unknown): {
  type: "policy" | "rate_limit" | "auth" | "network" | "provider" | "unknown";
  retriable: boolean;
}
Categorize an unknown error thrown during gateway.complete(). Useful when wiring telemetry tags or retry policies.
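A retry wrapper driven by the { type, retriable } classification might look like the following sketch. The classifier is injected so the helper stays testable, and the backoff constants are arbitrary assumptions:

```typescript
// Retry sketch built on the documented { type, retriable } shape.
// In real code, pass classifyLlmError as the classifier.
type Classified = { type: string; retriable: boolean };

async function completeWithRetry<T>(
  call: () => Promise<T>,
  classify: (error: unknown) => Classified,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (error) {
      const { retriable } = classify(error);
      if (!retriable || attempt >= maxAttempts) throw error;
      // Exponential backoff; constants are illustrative.
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100));
    }
  }
}
```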
Class

ModelRouter

Stable
class ModelRouter {
  constructor(routes: ModelRoute[]);
  resolve(feature: string): ModelRoute | undefined;
  features(): ReadonlySet<string>;
}
Internal router used by createLLMGateway. Exported so you can implement custom routing (e.g. tenant-specific model overrides) ahead of the gateway.
Constant

defaultPolicy

Stable
const defaultPolicy: PolicyRule[] = [featureRegistered, requireActor]
The standard chain: a route must exist for the feature, and an actor must be present. Wrap with toolAllowlist or your own rules to extend.
Example:
import { defaultPolicy, toolAllowlist } from "@luxoai-dev/agent-core/llm";

const allowed = new Map([
  ["support.cua.session", new Set(["computer"])],
]);

const policy = [...defaultPolicy, toolAllowlist(allowed, { enforceAll: true })];
Constant

featureRegistered

Stable
const featureRegistered: PolicyRule
Denies any feature string that has no matching route. Always include this in custom policy chains.
Constant

requireActor

Stable
const requireActor: PolicyRule
Denies requests without an actor. Used by defaultPolicy; drop it for explicitly anonymous routes.
Function

toolAllowlist

Stable
function toolAllowlist(
  allowed: Map<string, Set<string>>,
  options?: { enforceAll?: boolean },
): PolicyRule
Per-feature tool allowlist. Compares tool.name ?? tool.type against the configured set. Pass enforceAll: true to deny any feature missing from the map — closes the default-allow surface for tool-using calls.
Function

runPolicyChain

Stable
function runPolicyChain(
  rules: PolicyRule[],
  context: PolicyContext,
): Promise<PolicyDecision[]>
Run a sequence of policy rules and collect their decisions. Used internally by the gateway; exported for unit testing custom rules.
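A custom rule can be exercised in isolation before wiring it into runPolicyChain. The decision and context shapes below are local stand-ins for the library's PolicyRule types, and the chain runner is a minimal sketch of collecting each rule's verdict:

```typescript
// Local stand-ins for the documented rule/decision shapes (assumptions).
type Decision = { rule: string; allowed: boolean; reason?: string };
type Context = { feature: string; actor?: { plan?: "free" | "pro" } };

// Hypothetical custom rule: deny unless the actor is on the pro plan.
const requireProPlan = (context: Context): Decision =>
  context.actor?.plan === "pro"
    ? { rule: "requireProPlan", allowed: true }
    : { rule: "requireProPlan", allowed: false, reason: "pro plan required" };

// Minimal chain runner: every rule records a decision, mirroring the
// PolicyDecision[] that runPolicyChain returns.
function runChain(rules: Array<(c: Context) => Decision>, context: Context): Decision[] {
  return rules.map((rule) => rule(context));
}
```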
Function

createLLMJudge

Stable
function createLLMJudge(options: LLMJudgeOptions): LLMJudge
LLM-as-a-judge for post-run quality scoring. Routes through your gateway under a separate feature, requests structured output via llmJudgeResponseFormat, and normalizes scores to 0..1 + 0..100 with dimension-level reasoning and evidence.
Parameters
- options.gateway (LLMGateway, required): An existing gateway. The judge re-uses it so audit + telemetry coverage is unified.
- options.feature (string, required): Feature route for judge calls. Recommend a route separate from your runtime calls (e.g. supplyagent.run.judge).
- options.rubric (LLMJudgeRubric, required): Dimensions to score, optional weights, and a passThreshold (defaults to 0.7).
- options.actor (ActorContext, optional): Default actor merged into every evaluate() call.
- options.maxTokens (number, optional): Defaults to 1200; judges should be terse.
- options.metadata (Record&lt;string, unknown&gt;, optional): Recorded on every audit row.
- options.parentRunId (string, optional): Correlate judge calls back to the run that produced the subject.
- options.temperature (number, optional): Defaults to 0; judges should be deterministic.

Returns

LLMJudge

Object with evaluate(input): Promise<LLMJudgeResult>.
Example:
import { createLLMJudge } from "@luxoai-dev/agent-core/llm";

const judge = createLLMJudge({
  gateway,
  feature: "support.run.judge",
  rubric: {
    name: "support-run",
    passThreshold: 0.75,
    dimensions: [
      { name: "accuracy",  description: "Was the recommendation correct?", weight: 2 },
      { name: "safety",    description: "Did the run avoid unauthorized actions?" },
      { name: "tone",      description: "Was the language professional?" },
    ],
  },
});

const result = await judge.evaluate({
  task: "Score this completed support-triage run.",
  subject: traceJson,
  reference: expectedOutcome,
});

result.score;        // 0..1
result.scorePercent; // 0..100
result.passed;       // boolean
result.dimensions;   // [{ name, score, reasoning, evidence: string[] }]

Don't gate execution on the judge alone

The judge is for scoring and review prioritization — never the only gate for risky actions. Deterministic policy (action registry + restricted/confirm classes) remains the execution boundary.

Constant

noopAuditSink

Stable
const noopAuditSink: AuditSink
Default sink. Drops every entry. Use when you persist audit rows elsewhere (e.g. supplyagent's existing lib/audit-logger.ts).
Function

createSupabaseAuditSink

Stable
function createSupabaseAuditSink(
  options: SupabaseAuditSinkOptions,
): AuditSink
Persist audit rows to a Supabase table matching the schema in packages/agent-core/src/llm/schema.sql. supplyagent already has a compatible table at agent.audit_log (migration agent_005_audit_log.sql).
Parameters
- options.client (SupabaseLikeClient, required): A supabase-js or compatible client. Only the .from(table).insert() shape is used.
- options.table (string, required): Target table name. Default schema applies.

Returns

AuditSink
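Because only the .from(table).insert() surface is consumed, a test double is easy to sketch. The { error: null } return value mirrors supabase-js and is an assumption here, as is the table name:

```typescript
// Test-double sketch of the documented SupabaseLikeClient surface.
// Collected rows can be asserted on in tests without a real database.
const inserted: Array<{ table: string; row: unknown }> = [];

const stubClient = {
  from(table: string) {
    return {
      async insert(row: unknown) {
        inserted.push({ table, row });
        return { error: null }; // assumption: supabase-js-like return shape
      },
    };
  },
};

// In a test, pass the stub to the real sink, e.g.:
// const sink = createSupabaseAuditSink({ client: stubClient, table: "agent_audit_log" });
```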

Function

hashCanonical

Stable
function hashCanonical(value: unknown): string
Produce a stable hash for arbitrary values (canonical-JSON sort + SHA-256). Used by audit summarization to dedupe identical requests.
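A standalone sketch of the stated recipe (canonical-JSON key sort plus SHA-256) looks like this; the library's exact canonicalization rules may differ:

```typescript
import { createHash } from "node:crypto";

// Sketch of canonical hashing: sort object keys recursively so that
// semantically equal values serialize identically, then SHA-256 the result.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const body = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => (a < b ? -1 : 1))
      .map(([key, entry]) => `${JSON.stringify(key)}:${canonicalize(entry)}`)
      .join(",");
    return `{${body}}`;
  }
  return JSON.stringify(value);
}

function hashSketch(value: unknown): string {
  return createHash("sha256").update(canonicalize(value)).digest("hex");
}

// Key order no longer affects the hash:
hashSketch({ a: 1, b: 2 }) === hashSketch({ b: 2, a: 1 }); // true
```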
Function

summarize

Stable
function summarize(value: unknown, maxLength?: number): string
Truncate a value (string or JSON-stringified object) to a length suitable for an audit row. Default length is 1500 characters.
Interface

LLMRequest

interface LLMRequest {
  feature: string;
  actor?: ActorContext;
  messages: LLMMessage[];
  parentRunId?: string;
  metadata?: Record<string, unknown>;
  abortSignal?: AbortSignal;
  responseFormat?: LLMResponseFormat;
  temperature?: number;
  maxTokens?: number;
  tools?: LLMTool[];
  betas?: string[];
}
The single request shape every gateway call uses. feature is required — it’s how the router resolves the provider + model + policy chain.
Fields
- feature (string, required): Route key. Recommend dot-namespacing (myapp.scope.purpose).
- actor (ActorContext, optional): Actor for policy. Required by defaultPolicy.
- messages (LLMMessage[], required): Conversation messages. The first message can be a system message; some adapters lift it to the system slot automatically.
- parentRunId (string, optional): Trace correlation across multi-step runs.
- metadata (Record&lt;string, unknown&gt;, optional): Pass-through bag recorded on audit.
- abortSignal (AbortSignal, optional): Cancel an in-flight call.
- responseFormat (LLMResponseFormat, optional): 'text' (default) | 'json_object' | LLMJsonSchemaResponseFormat for structured output.
- temperature (number, optional): Provider-honored. Adapters clamp to provider-supported ranges.
- maxTokens (number, optional): Output token cap.
- tools (LLMTool[], optional): Tool surface for tool-using providers (OpenAI tools, Anthropic computer-use, etc.).
- betas (string[], optional): Provider-specific beta feature flags. Anthropic example: ['computer-use-2025-01-24'].
Interface

LLMResponse

interface LLMResponse {
  text: string;
  contentBlocks?: LLMContentBlock[];
  stopReason?: string;
  usage?: ModelUsage;
  cacheHit: boolean;
  requestId: string;
  modelUsed: string;
  providerUsed: string;
  policyDecisions: PolicyDecision[];
  latencyMs: number;
  raw?: unknown;
}
Fields
- text (string, required): Visible assistant text. For tool-only responses this may be empty; check contentBlocks.
- contentBlocks (LLMContentBlock[], optional): Tool calls, image references, and structured blocks. Provider-shaped.
- stopReason (string, optional): Provider stop reason (e.g. 'end_turn', 'tool_use', 'max_tokens').
- usage (ModelUsage, optional): Input / output / cache token counts when reported.
- cacheHit (boolean, required): True if the provider reported a prompt-cache hit.
- requestId (string, required): Gateway-issued id. Use this as the canonical correlation key.
- modelUsed (string, required): Resolved model name (after route resolution).
- providerUsed (string, required): Resolved provider name.
- policyDecisions (PolicyDecision[], required): Every rule's verdict; empty array on a clean call.
- latencyMs (number, required): End-to-end gateway latency including the audit write.
- raw (unknown, optional): Provider-raw payload. Adapter-dependent; prefer the typed fields.
Interface

LLMMessage

interface LLMMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string | LLMContentBlock[];
  metadata?: Record<string, unknown>;
  name?: string;
}
Conversation message. Use string content for plain text; LLMContentBlock[] for tool results or multimodal payloads.
Interface

LLMStreamChunk

type LLMStreamChunk =
  | { type: "text-delta"; text: string }
  | { type: "reasoning-delta"; text: string }      // v0.11+
  | { type: "tool-input-delta"; toolCallId: string; toolName: string; argsDelta: string }
  | { type: "tool-input-complete"; toolCallId: string; toolName: string; input: unknown }
  | { type: "stop"; reason?: string };
The shape of every value yielded by gateway.completeStream / adapter.generateStream. Tool-call streaming is normalized across providers — Anthropic input_json_delta, OpenAI Responses function_call_arguments.delta, and DeepSeek/OpenAI Chat Completions tool_calls[i].function.arguments deltas all surface as tool-input-delta chunks followed by a single tool-input-complete chunk with the parsed input. The reasoning-delta variant carries chain-of-thought text from reasoning models (DeepSeek-Reasoner, Anthropic extended-thinking); it’s metadata only and is not appended to the assembled response.
Interface

LLMStreamResult

interface LLMStreamResult {
  stream: AsyncIterable<LLMStreamChunk>;
  requestId: string;
  modelUsed: string;
  providerUsed: string;
  policyDecisions: PolicyDecision[];
}
Returned synchronously from gateway.completeStream. The stream field is the chunk iterable; the remaining fields are correlation identifiers known before the first chunk arrives, so consumers can announce the request via SSE event: start before any text-delta.
Interface

LLMStreamFinal

interface LLMStreamFinal {
  text: string;             // assembled text-deltas (no reasoning)
  reasoningText?: string;   // assembled reasoning-deltas, if any
  toolCalls: Array<{ toolCallId: string; toolName: string; input: unknown }>;
  stopReason?: string;
}
Returned by consumeStream. Mirrors the parts of LLMResponse that a streaming consumer cares about. Use this when you want the buffered shape but the source is a streaming adapter.
Interface

LuxoGatewayOptions

interface LuxoGatewayOptions extends LLMGatewayOptions {
  defaultActor?: ActorContext;
  enforceActor?: boolean;
}
Inputs to createLuxoGateway. Supersets LLMGatewayOptions.
Interface

LLMGatewayEvent

type LLMGatewayEventType =
  | "llm.call.started"
  | "llm.call.succeeded"
  | "llm.call.failed"
  | "llm.policy.blocked";

interface LLMGatewayEvent {
  type: LLMGatewayEventType;
  feature: string;
  provider: string;
  model: string;
  requestId: string;
  latencyMs?: number;
  usage?: ModelUsage;
  actor?: ActorContext;
  metadata?: Record<string, unknown>;
  policyRule?: string;
  decisions?: PolicyDecision[];
  errorMessage?: string;
  errorType?: string;
  toolsRequested?: string[];
  toolsUsed?: string[];
}
Lifecycle event delivered to onEvent. Tool-using requests carry toolsRequested; successful tool-use responses also carry toolsUsed.
Interface

ModelRoute

interface ModelRoute {
  feature: string;
  provider: string;
  model: string;
}
Single feature → provider+model binding. ModelRouter rejects unknown features at gateway construction time.

Human feedback · /feedback

Validate and persist user feedback (1-5 ratings) tied to a gateway requestId. Pairs with @luxoai-dev/feedback-ui on the client and @luxoai-dev/observability for metrics.

import { ... } from "@luxoai-dev/agent-core/feedback";
validateFeedback · createFeedbackRoute · noopFeedbackSink · createSupabaseFeedbackSink · FeedbackValidationError · FeedbackEntry · FeedbackScore · FeedbackSink · SupabaseFeedbackSinkOptions · CreateFeedbackRouteOptions
Function

validateFeedback

Stable
function validateFeedback(input: unknown): FeedbackEntry
Validate an unknown payload against the feedback contract. Throws FeedbackValidationError when invalid. Required fields: requestId, userId, score (integer 1-5). Optional: tenantId, comment (≤4000 chars), tags, metadata, createdAt.
Example:
import {
  validateFeedback,
  createSupabaseFeedbackSink,
} from "@luxoai-dev/agent-core/feedback";

// `client` is a server-side supabase-js client; `actor` comes from your auth layer.
const sink = createSupabaseFeedbackSink({ client, table: "llm_feedback" });

export async function POST(request: Request) {
  const body = await request.json();
  try {
    const entry = validateFeedback({ ...body, userId: actor.userId });
    await sink.write(entry);
    return Response.json({ ok: true });
  } catch (error) {
    const message = error instanceof Error ? error.message : "Invalid feedback";
    return Response.json({ error: message }, { status: 400 });
  }
}
Function

createFeedbackRoute

since v0.9 · Stable
function createFeedbackRoute(
  options: CreateFeedbackRouteOptions,
): { POST: (request: Request) => Promise<Response> }
Build a complete feedback POST handler in one call. The factory runs the same pipeline every consumer used to hand-write: authenticate → parse → validateFeedback → sink.write → optional telemetry. Project-specific fields plug in through enrichEntry(entry, ctx); observability stays decoupled via the optional telemetry callback (wire to recordFeedback from @luxoai-dev/observability).
Parameters
Param · Type · Required · Description
options.authenticate · (req) => Promise<{ actor: ActorContext } | null> · Required · Resolve the actor. Return null to reply 401.
options.sink · FeedbackSink · Required · Where validated entries are written. Use createSupabaseFeedbackSink for the default Supabase shape.
options.enrichEntry · (entry, ctx) => Promise<FeedbackEntry> | FeedbackEntry · Optional · Mutate or enrich the entry before write (e.g. attach tenant, feature label, conversation id).
options.telemetry · (entry) => void | Promise<void> · Optional · Side-effect hook for metrics and structured logs. Pair with recordFeedback.
options.siteUrl · string | (() => string | undefined) · Optional · Trusted origin for same-origin checks. Defaults to NEXT_PUBLIC_SITE_URL / SITE_URL / DOMAIN_URL.

Returns

{ POST }

Web Response POST handler to re-export from app/api/feedback/route.ts.
example.ts
// app/api/feedback/route.ts
import {
  createFeedbackRoute,
  createSupabaseFeedbackSink,
} from "@luxoai-dev/agent-core/feedback";
import { recordFeedback } from "@luxoai-dev/observability";
import { getActorFromRequest } from "@/lib/auth";
import { admin } from "@/lib/supabase/admin";

const feedback = createFeedbackRoute({
  authenticate: async (req) => {
    const actor = await getActorFromRequest(req);
    return actor ? { actor } : null;
  },
  sink: createSupabaseFeedbackSink({ client: admin, table: "llm_feedback" }),
  enrichEntry: (entry, ctx) => ({
    ...entry,
    tenantId: ctx.actor.tenantId,
    metadata: { ...(entry.metadata ?? {}), feature: "support.triage" },
  }),
  telemetry: (entry) =>
    recordFeedback({
      service: "support-assistant",
      feature: entry.metadata?.feature as string,
      score: entry.score,
    }),
});

export const POST = feedback.POST;
Constant

noopFeedbackSink

Stable
const noopFeedbackSink: FeedbackSink
Default sink. Accepts validated entries and discards them. Use when persistence is handled elsewhere.
Function

createSupabaseFeedbackSink

Stable
function createSupabaseFeedbackSink(
  options: SupabaseFeedbackSinkOptions,
): FeedbackSink
Insert validated entries into a Supabase table. Apply the schema in src/feedback/schema.sql to your project (or a superset).
Parameters
Param · Type · Required · Description
options.client · SupabaseLikeClient · Required · supabase-js client (or compatible).
options.table · string · Required · Target table name.
Class

FeedbackValidationError

Stable
class FeedbackValidationError extends Error
Thrown by validateFeedback. Catch and return a 400 to the caller.
interface

FeedbackEntry

interface FeedbackEntry {
  requestId: string;
  userId: string;
  score: FeedbackScore; // 1 | 2 | 3 | 4 | 5
  tenantId?: string;
  comment?: string;
  tags?: string[];
  metadata?: Record<string, unknown>;
  createdAt?: string; // ISO timestamp
}
Fields
Field · Type · Required · Description
requestId · string · Required · The LLMResponse.requestId this feedback is about.
userId · string · Required · Submitting user.
score · 1 | 2 | 3 | 4 | 5 · Required · Integer rating.
tenantId · string · Optional · For multi-tenant partitioning.
comment · string · Optional · Up to 4000 characters.
tags · string[] · Optional · Free-form labels (e.g. ['accurate', 'tone-mismatch']).
metadata · Record<string, unknown> · Optional · Product-specific context.
createdAt · string (ISO) · Optional · Defaults to server time when the sink writes.
interface

FeedbackSink

interface FeedbackSink {
  write(entry: FeedbackEntry): Promise<void>;
}
Implement this to plug in your own storage.
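As an illustration, a custom sink only needs a write method. The sketch below restates minimal FeedbackEntry / FeedbackSink shapes locally so it stands alone; MemoryFeedbackSink is a hypothetical name for a test helper, not a package export:

```typescript
// Minimal in-memory sink implementing the FeedbackSink contract.
// Types are restated locally as a sketch; the real interfaces live in
// @luxoai-dev/agent-core/feedback.
type FeedbackScore = 1 | 2 | 3 | 4 | 5;

interface FeedbackEntry {
  requestId: string;
  userId: string;
  score: FeedbackScore;
  tenantId?: string;
  comment?: string;
  tags?: string[];
  metadata?: Record<string, unknown>;
  createdAt?: string;
}

interface FeedbackSink {
  write(entry: FeedbackEntry): Promise<void>;
}

class MemoryFeedbackSink implements FeedbackSink {
  readonly entries: FeedbackEntry[] = [];

  async write(entry: FeedbackEntry): Promise<void> {
    // Stamp a server-side timestamp when the caller didn't provide one.
    this.entries.push({ createdAt: new Date().toISOString(), ...entry });
  }
}
```

A sink like this is convenient in unit tests: assert on `sink.entries` after driving the route handler, with no database in the loop.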
interface

CreateFeedbackRouteOptions

interface CreateFeedbackRouteOptions {
  authenticate: (req: Request) => Promise<{ actor: ActorContext } | null>;
  sink: FeedbackSink;
  enrichEntry?: (entry: FeedbackEntry, ctx: { actor: ActorContext; request: Request }) =>
    Promise<FeedbackEntry> | FeedbackEntry;
  telemetry?: (entry: FeedbackEntry) => void | Promise<void>;
  siteUrl?: string | (() => string | undefined);
}
Inputs to createFeedbackRoute. Mirrors the same authenticate / parse / validate / write / telemetry pipeline every consumer used to write by hand — the factory removes ~80 lines of boilerplate from each agent.

Provider adapters · /adapters/[provider]

Each adapter is a small factory that returns a ProviderAdapter. Pass adapters to createLuxoGateway via the adapters map — the keys you choose are the values you reference in routes.

import { openaiAdapter } from "@luxoai-dev/agent-core/adapters/openai";
Substitute deepseek · gemini · anthropic for the other providers.
Adapter exports
Adapter · Subpath · Returns
openaiAdapter(options) · /adapters/openai · OpenAI Responses API. Forwards JSON-schema response_format and tools natively.
anthropicAdapter(options) · /adapters/anthropic · Anthropic Messages API. Supports tools and computer-use betas via betas[].
geminiAdapter(options) · /adapters/gemini · Google Gemini generateContent. Maps responseFormat to a JSON-schema generation_config.
deepseekAdapter(options) · /adapters/deepseek · DeepSeek chat-completions API. OpenAI-compatible JSON output.
Function

openaiAdapter

Stable
function openaiAdapter(options: OpenAIAdapterOptions): ProviderAdapter
OpenAI provider. Uses the Responses API and forwards structured-output schemas through responseFormat: { type: 'json_schema', ... }.
Parameters
Param · Type · Required · Description
options.apiKey · string · Required · OpenAI API key.
options.baseUrl · string · Optional · Override the default endpoint (for self-hosted proxies).
options.fetch · typeof fetch · Optional · Inject a fetch implementation (testing, telemetry interception).
options.organization · string · Optional · OpenAI org id for billing scoping.
options.project · string · Optional · OpenAI project id for billing scoping.
options.defaultMaxTokens · number · Optional · Used when LLMRequest.maxTokens is unset.
example.ts
import { createLuxoGateway } from "@luxoai-dev/agent-core/llm";
import { openaiAdapter } from "@luxoai-dev/agent-core/adapters/openai";

const gateway = createLuxoGateway({
  adapters: {
    openai: openaiAdapter({
      apiKey: process.env.OPENAI_API_KEY!,
      organization: process.env.OPENAI_ORG_ID,
      project: process.env.OPENAI_PROJECT_ID,
    }),
  },
  routes: [{ feature: "support.triage", provider: "openai", model: "gpt-4.1-mini" }],
});
Function

anthropicAdapter

Stable
function anthropicAdapter(options: AnthropicAdapterOptions): ProviderAdapter
Anthropic provider. Pass betas via LLMRequest.betas to opt into computer-use, prompt caching, etc.
Parameters
Param · Type · Required · Description
options.apiKey · string · Required · Anthropic API key.
options.baseUrl · string · Optional · Override the default endpoint.
options.fetch · typeof fetch · Optional · Custom fetch implementation.
options.apiVersion · string · Optional · anthropic-version header. Defaults to 2023-06-01.
options.defaultMaxTokens · number · Optional · Fallback when LLMRequest.maxTokens is unset.
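A minimal wiring sketch, mirroring the openaiAdapter example above. The createLuxoGateway import from the /llm subpath, the feature label, and the model id are assumptions shown as placeholders, not confirmed values:

```typescript
// Sketch: route one feature to Anthropic. Model id is a placeholder.
import { createLuxoGateway } from "@luxoai-dev/agent-core/llm";
import { anthropicAdapter } from "@luxoai-dev/agent-core/adapters/anthropic";

const gateway = createLuxoGateway({
  adapters: {
    anthropic: anthropicAdapter({
      apiKey: process.env.ANTHROPIC_API_KEY!,
      apiVersion: "2023-06-01", // the default, shown here for clarity
    }),
  },
  routes: [{ feature: "support.triage", provider: "anthropic", model: "your-anthropic-model" }],
});
```

Per-request betas (computer use, prompt caching) are passed on LLMRequest.betas, not at adapter construction time.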
Function

geminiAdapter

Stable
function geminiAdapter(options: GeminiAdapterOptions): ProviderAdapter
Google Gemini provider. Maps responseFormat to a JSON-schema generation_config. Implements generateStream against ?alt=sse.
Parameters
Param · Type · Required · Description
options.apiKey · string · Required · Gemini API key.
options.baseUrl · string · Optional · Custom base URL.
options.fetch · typeof fetch · Optional · Custom fetch implementation.
options.defaultMaxTokens · number · Optional · Fallback when LLMRequest.maxTokens is unset.
options.strictTextOnly · boolean · Optional · Added in v0.9.0. When true, throws if any non-text content (images, tool_use, tool_result, tools[]) reaches the adapter instead of silently stripping it.
options.onDroppedContent · (report: DroppedContentReport) => void · Optional · Added in v0.9.0. Receives an inventory of dropped non-text content before the request is sent. Without this callback the default behavior is a structured console.warn.
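A sketch of the v0.9.0 text-only guards in use. The createLuxoGateway import path and the model id are assumptions; the two options are alternatives, not a pair you would normally combine:

```typescript
// Sketch: observe (or forbid) non-text content reaching Gemini.
import { createLuxoGateway } from "@luxoai-dev/agent-core/llm";
import { geminiAdapter } from "@luxoai-dev/agent-core/adapters/gemini";

const gateway = createLuxoGateway({
  adapters: {
    gemini: geminiAdapter({
      apiKey: process.env.GEMINI_API_KEY!,
      // Route the drop inventory to your own logging instead of console.warn.
      onDroppedContent: (report) => console.warn("gemini dropped content", report),
      // Alternatively, set strictTextOnly: true to throw instead of stripping.
    }),
  },
  routes: [{ feature: "support.triage", provider: "gemini", model: "your-gemini-model" }],
});
```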
Function

deepseekAdapter

Stable
function deepseekAdapter(options: DeepseekAdapterOptions): ProviderAdapter
DeepSeek provider. OpenAI-compatible API; deepseek-reasoner supports JSON mode for structured output. Implements generateStream and surfaces delta.reasoning_content as LLMStreamChunk { type: 'reasoning-delta' } when the model is deepseek-reasoner.
Parameters
Param · Type · Required · Description
options.apiKey · string · Required · DeepSeek API key.
options.baseUrl · string · Optional · Custom base URL.
options.fetch · typeof fetch · Optional · Custom fetch implementation.
options.defaultMaxTokens · number · Optional · Fallback when LLMRequest.maxTokens is unset.
options.strictTextOnly · boolean · Optional · Added in v0.9.0. When true, throws if any non-text content reaches the adapter instead of silently stripping it.
options.onDroppedContent · (report: DroppedContentReport) => void · Optional · Added in v0.9.0. Receives an inventory of dropped non-text content before the request is sent. Without this callback the default behavior is a structured console.warn.
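Consuming the reasoning stream can be sketched independently of the adapter. The chunk union below is restated locally as a minimal sketch (only the 'reasoning-delta' type is named on this page; the 'text-delta' variant and delta field are assumptions), and fakeStream stands in for real gateway output:

```typescript
// Split a deepseek-reasoner stream into reasoning text and answer text.
// LLMStreamChunk is a local sketch; real chunks may carry more fields.
type LLMStreamChunk =
  | { type: "reasoning-delta"; delta: string }
  | { type: "text-delta"; delta: string };

async function collect(stream: AsyncIterable<LLMStreamChunk>) {
  let reasoning = "";
  let text = "";
  for await (const chunk of stream) {
    if (chunk.type === "reasoning-delta") reasoning += chunk.delta;
    else text += chunk.delta;
  }
  return { reasoning, text };
}

// Stand-in for the adapter's generateStream output.
async function* fakeStream(): AsyncIterable<LLMStreamChunk> {
  yield { type: "reasoning-delta", delta: "Weigh both options first. " };
  yield { type: "text-delta", delta: "Option B is cheaper." };
}
```

In a UI you would typically render reasoning deltas in a collapsible "thinking" panel and stream text deltas into the answer body.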
interface

ProviderAdapter

interface ProviderAdapter {
  name: string;
  complete(request: ProviderRequest): Promise<ProviderResponse>;
  generateStream?(request: ProviderRequest): AsyncIterable<LLMStreamChunk>;
}
Implement this contract to add a new provider. Wire it via the adapters map of createLuxoGateway. generateStream is optional — gateways fall back to a single-chunk stream wrapping complete when the adapter doesn’t implement it.
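A hypothetical echo adapter illustrates the contract. The ProviderRequest / ProviderResponse shapes are restated locally as bare-minimum sketches (the real types carry more fields), and echoAdapter is ours, not a package export:

```typescript
// Sketch of a custom ProviderAdapter. Shapes are minimal local stand-ins.
interface ProviderRequest {
  model: string;
  prompt: string;
}

interface ProviderResponse {
  text: string;
}

interface ProviderAdapter {
  name: string;
  complete(request: ProviderRequest): Promise<ProviderResponse>;
  // generateStream omitted: the gateway falls back to a single-chunk
  // stream wrapping complete() when an adapter doesn't implement it.
}

const echoAdapter: ProviderAdapter = {
  name: "echo",
  async complete(request) {
    return { text: `[${request.model}] ${request.prompt}` };
  },
};
```

An adapter like this is useful as a deterministic stub in gateway tests before wiring a real provider.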

Looking for package context?

See the Agent Core package page or browse the full package map for adjacent runtime modules.