Every public export of @luxoai-dev/agent-core: action registry, agent runtime, LLM gateway with SSE streaming and reasoning-delta chunks, judge, audit sinks (with optional auditMode: "required"), human-feedback validation and route factory, and the four provider adapters. The signatures here are the source of truth for what to import, what to pass, and what to expect back.
At a glance
Package
@luxoai-dev/agent-core
Version
v0.11.0
Modules
. · /llm · /feedback · /adapters/[provider]
Runtime
Node 20+, browser-incompatible
Core module
Imported directly from @luxoai-dev/agent-core. Action registry, agent runtime, directive parsing, and context helpers.
function createActionRegistry(
actions?: ActionDefinition[],
policy?: ActionPolicy,
): ActionRegistry
Build an ActionRegistry from a list of action definitions. Optionally override the policy that decides which actor can use which action; defaults to defaultActionPolicy, which honors the action’s availability + actor roles.
Parameters
Param
Type
Required
Description
actions
ActionDefinition[]
Optional
Initial action definitions. Each id (and any aliases) must be unique.
policy
ActionPolicy
Optional
Custom policy function. Defaults to the built-in role/availability policy.
Returns
ActionRegistry
Mutable registry instance with list / register / validate methods.
The mutable registry produced by createActionRegistry. validate() is the policy gate: it returns a ValidatedAction with status allowed, requires_confirmation, or blocked.
example.ts

const result = actions.validate(
  { id: "ticket.draft_reply", params: { ticketId: "TKT-3042" } },
  { userId: "u_123", roles: ["support_agent"] },
);
if (result.status === "blocked") {
  throw new Error(result.reason);
}
// "requires_confirmation" → surface to operator before executing
// "allowed" → execute now
The built-in policy. Blocks restricted risk by default, honors availability tiers (public / authenticated / pro / admin), and returns requires_confirmation for confirm risk. Replace it via the second arg of createActionRegistry when you need to inject feature flags or tenant rules.
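Replacing the policy can be as small as a pure function. The sketch below assumes the policy receives the resolved action definition plus the actor context and returns the same status shape that validate() reports; the exact ActionPolicy parameter types are assumptions, so check the exported types before copying.

```typescript
// Hypothetical tenant policy sketch. The ActionLike/ActorContext shapes below
// are assumptions inferred from this reference, not the package's real types.
type Availability = "public" | "authenticated" | "pro" | "admin";
type Risk = "safe" | "confirm" | "restricted";

interface ActorContext {
  userId?: string;
  roles?: string[];
}

interface ActionLike {
  id: string;
  availability: Availability;
  risk?: Risk;
}

// Blocks restricted risk, gates admin availability on an admin role, and
// returns requires_confirmation for confirm risk, mirroring the built-in.
function tenantPolicy(action: ActionLike, actor: ActorContext) {
  if (action.risk === "restricted") {
    return { status: "blocked" as const, reason: `${action.id} is restricted` };
  }
  if (action.availability === "admin" && !actor.roles?.includes("admin")) {
    return { status: "blocked" as const, reason: "admin role required" };
  }
  if (action.risk === "confirm") {
    return { status: "requires_confirmation" as const };
  }
  return { status: "allowed" as const };
}
```

A function like this goes in as the second argument of createActionRegistry when you need to layer in feature flags or tenant rules.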
function createAgent(options: AgentRuntimeOptions): AgentRuntime
Wire a ModelAdapter, an ActionRegistry, and an optional trace sink into a runtime that can take user messages and return validated actions + follow-ups.
Parameters
Param
Type
Required
Description
options.name
string
Required
Logical agent name (used in trace events).
options.model
ModelAdapter
Required
Adapter that implements the model call. Use createStaticModel / createSequenceModel for tests.
options.actions
ActionRegistry
Optional
Action gate. Defaults to an empty registry, which means no actions are validated.
options.systemPrompt
string | (input) => string
Optional
Static or dynamic system instruction.
options.maxActions
number
Optional
Cap on how many proposed actions per turn. Defaults to 3.
options.trace
TraceSink
Optional
Async function called for each AgentTraceEvent (started, model.requested, action.proposed, etc.).
Returns
AgentRuntime
Object with respond(input): Promise<AgentRespondResult>.
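A minimal wiring sketch. createStaticModel is listed above as a test helper, but its exact signature, and the shape of the input passed to respond(), are assumptions here; verify both against the exports before copying.

```typescript
import {
  createActionRegistry,
  createAgent,
  createStaticModel,
} from "@luxoai-dev/agent-core";

// Static model so the example is deterministic (assumed helper signature).
const agent = createAgent({
  name: "support-agent",
  model: createStaticModel("Here is your summary.\n?> Draft a reply"),
  actions: createActionRegistry(),
  maxActions: 3,
  trace: async (event) => console.log(event), // each AgentTraceEvent
});

const result = await agent.respond({ message: "Summarize TKT-3042" });
result.followUps;   // ["Draft a reply"]
result.traceId;     // correlates downstream telemetry and audit
result.parseErrors; // [] on a clean response
```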
function parseAssistantDirectives(
text: string,
options?: ParseDirectiveOptions,
): ParsedDirectives
Strip !> action and ?> follow-up directives from a model response. Returns the cleaned visible text plus structured action proposals and follow-up suggestions.
example.ts

const parsed = parseAssistantDirectives(
  "Here is your summary.\n!> ticket.summarize {\"id\":\"TKT-3042\"}\n?> Draft a reply",
);
parsed.text;      // "Here is your summary."
parsed.actions;   // [{ id: "ticket.summarize", params: { id: "TKT-3042" } }]
parsed.followUps; // ["Draft a reply"]
Each proposal has already passed through registry policy.
followUps
string[]
Required
Suggested next-turn prompts (?> directives).
traceId
string
Required
Correlates downstream telemetry and audit.
model
string
Required
Model name reported by the adapter.
parseErrors
string[]
Required
Soft errors from directive parsing — empty array on clean responses.
usage
ModelUsage
Optional
Token counts when the adapter reports them.
LLM gateway · /llm
The execution layer for all model calls. Resolves features → providers, runs a policy chain, calls an adapter, writes to an audit sink, and emits structured events you can pipe to observability.
function createLuxoGateway(options: LuxoGatewayOptions): LLMGateway
The recommended entry point. Wraps createLLMGateway with sensible defaults: enforces actor presence (enforceActor: true), applies defaultPolicy, and merges the supplied defaultActor into every request. Use this unless you have a specific reason to bypass policy.
Parameters
Param
Type
Required
Description
options.adapters
Record<string, ProviderAdapter>
Required
Map of provider name → adapter (e.g. { openai, anthropic, deepseek }).
options.routes
ModelRoute[]
Required
Maps a feature string to provider + model. The router rejects unknown features.
options.policy
PolicyRule[]
Optional
Override the policy chain. Defaults to defaultPolicy when enforceActor=true, [featureRegistered] otherwise.
options.audit
AuditSink
Optional
Persist every call. Defaults to noopAuditSink.
options.onEvent
(event) => void | Promise<void>
Optional
Lifecycle hook for observability. Receives llm.call.{started,succeeded,failed} and llm.policy.blocked events.
options.onAuditError
(error, entry) => void
Optional
Called when the audit sink throws. Defaults to console.error.
options.auditMode
'best-effort' | 'required'
Optional
Added in v0.9.0. 'best-effort' (default) preserves log-and-continue behavior; 'required' re-throws when the audit sink fails after onAuditError fires. Use 'required' for compliance-gated features.
options.defaultActor
ActorContext
Optional
Merged onto every request when no actor is supplied. Useful for system-initiated calls.
options.enforceActor
boolean
Optional
When true (default), policy includes requireActor; when false, only featureRegistered runs.
options.now
() => Date
Optional
Override the clock for deterministic tests.
Returns
LLMGateway
Object with complete(request) and completeStream(request) — both run the same policy + audit + event pipeline.
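Putting the options together, a hedged wiring sketch: the adapter factories and gateway options come from this reference, but the ModelRoute field names ({ feature, provider, model }) are an assumption to verify.

```typescript
import { createLuxoGateway } from "@luxoai-dev/agent-core/llm";
import { deepseekAdapter } from "@luxoai-dev/agent-core/adapters/deepseek";
import { geminiAdapter } from "@luxoai-dev/agent-core/adapters/gemini";

const gateway = createLuxoGateway({
  adapters: {
    deepseek: deepseekAdapter({ apiKey: process.env.DEEPSEEK_API_KEY! }),
    gemini: geminiAdapter({ apiKey: process.env.GEMINI_API_KEY! }),
  },
  routes: [
    // Assumed route shape: feature string bound to provider + model.
    { feature: "support.summarize", provider: "deepseek", model: "deepseek-chat" },
  ],
  auditMode: "required", // compliance-gated: re-throw when the audit sink fails
  defaultActor: { userId: "system", roles: ["system"] },
  onEvent: (event) => console.log(event), // llm.call.* and llm.policy.blocked
});
```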
function createLLMGateway(options: LLMGatewayOptions): LLMGateway
Lower-level constructor. Use createLuxoGateway unless you need full control over the policy chain (e.g. swapping in a custom feature-allowlist or per-tenant gating).
Parameters
Param
Type
Required
Description
options.adapters
Record<string, ProviderAdapter>
Required
Provider adapters keyed by name.
options.routes
ModelRoute[]
Required
feature → provider+model bindings.
options.policy
PolicyRule[]
Optional
Policy chain. Defaults to defaultPolicy.
options.audit
AuditSink
Optional
Persistence sink. Defaults to noopAuditSink.
options.onEvent
(event) => void | Promise<void>
Optional
Lifecycle event hook.
options.onAuditError
(err, entry) => void
Optional
Called when the audit sink throws.
options.auditMode
'best-effort' | 'required'
Optional
Added in v0.9.0. Defaults to 'best-effort'. Set to 'required' to re-throw when the audit sink fails.
options.now
() => Date
Optional
Inject a clock.
Returns
LLMGateway
Object exposing complete(request) and completeStream(request).
Streaming sibling of gateway.complete. Runs the same policy chain + audit + event pipeline, then returns an LLMStreamResult whose stream field is an async iterable of LLMStreamChunk values. The audit row is written once at stream end with the assembled text; llm.call.started fires before the first chunk, llm.call.succeeded (with toolsUsed) after the stream stops, and llm.call.failed if the stream throws partway. Same auditMode behavior as complete. Adapters that don’t implement generateStream get a single-chunk fallback wrapping generate — consumers never see a missing method.
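A consumption sketch. The chunk discriminator names ("text-delta", "reasoning-delta") appear in this reference, but the exact chunk payload fields and the send() SSE helper are assumptions.

```typescript
// completeStream may return the result synchronously (as documented for
// LLMStreamResult); awaiting it is harmless either way.
const result = await gateway.completeStream({
  feature: "support.summarize",
  messages: [{ role: "user", content: "Summarize TKT-3042" }],
  actor: { userId: "u_123", roles: ["support_agent"] },
});

// Correlation identifiers are known before the first chunk arrives.
send({ event: "start", traceId: result.traceId }); // send() is a hypothetical SSE writer

for await (const chunk of result.stream) {
  if (chunk.type === "text-delta") {
    send({ event: "delta", text: chunk.delta }); // assumed payload field
  }
  // "reasoning-delta" chunks stay metadata; never append them to the reply
}
```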
function consumeStream(
stream: AsyncIterable<LLMStreamChunk>,
): Promise<LLMStreamFinal>
Drain an LLMStreamChunk iterable into an LLMStreamFinal — assembled text, reasoningText, recorded tool calls, and stop reason. Useful for non-streaming clients of streaming adapters (e.g. wiring a buffered API on top of a streaming provider) and for tests. reasoning-delta chunks are not appended to text — reasoning stays metadata so audit summaries reflect the model’s reply, not its scratchpad.
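A buffered-consumer sketch; the request shape passed to completeStream is assumed.

```typescript
import { consumeStream } from "@luxoai-dev/agent-core/llm";

const streamResult = await gateway.completeStream({
  feature: "support.summarize",
  messages: [{ role: "user", content: "Summarize TKT-3042" }],
});

const final = await consumeStream(streamResult.stream);
final.text;          // assembled text-deltas, reasoning excluded
final.reasoningText; // assembled reasoning-deltas, if the model emitted any
final.toolCalls;     // recorded tool calls with parsed input
final.stopReason;    // e.g. "stop"
```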
function reportDroppedContent(
messages: LLMMessage[],
tools?: LLMTool[],
): DroppedContentReport
Inventory non-text content (images, tool_use, tool_result, declared tools[]) across a message array. Used by the deepseek and gemini adapters before stripping multi-modal content; exported so consumers can run the same audit on their input ahead of a call. Pairs with the strictTextOnly / onDroppedContent adapter options.
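A pre-flight sketch; the multi-part message shape and the report's field layout are assumptions here, since only the inventoried categories are documented above.

```typescript
import { reportDroppedContent } from "@luxoai-dev/agent-core/llm";

// Assumed multi-part message shape; verify against LLMMessage before use.
const report = reportDroppedContent(
  [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image" },
        { type: "image", url: "https://example.com/receipt.png" },
      ],
    },
  ],
  [{ type: "function", name: "ticket.lookup" }],
);

// Inspect before routing to a text-only provider such as deepseek.
console.log(report);
```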
Thrown when the policy chain denies a call. The decisions array tells you exactly which rule failed and why — surface this back to the operator, never swallow it.
Per-feature tool allowlist. Compares tool.name ?? tool.type against the configured set. Pass enforceAll: true to deny any feature missing from the map — closes the default-allow surface for tool-using calls.
function createLLMJudge(options: LLMJudgeOptions): LLMJudge
LLM-as-a-judge for post-run quality scoring. Routes through your gateway under a separate feature, requests structured output via llmJudgeResponseFormat, and normalizes scores to 0..1 + 0..100 with dimension-level reasoning and evidence.
Parameters
Param
Type
Required
Description
options.gateway
LLMGateway
Required
An existing gateway. The judge re-uses it so audit + telemetry coverage is unified.
options.feature
string
Required
Feature route for judge calls. Recommend a separate route from your runtime calls (e.g. supplyagent.run.judge).
options.rubric
LLMJudgeRubric
Required
Dimensions to score, optional weights, and a passThreshold (defaults to 0.7).
options.actor
ActorContext
Optional
Default actor merged into every evaluate() call.
options.maxTokens
number
Optional
Defaults to 1200 — judges should be terse.
options.metadata
Record<string, unknown>
Optional
Recorded on every audit row.
options.parentRunId
string
Optional
Correlate judge calls back to the run that produced the subject.
options.temperature
number
Optional
Defaults to 0 — judges should be deterministic.
Returns
LLMJudge
Object with evaluate(input): Promise<LLMJudgeResult>.
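An evaluation sketch: the rubric's dimensions array, the evaluate() input shape, and the run/draftReply values are assumptions for illustration; only feature, passThreshold, and parentRunId are documented above.

```typescript
import { createLLMJudge } from "@luxoai-dev/agent-core/llm";

const judge = createLLMJudge({
  gateway, // the gateway built for runtime calls, re-used for unified audit
  feature: "supplyagent.run.judge", // route judge calls separately
  rubric: {
    // Assumed dimension shape.
    dimensions: [
      { id: "accuracy", description: "Claims match the source ticket" },
      { id: "tone", description: "Reply is professional and empathetic" },
    ],
    passThreshold: 0.8,
  },
  parentRunId: run.traceId, // hypothetical: id of the run that produced the draft
});

const verdict = await judge.evaluate({ subject: draftReply }); // assumed input shape
// verdict carries normalized 0..1 and 0..100 scores with per-dimension
// reasoning and evidence.
```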
function createSupabaseAuditSink(
options: SupabaseAuditSinkOptions,
): AuditSink
Persist audit rows to a Supabase table matching the schema in packages/agent-core/src/llm/schema.sql. supplyagent already has a compatible table at agent.audit_log (migration agent_005_audit_log.sql).
Parameters
Param
Type
Required
Description
options.client
SupabaseLikeClient
Required
A supabase-js or compatible client. Only the .from(table).insert() shape is used.
The shape of every value yielded by gateway.completeStream / adapter.generateStream. Tool-call streaming is normalized across providers — Anthropic input_json_delta, OpenAI Responses function_call_arguments.delta, and DeepSeek/OpenAI Chat Completions tool_calls[i].function.arguments deltas all surface as tool-input-delta chunks followed by a single tool-input-complete chunk with the parsed input. The reasoning-delta variant carries chain-of-thought text from reasoning models (DeepSeek-Reasoner, Anthropic extended-thinking); it’s metadata only and is not appended to the assembled response.
Returned synchronously from gateway.completeStream. The stream field is the chunk iterable; the remaining fields are correlation identifiers known before the first chunk arrives, so consumers can announce the request via SSE event: start before any text-delta.
interface LLMStreamFinal {
text: string; // assembled text-deltas (no reasoning)
reasoningText?: string; // assembled reasoning-deltas, if any
toolCalls: Array<{ toolCallId: string; toolName: string; input: unknown }>;
stopReason?: string;
}
Returned by consumeStream. Mirrors the parts of LLMResponse that a streaming consumer cares about. Use this when you want the buffered shape but the source is a streaming adapter.
Build a complete feedback POST handler in one call. The factory runs the same pipeline every consumer used to hand-write: authenticate → parse → validateFeedback → sink.write → optional telemetry. Project-specific fields plug in through enrichEntry(entry, ctx); observability stays decoupled via the optional telemetry callback (wire to recordFeedback from @luxoai-dev/observability).
Parameters
Param
Type
Required
Description
options.authenticate
(req) => Promise<{ actor: ActorContext } | null>
Required
Resolve the actor. Return null to reply 401.
options.sink
FeedbackSink
Required
Where validated entries are written. Use createSupabaseFeedbackSink for the default Supabase shape.
Inputs to createFeedbackRoute. Mirrors the same authenticate / parse / validate / write / telemetry pipeline every consumer used to write by hand — the factory removes ~80 lines of boilerplate from each agent.
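A route-factory sketch: authenticate and sink are the documented required options, and enrichEntry plus telemetry appear in the description above, while resolveActor, supabase, and the sink option shape are hypothetical.

```typescript
import {
  createFeedbackRoute,
  createSupabaseFeedbackSink,
} from "@luxoai-dev/agent-core/feedback";
import { recordFeedback } from "@luxoai-dev/observability";

export const POST = createFeedbackRoute({
  authenticate: async (req) => {
    const actor = await resolveActor(req); // hypothetical auth helper
    return actor ? { actor } : null; // null → factory replies 401
  },
  sink: createSupabaseFeedbackSink({ client: supabase }), // assumed option shape
  enrichEntry: (entry, ctx) => ({ ...entry, tenantId: ctx.actor.userId }),
  telemetry: recordFeedback,
});
```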
Provider adapters · /adapters/[provider]
Each adapter is a small factory that returns a ProviderAdapter. Pass adapters to createLuxoGateway via the adapters map — the keys you choose are the values you reference in routes.
function geminiAdapter(options: GeminiAdapterOptions): ProviderAdapter
Google Gemini provider. Maps responseFormat to a JSON-schema generation_config. Implements generateStream against ?alt=sse.
Parameters
Param
Type
Required
Description
options.apiKey
string
Required
Gemini API key.
options.baseUrl
string
Optional
Custom base URL.
options.fetch
typeof fetch
Optional
Custom fetch.
options.defaultMaxTokens
number
Optional
Fallback when LLMRequest.maxTokens is unset.
options.strictTextOnly
boolean
Optional
Added in v0.9.0. When true, throws if any non-text content (images, tool_use, tool_result, tools[]) reaches the adapter — instead of silently stripping it.
options.onDroppedContent
(report: DroppedContentReport) => void
Optional
Added in v0.9.0. Receives an inventory of dropped non-text content before the request is sent. Default behavior without this callback is a structured console.warn.
function deepseekAdapter(options: DeepseekAdapterOptions): ProviderAdapter
DeepSeek provider. OpenAI-compatible API; deepseek-reasoner supports JSON mode for structured output. Implements generateStream and surfaces delta.reasoning_content as LLMStreamChunk { type: 'reasoning-delta' } when the model is deepseek-reasoner.
Parameters
Param
Type
Required
Description
options.apiKey
string
Required
DeepSeek API key.
options.baseUrl
string
Optional
Custom base URL.
options.fetch
typeof fetch
Optional
Custom fetch.
options.defaultMaxTokens
number
Optional
Fallback when LLMRequest.maxTokens is unset.
options.strictTextOnly
boolean
Optional
Added in v0.9.0. When true, throws if any non-text content reaches the adapter — instead of silently stripping it.
options.onDroppedContent
(report: DroppedContentReport) => void
Optional
Added in v0.9.0. Receives an inventory of dropped non-text content before the request is sent. Default behavior without this callback is a structured console.warn.
Implement this contract to add a new provider. Wire it via the adapters map of createLuxoGateway. generateStream is optional — gateways fall back to a single-chunk stream wrapping complete when the adapter doesn’t implement it.
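A minimal custom adapter sketch. The request/response field names below are local stand-ins inferred from this reference, not the package's real LLMRequest/LLMResponse types; an echo adapter like this is useful as a wiring smoke test before pointing at a live API.

```typescript
// Local stand-in types (assumptions; replace with the package's exports).
interface EchoMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface EchoRequest {
  model: string;
  messages: EchoMessage[];
  maxTokens?: number;
}

interface EchoResponse {
  text: string;
  model: string;
  stopReason?: string;
}

// Echoes the last message back. generateStream is omitted on purpose: the
// gateway wraps non-streaming adapters in a single-chunk stream fallback.
const echoAdapter = {
  name: "echo",
  async generate(req: EchoRequest): Promise<EchoResponse> {
    const last = req.messages[req.messages.length - 1];
    return {
      text: `echo: ${last?.content ?? ""}`,
      model: req.model,
      stopReason: "stop",
    };
  },
};
```

Register it under any key in the adapters map of createLuxoGateway and reference that key from your routes.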