The shared signal layer for every Luxo service. Health + readiness contracts, Prometheus metrics with sensible buckets, redacted JSON logging, LLM telemetry tied to gateway events, feedback recording, monitoring-token-protected routes, service metric snapshots, and Next.js / Node convenience exports.
function initObservability(
config: Partial<ObservabilityConfig>,
): ObservabilityConfig
Set the process-wide observability identity (service, version, environment, node, OTLP endpoint). Idempotent — call once at boot. Subsequent reads via getObservabilityConfig() see the merged result.
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| config.serviceName | string | Optional | Logical service name reported on every log/metric label. |
| config.version | string | Optional | Build version. Defaults from the APP_VERSION env var. |
| config.environment | string | Optional | production \| staging \| dev. Defaults from NODE_ENV. |
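A minimal boot call, as a sketch (the service name here is illustrative; version and environment fall back to APP_VERSION and NODE_ENV when omitted):

```ts
// instrumentation.ts — call once at boot (idempotent).
import { initObservability } from "@luxoai-dev/observability";

initObservability({
  serviceName: "support-assistant", // illustrative name
  environment: process.env.NODE_ENV, // production | staging | dev
});
```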
function createHealthResponse(
options?: HealthRouteOptions,
): Promise<Response>
Same as buildHealthPayload but wraps the result in a Web Response with the right status code and content-type. The Next.js helpers in /next call this internally.
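In a plain App Router handler, the wrapper can back the route directly; a sketch assuming default options:

```ts
// app/api/health/route.ts
import { createHealthResponse } from "@luxoai-dev/observability";

export async function GET() {
  // Status code and content-type are set by the helper.
  return createHealthResponse();
}
```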
function createLogger(
baseContext?: LogContext,
options?: LoggerOptions,
): Logger
Structured JSON logger that auto-injects service/version/env from the observability config. Redacts a default set of sensitive keys (authorization, cookie, password, secret, ...) plus any keys you add. Since v0.6.0, redact keys are matched case-insensitively ('bearerToken' matches 'bearertoken' and 'BEARERTOKEN'), while exact-match semantics are preserved: keys that merely contain a sensitive substring (e.g. 'secret_question', 'token_expiry') are still kept.
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| baseContext | LogContext | Optional | Default fields merged into every log entry (e.g. { component: 'gateway' }). |
| options.minLevel | 'debug' \| 'info' \| 'warn' \| 'error' | Optional | Lowest level to emit. Defaults to 'info'. |
| options.redactKeys | string[] | Optional | Additional keys to redact in nested objects. |
| options.sink | (entry) => void | Optional | Replace console.log with a custom sink (file, ndjson stream, etc.). |

Returns

Logger

Object with debug/info/warn/error and child(context) for context inheritance.
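The matching rule described above can be illustrated with a small standalone sketch. This is not the package's code, only the documented semantics: case-insensitive comparison, exact key match, no substring matching (the default key list below is the subset named in this doc).

```typescript
// Illustrative sketch of the documented redaction matching rule.
const DEFAULT_REDACT_KEYS = ["authorization", "cookie", "password", "secret"];

function shouldRedact(key: string, extraKeys: string[] = []): boolean {
  const needle = key.toLowerCase();
  // Exact match after lowercasing: 'BEARERTOKEN' matches 'bearerToken',
  // but 'secret_question' does NOT match 'secret'.
  return [...DEFAULT_REDACT_KEYS, ...extraKeys].some(
    (k) => k.toLowerCase() === needle,
  );
}
```

This is why adding 'token' to redactKeys will not accidentally swallow 'token_expiry': only a whole-key, case-insensitive match is redacted.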
In-process Prometheus-text metrics. The package exposes defaultMetrics (a singleton) — use that unless you have a reason to isolate registries (e.g. a worker pool with per-worker metrics).
Process-wide registry. The /next createMetricsRoute() and the LLM telemetry helpers default to this. You can pass a custom registry to any helper that accepts options.metrics.
Drop-in handler for the gateway’s onEvent hook. Increments the right counters, observes latency histograms, manages an in-flight gauge, and writes structured logs — all keyed by feature / provider / model / outcome. Since v0.6.0 the handler also passes errorMessage, toolsRequested, and toolsUsed through to the log entry — useful for auditing tool-using calls and the actual upstream error string (not just the type).
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| identity.serviceName | string | Optional | Override service label per handler. |
| identity.environment | string | Optional | Override environment label. |
| identity.component | string | Optional | Override component label. |
| identity.nodeName | string | Optional | Override node label. |
| options.metrics | MetricsRegistry | Optional | Custom registry. Defaults to defaultMetrics. |
| options.logger | Logger | Optional | Custom logger. Defaults to a child of createLogger(). |

Returns

(event) => void

Pass directly to createLuxoGateway({ onEvent: ... }).
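Wiring the handler into the gateway, as a sketch. The factory name createLLMEventHandler and the gateway module path are assumptions, not confirmed exports; substitute the real names from your install:

```ts
import { createLLMEventHandler } from "@luxoai-dev/observability"; // hypothetical export name
import { createLuxoGateway } from "@luxoai-dev/gateway"; // assumed module path

const gateway = createLuxoGateway({
  // The handler keys metrics and logs by feature / provider / model / outcome.
  onEvent: createLLMEventHandler({ component: "gateway" }),
});
```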
Lower-level helper when you call providers directly (without the gateway). Emits the start gauge increment immediately and returns a 'finish' function — call it exactly once on success or failure.
function recordLLMCall(input: LLMCallRecord, options?: LLMTelemetryOptions): void
Bump luxo_llm_requests_total + observe latency on luxo_llm_latency_seconds. Outcome must be one of "success" | "error" | "policy_blocked". As of v0.7.0, when toolsRequested or toolsUsed are present on the record, this also bumps luxo_llm_tool_use_total with one series per tool and role: "requested" | "used" — gives dashboards both "which tools the gateway exposed" and "which tools the model actually invoked".
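An example call, hedged: the outcome values and tools fields are documented above, but the exact record keys (e.g. latencySeconds) are assumptions to verify against LLMCallRecord:

```ts
import { recordLLMCall } from "@luxoai-dev/observability";

recordLLMCall({
  feature: "support-chat",
  provider: "openai",
  model: "gpt-4o-mini",
  outcome: "success", // "success" | "error" | "policy_blocked"
  latencySeconds: 1.42, // assumed key name — check LLMCallRecord
  toolsRequested: ["search_docs"],
  toolsUsed: ["search_docs"], // also bumps luxo_llm_tool_use_total (v0.7.0+)
});
```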
function recordFeedback(input: FeedbackRecord, options?: FeedbackTelemetryOptions): void
Record human-feedback metrics (luxo_feedback_total counter and a per-score gauge). Pair with @luxoai-dev/agent-core/feedback's validateFeedback to keep validation and metrics aligned.
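A minimal sketch pairing validation with metrics, as the doc suggests. The record keys and the validator's boolean return are assumptions; check FeedbackRecord and validateFeedback for the actual shapes:

```ts
import { recordFeedback } from "@luxoai-dev/observability";
import { validateFeedback } from "@luxoai-dev/agent-core/feedback";

const input = { feature: "support-chat", score: 1 }; // assumed keys

// Assuming a boolean-returning validator; it may throw instead.
if (validateFeedback(input)) {
  recordFeedback(input); // bumps luxo_feedback_total + per-score gauge
}
```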
function recordLuxoAction(
input: LuxoActionRecord,
options?: LuxoActionTelemetryOptions,
): void
Record Luxo chat action telemetry: one counter series per { actionId, status, surface, service_name }. Emits a structured luxo.action.proposed / luxo.action.executed log line in addition to bumping luxo_action_total. Wire "proposed" from the chat route's onActionsProposed hook and "executed" from the action beacon route. action_id labels come from each app's bounded registry vocabulary — cardinality stays controlled.
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| input.actionId | string | Required | Action id from your LUXO_ACTION_REGISTRY (e.g. 'simulator.loadBell'). |
| input.status | 'proposed' \| 'executed' | Required | 'proposed' = LLM emitted the directive (server). 'executed' = user clicked the chip (client beacon). |
| input.surface | string | Optional | Surface key (e.g. 'review', 'editor'). Surface label in dashboards. |
| input.serviceName | string | Optional | Falls back to getObservabilityConfig().serviceName when omitted. |
| input.conversationId | string | Optional | Echoed into the log line for join with chat history. |
| input.requestId | string | Optional | LLM gateway requestId — joins to audit_log + feedback row. |
| input.userId | string | Optional | Tenant-scoped actor id; echoed into the log line, not used as a metric label. |
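Both sides of the flow, sketched (the ids are illustrative):

```ts
import { recordLuxoAction } from "@luxoai-dev/observability";

// Server: from the chat route's onActionsProposed hook.
recordLuxoAction({
  actionId: "simulator.loadBell",
  status: "proposed",
  surface: "review",
  requestId: "req_123", // illustrative — joins to audit_log + feedback row
});

// Client beacon route: after the user clicks the chip.
recordLuxoAction({
  actionId: "simulator.loadBell",
  status: "executed",
  surface: "review",
});
```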
Structured-event logger. The first argument is an event name (snake_case.dot.style); the second is a context bag. JSON-encoded and redacted before being sent to the sink.
function createReadyRoute(options?: HealthRouteOptions): () => Promise<Response>
Same shape as createHealthRoute but conventionally for /api/ready (deeper checks: warm caches, recent migrations, etc.). Mount on a separate route from /api/health for orchestrators that distinguish liveness from readiness.
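Since the factory returns the handler itself, a readiness route can be a one-liner; a sketch with default options:

```ts
// app/api/ready/route.ts — kept separate from /api/health so orchestrators
// can distinguish liveness from readiness.
import { createReadyRoute } from "@luxoai-dev/observability";

export const GET = createReadyRoute();
```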
Bearer-token check for protected metrics routes. Returns null on success and a 401 Response when the Authorization: Bearer ... header is missing or wrong. Replaces the isMonitoringAuthorized helper every consumer used to hand-roll. Reads MONITORING_TOKEN from process.env by default; pass token explicitly for tests. In development, unauthenticated reads are allowed by default so a local Prometheus can scrape.
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| options.token | string | Optional | Override the token. Defaults to process.env.MONITORING_TOKEN. |
| options.allowInDev | boolean | Optional | Allow unauthenticated reads when NODE_ENV !== 'production'. Defaults to true. |
| options.envFlag | string | Optional | Env var consulted for the token. Defaults to 'MONITORING_TOKEN'. |

Returns

(request) => Response | null
example.ts

```ts
// app/api/metrics/route.ts
import {
  createMetricsRoute,
  createMonitoringAuthorizer,
} from "@luxoai-dev/observability/next";

const authorize = createMonitoringAuthorizer();
const metrics = createMetricsRoute();

export async function GET(request: Request) {
  const denied = authorize(request);
  if (denied) return denied;
  return metrics();
}
```
Convenience wrapper that returns a Prometheus-text Response with content-type: text/plain; version=0.0.4 and cache-control: no-store. Use this when you compose a custom metrics body (e.g. concatenating buildServiceMetricsSnapshot with defaultMetrics.renderPrometheus()) instead of returning the registry directly.
example.ts

```ts
// app/api/metrics/route.ts (custom composition)
import { defaultMetrics } from "@luxoai-dev/observability";
import { metricsResponse } from "@luxoai-dev/observability/next";
import { buildServiceMetricsSnapshot } from "@luxoai-dev/observability/node";

export function GET() {
  const snapshot = buildServiceMetricsSnapshot({
    serviceName: "support-assistant",
    dependencies: { openai: !!process.env.OPENAI_API_KEY },
  });
  return metricsResponse(snapshot + defaultMetrics.renderPrometheus());
}
```
Node helpers · /node
Process-level helpers that don't fit the Next.js Response shape. Use these in long-lived workers, schedulers, or plain Node servers.
import { ... } from "@luxoai-dev/observability/node";
function initNodeObservability(
config: Partial<ObservabilityConfig>,
): ObservabilityConfig
Calls initObservability, logs an 'observability.initialized' line, and starts the default process-metrics sampler. Use as the single boot call in plain Node entry points.
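For a plain Node entry point, as a sketch:

```ts
// worker.ts — single boot call: sets identity, logs 'observability.initialized',
// and starts the default process-metrics sampler.
import { initNodeObservability } from "@luxoai-dev/observability/node";

initNodeObservability({ serviceName: "ingest-worker" }); // illustrative name
```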
function startProcessMetrics(
registry?: MetricsRegistry,
intervalMs?: number,
): ProcessMetricsHandle
Sample memoryUsage() and uptime every intervalMs (default 10s) and write to process_memory_bytes (with kind: rss/heap_used/heap_total) and process_uptime_seconds. Returns a handle with stop().
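Starting and stopping the sampler in a long-lived worker, sketched:

```ts
import { startProcessMetrics } from "@luxoai-dev/observability/node";

const handle = startProcessMetrics(); // default registry, 10s interval

// Stop sampling on shutdown.
process.on("SIGTERM", () => handle.stop());
```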
function buildServiceMetricsSnapshot(
input: ServiceMetricsSnapshotInput,
): string
Pull-based service metrics for /api/metrics routes that want an up-to-the-second snapshot without waiting for the next startProcessMetrics tick. Emits app_info (service, version, environment, node), app_dependency_configured (one series per declared dependency), and standard Node.js memory / CPU / event-loop gauges. Concatenate with defaultMetrics.renderPrometheus() — no parallel registry needed.
Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| input.serviceName | string | Required | Service name label on app_info. |
| input.version | string | Optional | Service version. Defaults to process.env.APP_VERSION. |
| input.environment | string | Optional | Environment label. Defaults to process.env.NODE_ENV. |
| input.nodeName | string | Optional | Hostname / pod label. Defaults to process.env.NODE_NAME or os.hostname(). |
| input.dependencies | Record<string, boolean> | Optional | Configured-or-not flags rendered as app_dependency_configured{name=…}. |

Returns

string

Prometheus-text body. Concatenate with other rendered registries.
example.ts

```ts
import { buildServiceMetricsSnapshot } from "@luxoai-dev/observability/node";
import { defaultMetrics } from "@luxoai-dev/observability";
import { metricsResponse } from "@luxoai-dev/observability/next";

export function GET() {
  const snapshot = buildServiceMetricsSnapshot({
    serviceName: "support-assistant",
    dependencies: {
      openai: Boolean(process.env.OPENAI_API_KEY),
      supabase: Boolean(process.env.SUPABASE_URL),
    },
  });
  return metricsResponse(snapshot + defaultMetrics.renderPrometheus());
}
```