Core Concepts

Performance

Benchmark methodology, current numbers, and how evlog tracks performance regressions. Built for enterprise workloads with roughly a microsecond of library overhead per request.

evlog is designed for production. Every operation is benchmarked and tracked against regressions in CI. This page documents the methodology, current numbers, and what they mean for your application.

Methodology

Benchmarks run with Vitest bench (powered by tinybench):

  • Each benchmark runs for 500ms after JIT warmup
  • Results report ops/sec, mean, p75, p99, p995, and p999 latency
  • All benchmarks run in silent mode (silent: true) to isolate the library overhead from I/O
  • JSON output is saved for CI comparison between commits
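The warmup-then-measure loop behind these numbers can be pictured with a toy harness. This is a sketch of the approach, not tinybench itself; `benchFn` and its parameters are illustrative names:

```typescript
// Toy benchmark harness illustrating the methodology above (not tinybench).
function benchFn(fn: () => void, warmupMs = 50, runMs = 500) {
  const until = (ms: number) => performance.now() + ms;

  // Warmup: let the JIT optimize the hot path before measuring.
  for (const end = until(warmupMs); performance.now() < end; ) fn();

  // Measure: record per-call latency samples for the run window.
  const samples: number[] = [];
  for (const end = until(runMs); performance.now() < end; ) {
    const t0 = performance.now();
    fn();
    samples.push(performance.now() - t0);
  }

  samples.sort((a, b) => a - b);
  const pct = (p: number) =>
    samples[Math.min(samples.length - 1, Math.floor(p * samples.length))];
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  return { opsPerSec: samples.length / (runMs / 1000), mean, p75: pct(0.75), p99: pct(0.99) };
}
```

Real harnesses use finer-grained clocks and batch very fast operations; the shape of the loop (warmup, timed samples, sorted percentiles) is the point here.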

Run benchmarks locally:

cd packages/evlog
bun run bench

Core operations

These benchmarks measure the cost of evlog's fundamental building blocks in isolation.

Logger creation

Operation                                  ops/sec   Mean
createLogger() (no context)                ~18M      0.06µs
createLogger() (shallow context)           ~19M      0.05µs
createLogger() (nested context)            ~18M      0.06µs
createRequestLogger()                      ~18M      0.06µs

Creating a logger is essentially free — it's a closure over a single shallow object spread, so allocation pressure is minimal.
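That design can be pictured as follows. This is a simplified sketch for illustration, not evlog's actual source (the real `set()` performs a deep merge, and the real API emits wide events):

```typescript
// Simplified sketch of a spread-based logger factory (not evlog's real code).
type Context = Record<string, unknown>;

function createLogger(base: Context = {}) {
  // One shallow copy via spread; the returned closure just captures it.
  let ctx: Context = { ...base };
  return {
    // Illustrative shallow merge; evlog's set() merges deeply.
    set(fields: Context) { ctx = { ...ctx, ...fields }; },
    // Hypothetical helper for inspecting accumulated context.
    snapshot(): Context { return ctx; },
  };
}
```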

log.set() — accumulating context

Operation                      ops/sec   Mean
Shallow merge (3 fields)       ~12M      0.08µs
Shallow merge (10 fields)      ~11M      0.09µs
Deep nested merge              ~7M       0.14µs
4 sequential set() calls       ~4M       0.24µs

set() uses deepDefaults — a recursive merge that preserves existing values. The cost scales linearly with object depth, not breadth.
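A merge with those semantics (existing values win, recursion only into nested objects) might look like this. It is a sketch; evlog's actual `deepDefaults` may differ in detail:

```typescript
// Recursive defaults-style merge: values already present are preserved.
type Obj = Record<string, unknown>;

const isPlainObject = (v: unknown): v is Obj =>
  typeof v === "object" && v !== null && !Array.isArray(v);

function deepDefaults(existing: Obj, incoming: Obj): Obj {
  const out: Obj = { ...existing };
  for (const [key, value] of Object.entries(incoming)) {
    if (isPlainObject(out[key]) && isPlainObject(value)) {
      // Recurse into nested objects: cost tracks depth, not key count.
      out[key] = deepDefaults(out[key] as Obj, value);
    } else if (!(key in out)) {
      out[key] = value; // only fill keys that are missing
    }
  }
  return out;
}
```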

log.emit() — building the wide event

Operation                                  ops/sec   Mean
Emit minimal event                         ~1.6M     0.6µs
Emit with context (typical request)        ~1M       1.0µs
Emit with error                            ~62K      16µs
Full lifecycle (create + 3 sets + emit)    ~937K     1.1µs

The full request lifecycle — create a logger, accumulate context across 3 set() calls, and emit — costs about 1 microsecond. That's ~0.001ms of overhead per request.

emit with error is significantly slower (~62K ops/sec) because Error.captureStackTrace() is an expensive V8 operation. This is expected: stack trace capture costs ~15µs regardless of library, and it only runs when errors are thrown, not on every request.
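The cost asymmetry is easy to demonstrate directly with `Error.captureStackTrace` (a V8/Node-specific API). This toy comparison is illustrative and is not evlog's event-building code:

```typescript
// Building an event object with and without V8 stack capture.
interface Evt { level: string; msg: string; stack?: string }

function plainEvent(): Evt {
  return { level: "error", msg: "boom" }; // just an object literal: cheap
}

function eventWithStack(): Evt {
  const evt: Evt = { level: "error", msg: "boom" };
  const holder: { stack?: string } = {};
  Error.captureStackTrace(holder); // the expensive part: walks the JS stack
  evt.stack = holder.stack;
  return evt;
}
```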

Payload size scaling

Payload                      ops/sec   Mean
Small (2 fields)             ~1.2M     0.8µs
Medium (50 fields)           ~118K     8.5µs
Large (200 nested fields)    ~27K      37µs

Wide events with 50+ fields remain fast. Even at 200 deeply nested fields, emit takes under 40µs — well within budget for any HTTP request.

Formatting

Mode                               ops/sec   Mean
Silent (event build only)          ~1.4M     0.7µs
JSON serialization (production)    ~1.4M     0.7µs
Pretty print (development)         ~1.4M     0.7µs
Raw JSON.stringify (baseline)      ~2M       0.5µs

evlog reaches roughly 70% of raw JSON.stringify throughput, about 0.2µs of extra work per emit — the difference is new Date().toISOString(), the spread operator for building the WideEvent, and the sampling check.
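Those three extra costs can be sketched in one function. A toy version for illustration, not evlog's serializer; `serializeEvent` and its parameters are made up:

```typescript
// Toy production serializer showing the three costs beyond raw stringify.
function serializeEvent(ctx: Record<string, unknown>, sampleRate = 1): string | null {
  if (Math.random() >= sampleRate) return null;           // 1. sampling check
  const event = { ts: new Date().toISOString(), ...ctx }; // 2. timestamp  3. spread
  return JSON.stringify(event);
}
```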

In development, the pretty printer with ANSI colors runs at the same speed when console output is mocked — the actual I/O (writing to stdout) is the bottleneck, not the formatting logic.

Sampling

Operation                              ops/sec   Mean
shouldKeep() — no match                ~41M      0.02µs
shouldKeep() — status match            ~40M      0.03µs
shouldKeep() — duration match          ~42M      0.02µs
shouldKeep() — path glob match         ~41M      0.02µs
Full emit with head + tail sampling    ~4.8M     0.2µs

Sampling overhead is negligible. Both head sampling (a random percentage check) and tail sampling (condition evaluation) complete in under 30 nanoseconds, and even path glob matching with matchesPattern() barely registers.
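Tail-sampling decisions like those benchmarked above reduce to a handful of cheap comparisons. The rule shape below is hypothetical (evlog's real config differs), and the glob is precompiled to a RegExp for the sketch:

```typescript
// Hypothetical tail-sampling rule shape, for illustration only.
interface TailRule {
  minStatus?: number;     // keep if status >= minStatus
  minDurationMs?: number; // keep if the request was slow
  pathGlob?: RegExp;      // keep if the path matches (glob precompiled)
}

interface EventSummary { status: number; durationMs: number; path: string }

function shouldKeep(evt: EventSummary, rules: TailRule[]): boolean {
  for (const r of rules) {
    if (r.minStatus !== undefined && evt.status >= r.minStatus) return true;
    if (r.minDurationMs !== undefined && evt.durationMs >= r.minDurationMs) return true;
    if (r.pathGlob !== undefined && r.pathGlob.test(evt.path)) return true;
  }
  return false; // no rule matched; fall back to head sampling
}
```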

Enrichers

Enricher                       ops/sec   Mean
User Agent (Chrome)            ~2.5M     0.4µs
User Agent (Firefox)           ~4M       0.25µs
User Agent (Googlebot)         ~4.5M     0.22µs
Geo (Vercel headers)           ~5.4M     0.18µs
Geo (Cloudflare)               ~1M       1.0µs
Request Size                   ~26M      0.04µs
Trace Context                  ~4.6M     0.22µs
All enrichers combined         ~442K     2.3µs
All enrichers (no headers)     ~1.9M     0.5µs

Running the full enricher pipeline — User Agent parsing, Geo extraction, Request Size, and Trace Context — adds about 2.3µs per request. The User Agent regex parsing is the most expensive enricher.

When headers are absent (common for internal/health check requests), enrichers short-circuit immediately.
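The short-circuit is just an early return when the relevant header is missing. The enricher signature below is a hypothetical sketch, not evlog's actual interface; `x-vercel-ip-country` and `x-vercel-ip-city` are Vercel's geo headers:

```typescript
// Hypothetical enricher shape; evlog's actual interface may differ.
type Enricher = (
  headers: Record<string, string | undefined>
) => Record<string, unknown> | undefined;

const geoEnricher: Enricher = (headers) => {
  const country = headers["x-vercel-ip-country"];
  if (!country) return undefined; // absent header: bail before any parsing
  return { geo: { country, city: headers["x-vercel-ip-city"] } };
};
```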

Error handling

Operation                      ops/sec   Mean
createError() (string)         ~216K     4.6µs
createError() (full options)   ~208K     4.8µs
parseError() (EvlogError)      ~15M      0.07µs
parseError() (plain Error)     ~41M      0.02µs
Round-trip (create + parse)    ~164K     6.1µs
toJSON()                       ~12M      0.08µs
JSON.stringify()               ~2.3M     0.43µs

createError() costs ~5µs, dominated by V8's Error.captureStackTrace(). parseError() is essentially free — it's just property access.
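The asymmetry can be sketched like this: creation pays for a stack capture, parsing only reads properties. This is an illustrative reimplementation, not evlog's actual error module:

```typescript
// Sketch of the create/parse cost asymmetry (not evlog's real code).
class EvlogError extends Error {
  code?: string;
  constructor(message: string, code?: string) {
    super(message);
    this.name = "EvlogError";
    this.code = code;
    Error.captureStackTrace(this, EvlogError); // the dominant ~µs-scale cost
  }
}

function createError(message: string, code?: string): EvlogError {
  return new EvlogError(message, code);
}

// Parsing is plain property access: no stack walk, no allocation beyond the result.
function parseError(err: unknown): { name: string; message: string; code?: string } {
  if (err instanceof EvlogError) return { name: err.name, message: err.message, code: err.code };
  if (err instanceof Error) return { name: err.name, message: err.message };
  return { name: "UnknownError", message: String(err) };
}
```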

Real-world overhead

For a typical API request that creates a logger, sets context 3 times, and emits:

Component                  Cost
Logger creation            0.06µs
set() calls                0.24µs
emit() (silent)            0.7µs
Sampling evaluation        0.02µs
Full enricher pipeline     2.3µs
Total evlog overhead       ~3.3µs

That's 0.003ms per request — orders of magnitude below any HTTP framework or database overhead.

CI regression tracking

Every pull request that touches packages/evlog/src/ or packages/evlog/bench/ automatically runs benchmarks against the main branch. A comparison report is posted as a PR comment showing:

  • ops/sec delta for every benchmark
  • p99 latency changes
  • regressions (>10% slower), flagged with a warning

This ensures performance never silently degrades across releases.
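The comparison logic amounts to computing an ops/sec delta per benchmark and thresholding it. A sketch of what a script like bench/compare.ts might do; the result shape is assumed, not evlog's actual JSON format:

```typescript
// Toy comparison: flag benchmarks that lost more than 10% throughput.
interface BenchResult { name: string; opsPerSec: number }

function compare(baseline: BenchResult[], current: BenchResult[]) {
  const base = new Map(baseline.map((b) => [b.name, b.opsPerSec]));
  return current.flatMap((c) => {
    const prev = base.get(c.name);
    if (prev === undefined) return []; // new benchmark: nothing to compare
    const delta = (c.opsPerSec - prev) / prev; // negative means slower
    return [{ name: c.name, deltaPct: delta * 100, regression: delta < -0.1 }];
  });
}
```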

Running benchmarks

# Run all benchmarks with table output
bun run bench

# Export results as JSON
bun run bench:json

# Compare two benchmark runs
bun bench/compare.ts baseline.json current.json

Benchmark files live in packages/evlog/bench/:

File                   What it measures
logger.bench.ts        createLogger, log.set(), log.emit(), payload sizes
format.bench.ts        JSON vs pretty print vs silent mode
sampling.bench.ts      Head sampling, tail sampling, combined
enrichers.bench.ts     Per-enricher cost, full pipeline
errors.bench.ts        createError, parseError, serialization