Documentation

Everything you need to deploy, integrate, and operate the aacyn telemetry appliance.

aacyn Quickstart

Deploy self-hosted observability in 5 minutes. Your data never leaves your network.


Prerequisites

| Requirement | Why |
|---|---|
| Linux x86_64 (Ubuntu 22.04+, Debian 12+) | AVX-512 SIMD + eBPF require modern Linux |
| Bun ≥ 1.1 | Runtime for the API control plane |
| A C compiler (gcc or clang) | Compiles the native columnar store |
| 512 MB free RAM | Stores 16M events in mmap'd memory |

# Verify prerequisites
uname -m          # expect: x86_64
bun --version     # expect: 1.x
gcc --version     # expect: any version

What if it fails? If uname -m returns aarch64, you're on ARM — aacyn runs but without AVX-512 SIMD acceleration (falls back to NEON). If bun is not found, install it: curl -fsSL https://bun.sh/install | bash


Step 1: Download and Build

# Download and extract (requires active license — see aacyn.com)
cd /opt
tar xzf aacyn-v0.5.0.tar.gz
cd aacyn

# Build the native columnar store
just build-native

Expected output:

libaacyn build complete
  Platform: Linux/x86_64
  Library:  ../build/libaacyn.so
  CFLAGS:   -O3 ... -mavx512f -mavx512bw -mavx512vl

What if it fails? If you see error: unrecognized command-line option '-mavx512f', your CPU doesn't support AVX-512. Edit native/Makefile and remove the -mavx512* flags — the store will still work, just without SIMD-accelerated scans.


Step 2: Set Your License Key

When you purchased aacyn, you received an email with your AACYN_LICENSE_KEY. This is an Ed25519 signed license that verifies offline — no pings, no phone-home, no cloud dependency.

Set it as an environment variable:

export AACYN_LICENSE_KEY=eyJlIjoiLi4uIiwidGllciI6InBybyIsImV4cCI6Li4ufQ.base64url_signature

How it works: The license key is a cryptographically signed payload containing your email, tier (free/pro/team/enterprise), and expiry date. Your appliance verifies the signature locally using an embedded public key — in microseconds, with zero network requests. If the key is missing, aacyn runs in free tier mode (dashboard + JSON ingestion + Golden Signals). If the appliance has internet access, it will optionally auto-renew your license before expiry.
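For illustration, here is a minimal sketch of inspecting the payload half of a key. This only decodes — it does not verify the signature, which requires the appliance's embedded public key. The field names `e`, `tier`, and `exp` are inferred from the example key above:

```python
import base64
import json

def decode_license_payload(key: str) -> dict:
    """Decode (without verifying!) the payload half of a license key.

    The key is base64url(payload) + "." + base64url(signature).
    """
    payload_b64, _signature_b64 = key.split(".", 1)
    # Restore the padding that base64url encoding strips
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a fake key to demonstrate (a real key comes from your license email)
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"e": "you@example.com", "tier": "pro", "exp": 1776357214}).encode()
).decode().rstrip("=")
fake_key = fake_payload + ".fake_signature"

print(decode_license_payload(fake_key)["tier"])  # → pro
```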

Verify:

# The key is a base64url-encoded JWT-like string (two parts separated by a dot)
echo $AACYN_LICENSE_KEY | grep -c '\.' 
# expect: 1 (the dot separator)

Step 3: Start the Server

cd /opt/aacyn/ts/apps/api
bun install
bun run src/index.ts

Expected output:

[libaacyn] Native store initialized: 16,000,000 capacity, 198.4MB mmap'd
[aacyn] Native FFI store active — V8 GC bypassed
aacyn API running at http://localhost:3001

What if it fails?

  • Native store unavailable, using V8 Map fallback → Run just build-native first. The server will still work but without SIMD acceleration.
  • EADDRINUSE → Port 3001 is in use. Set a different port: PORT=3002 bun run src/index.ts
  • No AACYN_LICENSE_KEY set — running in trial mode → This is fine for testing. Set the key when ready for production.

Health check (in another terminal):

curl -s http://localhost:3001/health | jq .

Expected response:

{
  "status": "ok",
  "version": "0.1.0",
  "uptime": 1234
}

What if it fails? If curl returns Connection refused, the server isn't running. Check the terminal where you started it for error messages.


Step 4: Send Your First Events

aacyn accepts telemetry via HTTP POST. Here's how to send events from your application:

JSON Batch Ingestion

This is the simplest integration path. Send an array of RED (Rate/Error/Duration) metric events:

curl -X POST http://localhost:3001/ingest/batch \
  -H "Content-Type: application/json" \
  -d '{
    "events": [
      {
        "traceId": "abc-123",
        "service": "payment-api",
        "durationMs": 42.5,
        "isError": false,
        "timestamp": 1710000000000
      },
      {
        "traceId": "def-456",
        "service": "payment-api",
        "durationMs": 1250.0,
        "isError": true,
        "timestamp": 1710000001000
      }
    ]
  }'

Expected response (HTTP 202):

{
  "accepted": 2,
  "timestamp": 1710000002000
}

Why 202? The events are accepted for processing, not synchronously committed. This allows the native store to batch writes for maximum throughput. Under normal operation, events are available for query within 1ms.

Event Schema

| Field | Type | Required | Description |
|---|---|---|---|
| traceId | string | Yes | Unique identifier for the trace/request |
| service | string | Yes | Name of the originating service (e.g., payment-api) |
| durationMs | number | Yes | Request duration in milliseconds |
| isError | boolean | Yes | Whether this event represents an error |
| timestamp | number | Yes | Unix epoch milliseconds |

Step 5: Query Your Data

Retrieve events by trace ID:

curl -s http://localhost:3001/query/trace/abc-123 | jq .

Expected response:

{
  "traceId": "abc-123",
  "service": "payment-api",
  "durationMs": 42.5,
  "isError": false,
  "timestamp": 1710000000000
}

What if it returns 404? The trace ID doesn't exist in the store. Double-check the ID you sent in Step 4. IDs are case-sensitive.


Step 6: Instrument Your Application

Node.js / TypeScript

// aacyn-client.ts — Drop this into any Node.js service
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";
const SERVICE_NAME = process.env.SERVICE_NAME || "my-service";

interface AacynEvent {
  traceId: string;
  service: string;
  durationMs: number;
  isError: boolean;
  timestamp: number;
}

// Buffer events and flush every 100ms for efficiency
let buffer: AacynEvent[] = [];
let flushTimer: NodeJS.Timeout | null = null;

export function trackRequest(traceId: string, durationMs: number, isError = false) {
  buffer.push({
    traceId,
    service: SERVICE_NAME,
    durationMs,
    isError,
    timestamp: Date.now(),
  });

  // Auto-flush when buffer reaches 100 events or after 100ms
  if (buffer.length >= 100) flush();
  else if (!flushTimer) {
    flushTimer = setTimeout(flush, 100);
  }
}

async function flush() {
  if (buffer.length === 0) return;
  const events = buffer;
  buffer = [];
  if (flushTimer) { clearTimeout(flushTimer); flushTimer = null; }

  try {
    await fetch(`${AACYN_URL}/ingest/batch`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ events }),
    });
  } catch (err) {
    // aacyn is down — events are lost, but your service keeps running.
    // This is intentional: observability should never break your app.
    console.warn(`[aacyn] flush failed: ${(err as Error).message}`);
  }
}

// Flush remaining events on shutdown
process.on("beforeExit", flush);

Usage in an Express/Hono/Elysia handler:

import { trackRequest } from "./aacyn-client";
import { randomUUID } from "crypto";

app.get("/checkout", async (req, res) => {
  const traceId = randomUUID();
  const start = performance.now();

  try {
    const result = await processCheckout(req);
    trackRequest(traceId, performance.now() - start, false);
    res.json(result);
  } catch (err) {
    trackRequest(traceId, performance.now() - start, true);
    res.status(500).json({ error: "Internal error" });
  }
});

Python

# aacyn_client.py
import requests, time, threading, uuid, os

AACYN_URL = os.getenv("AACYN_URL", "http://localhost:3001")
SERVICE_NAME = os.getenv("SERVICE_NAME", "my-service")

_buffer = []
_lock = threading.Lock()

def track_request(trace_id: str, duration_ms: float, is_error: bool = False):
    """Record a request event. Batched; flushes synchronously at 100 events."""
    with _lock:
        _buffer.append({
            "traceId": trace_id,
            "service": SERVICE_NAME,
            "durationMs": duration_ms,
            "isError": is_error,
            "timestamp": int(time.time() * 1000),
        })
        should_flush = len(_buffer) >= 100
    if should_flush:
        _flush()

def _flush():
    global _buffer
    with _lock:
        if not _buffer:
            return
        events, _buffer = _buffer, []
    try:
        # POST outside the lock so other threads can keep buffering
        requests.post(f"{AACYN_URL}/ingest/batch",
                      json={"events": events}, timeout=1)
    except Exception:
        pass  # Observability should never crash your app

# Usage:
# trace_id = str(uuid.uuid4())
# start = time.time()
# ... your logic ...
# track_request(trace_id, (time.time() - start) * 1000, is_error=False)

Go

// aacyn.go
package aacyn

import (
    "bytes"
    "encoding/json"
    "net/http"
    "os"
    "sync"
    "time"
)

var (
    aacynURL    = envOr("AACYN_URL", "http://localhost:3001")
    serviceName = envOr("SERVICE_NAME", "my-service")
    mu          sync.Mutex
    buffer      []map[string]interface{}
)

func TrackRequest(traceID string, durationMs float64, isError bool) {
    mu.Lock()
    defer mu.Unlock()
    buffer = append(buffer, map[string]interface{}{
        "traceId":    traceID,
        "service":    serviceName,
        "durationMs": durationMs,
        "isError":    isError,
        "timestamp":  time.Now().UnixMilli(),
    })
    if len(buffer) >= 100 {
        go flush()
    }
}

func flush() {
    mu.Lock()
    if len(buffer) == 0 {
        mu.Unlock()
        return
    }
    events := buffer
    buffer = nil
    mu.Unlock()

    body, err := json.Marshal(map[string]interface{}{"events": events})
    if err != nil {
        return
    }
    resp, err := http.Post(aacynURL+"/ingest/batch", "application/json", bytes.NewReader(body))
    if err != nil {
        return // aacyn is down — drop events rather than block the app
    }
    resp.Body.Close()
}

func envOr(key, fallback string) string {
    if v := os.Getenv(key); v != "" { return v }
    return fallback
}

Architecture Overview

Data flow: Your application sends events via HTTP → Elysia validates and passes the pointer via bun:ffi → the C library writes directly into mmap'd memory (zero-copy) → AVX-512 SIMD scans the columnar store for queries. No intermediate databases, no disk I/O on the hot path, no garbage collection.


Troubleshooting

Server won't start

| Symptom | Cause | Fix |
|---|---|---|
| dlopen: libaacyn.so not found | Native store not compiled | just build-native |
| EADDRINUSE: port 3001 | Port in use | PORT=3002 bun run src/index.ts |
| V8 Map fallback warning | libaacyn.so not in build/ | just build-native |

Events not appearing in queries

| Symptom | Cause | Fix |
|---|---|---|
| accepted: 0 | Empty events array | Ensure events is a non-empty array |
| HTTP 422 | Schema validation failure | Check that all 5 required fields are present and correctly typed |
| trace_not_found (404) | Wrong trace ID | IDs are case-sensitive; verify with the ID from the ingest response |

License issues

| Symptom | Cause | Fix |
|---|---|---|
| No license key — running in free tier | No AACYN_LICENSE_KEY set | export AACYN_LICENSE_KEY=<key from email> |
| Ed25519 license expired | License past expiry date | Check email for renewed key, or reactivate at aacyn.com |
| Ed25519 signature invalid | Tampered or corrupted key | Re-copy the key exactly from your license email |
| Auto-renewal unreachable | Optional heartbeat can't reach server | No action needed — Ed25519 license works offline until expiry |

Production Checklist


What's Next

Questions? Reply to your license email or email support@aacyn.com.

aacyn API Reference

Base URL: http://<your-appliance>:3001

All endpoints accept and return JSON unless otherwise noted. All timestamps are Unix epoch milliseconds unless marked Ns (nanoseconds).


Health

GET /health

Returns the current health status of the aacyn appliance. Use this as your load balancer health check endpoint.

Request: No body required.

Response (200):

{
  "status": "ok",
  "version": "0.1.0",
  "uptime": 86400000
}

| Field | Type | Description |
|---|---|---|
| status | "ok" \| "degraded" \| "down" | Current system health |
| version | string | Semantic version of the running appliance |
| uptime | number | Milliseconds since server start |

When to use: Call this from your monitoring system every 30s. If status is not "ok", page on-call.
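The paging rule above can be sketched as a small helper (the function name is illustrative; adapt to your monitoring system):

```python
def should_page(health: dict) -> bool:
    """Page on-call unless the appliance reports status "ok"."""
    return health.get("status") != "ok"

assert not should_page({"status": "ok", "version": "0.1.0", "uptime": 86400000})
assert should_page({"status": "degraded"})
assert should_page({})  # a missing or empty response body is also page-worthy
```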


License

GET /v1/license/status

Returns the current license validation state.

Response (200):

{
  "valid": true,
  "tier": "pro",
  "expiresAt": 1776357214,
  "daysRemaining": 32,
  "reason": "ed25519_verified"
}

| Field | Type | Description |
|---|---|---|
| valid | boolean | Whether the appliance has an active license |
| tier | string | Current tier: free, pro, team, or enterprise |
| expiresAt | number? | Unix epoch seconds when the license expires |
| daysRemaining | number? | Days until expiry |
| reason | string | One of: ed25519_verified, heartbeat_active, heartbeat_grace, no_license |

Offline operation: The license is verified locally using Ed25519 cryptographic signatures. No network requests are made. If the appliance has internet access, it optionally auto-renews the license before expiry.
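Note that expiresAt is epoch seconds, not milliseconds. daysRemaining is presumably derived from it roughly as in this sketch:

```python
import time

def days_remaining(expires_at_s, now_s=None):
    """Whole days until expiry, given expiresAt in Unix epoch *seconds*."""
    now = time.time() if now_s is None else now_s
    return max(0, int((expires_at_s - now) // 86400))

# Using the example response above: exactly 32 days before the expiry
now = 1776357214 - 32 * 86400
assert days_remaining(1776357214, now) == 32
assert days_remaining(1776357214, 1776357214 + 1) == 0  # already expired
```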


Ingestion

aacyn has two ingestion endpoints. Use JSON batch for simplicity or binary for maximum throughput.

POST /ingest/batch — JSON Batch Ingestion

Accepts an array of RED (Rate/Error/Duration) metric events. This is the recommended starting point for most integrations.

Request:

{
  "events": [
    {
      "traceId": "abc-123",
      "service": "payment-api",
      "durationMs": 42.5,
      "isError": false,
      "timestamp": 1710000000000
    }
  ]
}

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| traceId | string | Yes | Non-empty | Unique identifier for the trace/request |
| service | string | Yes | Non-empty | Name of the originating service |
| durationMs | number | Yes | ≥ 0 | Request duration in milliseconds |
| isError | boolean | Yes | | Whether this request resulted in an error |
| timestamp | number | Yes | > 0 | Unix epoch milliseconds |
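A client-side pre-check mirroring these constraints can catch 422s before they hit the wire. A sketch (server-side validation remains authoritative):

```python
def validate_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event should pass."""
    problems = []
    if not isinstance(event.get("traceId"), str) or not event["traceId"]:
        problems.append("traceId must be a non-empty string")
    if not isinstance(event.get("service"), str) or not event["service"]:
        problems.append("service must be a non-empty string")
    d = event.get("durationMs")
    if isinstance(d, bool) or not isinstance(d, (int, float)) or d < 0:
        problems.append("durationMs must be a number >= 0")
    if not isinstance(event.get("isError"), bool):
        problems.append("isError must be a boolean")
    t = event.get("timestamp")
    if isinstance(t, bool) or not isinstance(t, (int, float)) or t <= 0:
        problems.append("timestamp must be > 0 (Unix epoch ms)")
    return problems

assert validate_event({"traceId": "abc-123", "service": "payment-api",
                       "durationMs": 42.5, "isError": False,
                       "timestamp": 1710000000000}) == []
assert validate_event({"traceId": "", "service": "x", "durationMs": -1,
                       "isError": "no", "timestamp": 0}) != []
```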

Response (202 Accepted):

{
  "accepted": 1,
  "timestamp": 1710000002000
}

| Field | Type | Description |
|---|---|---|
| accepted | number | Number of events written to the columnar store |
| timestamp | number | Server timestamp at time of acceptance |

Error Responses:

| Status | Cause | Fix |
|---|---|---|
| 422 | Schema validation failure | Check that all 5 required fields are present and correctly typed |
| 500 | Internal store error | Check server logs; the native store may have reached capacity |

Performance: JSON batch ingestion sustains ~314K events/sec. For higher throughput, use binary ingestion.


POST /ingest/binary — Binary FlatBuffer Ingestion

Accepts a raw FlatBuffer binary payload for zero-parse, zero-copy ingestion. This is the high-performance path.

Request:

Content-Type: application/octet-stream
Body: Raw FlatBuffer binary (TelemetryBatch schema)

Response (202 Accepted):

{
  "accepted": 100,
  "timestamp": 1710000002000,
  "mode": "binary"
}

Error Responses:

| Status | Cause | Fix |
|---|---|---|
| 400 | Buffer too small (< 8 bytes) | Ensure payload meets minimum size |
| 501 | Native store not available | Run just build-native to compile libaacyn |

Performance: Binary ingestion sustains 5.09M events/sec with 16ms p99 latency. See docs/binary-protocol.md for the FlatBuffer schema and payload generation.


POST /v1/events — Structured Telemetry Events

Accepts richly-typed telemetry events with metric, trace, and log payloads. This is the full-fidelity ingestion path for when you need more than RED metrics.

Request:

[
  {
    "id": "01234567-89ab-cdef-0123-456789abcdef",
    "timestamp": 1710000000000000000,
    "kind": "metric",
    "service": "payment-api",
    "host": "prod-01",
    "tags": { "region": "us-east-1", "env": "production" },
    "metric": {
      "name": "request_duration",
      "value": 42.5,
      "unit": "ms",
      "type": "histogram"
    }
  }
]

| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique event ID (UUIDv7 recommended for time-sortability) |
| timestamp | number | Yes | Unix epoch nanoseconds |
| kind | "metric" \| "trace" \| "log" | Yes | Event classification |
| service | string | Yes | Originating service name |
| host | string | Yes | Host identifier |
| tags | Record<string, string> | Yes | Arbitrary key-value tags for filtering |
| metric | MetricPayload | When kind === "metric" | Metric payload |
| trace | TracePayload | When kind === "trace" | Trace payload |
| log | LogPayload | When kind === "log" | Log payload |

MetricPayload:

| Field | Type | Description |
|---|---|---|
| name | string | Metric name (e.g., request_duration) |
| value | number | Metric value |
| unit | string | Unit of measurement (e.g., ms, bytes, count) |
| type | "gauge" \| "counter" \| "histogram" | Aggregation semantics |

TracePayload:

| Field | Type | Description |
|---|---|---|
| traceId | string | Distributed trace ID |
| spanId | string | Span ID within the trace |
| parentSpanId | string? | Parent span (omit for root spans) |
| operationName | string | Name of the operation |
| duration | number | Duration in nanoseconds |
| status | "ok" \| "error" | Span outcome |
| attributes | Record | Structured span attributes |

LogPayload:

| Field | Type | Description |
|---|---|---|
| level | "debug" \| "info" \| "warn" \| "error" \| "fatal" | Log severity |
| message | string | Log message |
| attributes | Record | Structured log attributes |

Response (200):

{
  "accepted": 1,
  "timestamp": 1710000002000
}

Query

POST /v1/query — SQL Query

Executes a SQL query against the columnar store.

Request:

{
  "sql": "SELECT service, count(*) FROM events WHERE isError = true GROUP BY service",
  "timeRange": {
    "startNs": 1710000000000000000,
    "endNs": 1710086400000000000
  },
  "limit": 100
}

| Field | Type | Required | Description |
|---|---|---|---|
| sql | string | Yes | SQL query string |
| timeRange | object | No | Optional time range filter |
| timeRange.startNs | number | No | Start of range (nanoseconds) |
| timeRange.endNs | number | No | End of range (nanoseconds) |
| limit | number | No | Maximum rows to return |
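A small helper that assembles this request body, omitting the optional fields when unset (the helper name is illustrative; the endpoint path and field names come from this section):

```python
def build_query(sql, start_ns=None, end_ns=None, limit=None):
    """Assemble a /v1/query request body, skipping unset optional fields."""
    body = {"sql": sql}
    if start_ns is not None and end_ns is not None:
        body["timeRange"] = {"startNs": start_ns, "endNs": end_ns}
    if limit is not None:
        body["limit"] = limit
    return body

body = build_query(
    "SELECT service, count(*) FROM events WHERE isError = true GROUP BY service",
    start_ns=1710000000000000000, end_ns=1710086400000000000, limit=100)
assert body["timeRange"]["startNs"] == 1710000000000000000
assert build_query("SELECT 1") == {"sql": "SELECT 1"}  # optional fields omitted
```

Send it with any HTTP client, e.g. `requests.post(f"{base_url}/v1/query", json=body)`.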

Response (200):

{
  "columns": ["service", "count"],
  "rows": [
    ["payment-api", 42],
    ["auth-service", 7]
  ],
  "durationNs": 150000,
  "totalRows": 2
}

| Field | Type | Description |
|---|---|---|
| columns | string[] | Column names in the result set |
| rows | unknown[][] | Row data matching the column order |
| durationNs | number | Query execution time in nanoseconds |
| totalRows | number | Total matching rows (before limit) |

GET /query/trace/:traceId — Trace Lookup

Performs an O(1) hash lookup for a specific trace by ID.

Request: Pass the trace ID as a URL parameter.

GET /query/trace/abc-123

Response (200):

{
  "traceId": "abc-123",
  "service": "payment-api",
  "durationMs": 42.5,
  "isError": false,
  "timestamp": 1710000000000
}

Response (404):

{
  "error": "trace_not_found",
  "traceId": "abc-123"
}

Why O(1)? The native store maintains a hash index on traceId for instant lookups regardless of store size. This is separate from the columnar layout used for analytical scans.


Webhooks

POST /webhooks/stripe

Receives Stripe webhook events for license lifecycle management. This endpoint is called by Stripe, not by your application.

Handled Events:

Response: 200 OK or 400 Error

Security: In production, configure STRIPE_WEBHOOK_SECRET to verify webhook signatures. Without it, any POST to this endpoint would be processed.

aacyn Configuration Reference

Every environment variable that aacyn reads, what it does, and what happens if you don't set it.


Quick Reference

| Variable | Required | Default | Component |
|---|---|---|---|
| PORT | No | 3001 | API Server |
| AACYN_LICENSE_KEY | No | (none) | License |
| AACYN_HEARTBEAT_URL | No | Production URL | License |
| AACYN_BPF_OBJ | No | Auto-detected | eBPF |
| LICENSE_SALT | Yes (prod) | aacyn-dev-salt | License |
| SOVEREIGN_PRIVATE_KEY | No | Auto-generated | License |
| STRIPE_SECRET_KEY | Yes (prod) | (none) | Billing |
| STRIPE_WEBHOOK_SECRET | Yes (prod) | (none) | Billing |
| STRIPE_SOVEREIGN_TIER_PRICE_ID | Yes (prod) | (none) | Billing |
| RESEND_API_KEY | No | (none) | Email |
| EMAIL_FROM | No | aacyn <license@aacyn.com> | Email |

API Server

PORT

The TCP port the aacyn API server listens on.

  • Default: 3001
  • Example: PORT=8080
  • If unset: Listens on port 3001

Tip: If you're running behind a reverse proxy (nginx, Caddy), keep the default and proxy from 443 → 3001.


License System

AACYN_LICENSE_KEY

The Ed25519 signed license key your appliance uses for offline verification. You received this in your purchase email.

  • Default: None (free tier)
  • Format: base64url(payload).base64url(signature)
  • Example: AACYN_LICENSE_KEY=eyJlIjoiLi4uIiwidGllciI6InBybyJ9.sig...
  • If unset: Server runs in free tier — dashboard, JSON ingestion, and Golden Signals enabled

How it works: On startup, the appliance verifies the Ed25519 signature locally using the embedded public key. The signed payload contains your email, tier (free/pro/team/enterprise), and expiry date. Zero network requests. If the appliance has internet access, it optionally pings the heartbeat server to auto-renew the license before expiry.

AACYN_PUBLIC_KEY

PEM-encoded Ed25519 public key used to verify license signatures locally.

  • Default: None (Ed25519 verification disabled)
  • Format: PEM string
  • If unset: Falls back to heartbeat-based validation

AACYN_HEARTBEAT_URL

URL of the Cloudflare Worker that handles optional license auto-renewal.

  • Default: https://aacyn-heartbeat.nmeshed-test.workers.dev
  • Example: AACYN_HEARTBEAT_URL=https://heartbeat.aacyn.com
  • If unset: Uses the default production URL

Note: The heartbeat is optional. The Ed25519 license works fully offline. The heartbeat is a convenience for auto-renewing licenses before expiry — if the appliance can reach the internet, it will fetch a renewed license automatically.

LICENSE_SALT

Secret salt used to derive heartbeat keys from Stripe customer IDs via SHA-256(salt + customerId).

  • Default: aacyn-dev-salt (development only)
  • Format: Any string; 64 hex chars recommended for production
  • Example: LICENSE_SALT=9b4f5b5d2ec489d1...
  • If unset: Uses the dev salt — keys will not match the production Worker

[!CAUTION] This value must be identical between the API server and the Cloudflare Worker. If they differ, the server will derive different license keys than the Worker expects, and license validation will silently fail.
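The derivation is simple enough to sanity-check locally. A sketch of SHA-256(salt + customerId) with an illustrative customer ID, showing why mismatched salts silently break validation:

```python
import hashlib

def derive_heartbeat_key(salt: str, customer_id: str) -> str:
    """SHA-256(salt + customerId), hex-encoded — the derivation described above."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()

# Hypothetical customer ID for demonstration
a = derive_heartbeat_key("aacyn-dev-salt", "cus_123")
b = derive_heartbeat_key("9b4f5b5d2ec489d1", "cus_123")
assert len(a) == 64   # hex-encoded SHA-256 digest
assert a != b         # different salts derive different keys — validation fails
```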

SOVEREIGN_PRIVATE_KEY

Base64-encoded Ed25519 private key (PEM format) used to mint offline-verifiable license keys.

  • Default: Auto-generates an ephemeral keypair on startup (development)
  • Format: Base64-encoded PEM
  • If unset: Generates a new keypair every restart — licenses from previous runs become unverifiable

Generate for production:

node -e "const c=require('crypto'); const {privateKey}=c.generateKeyPairSync('ed25519'); console.log(Buffer.from(privateKey.export({type:'pkcs8',format:'pem'})).toString('base64'))"

eBPF

AACYN_BPF_OBJ

Absolute path to the compiled eBPF probe object file.

  • Default: <project_root>/build/aacyn_probes.bpf.o (auto-detected)
  • Example: AACYN_BPF_OBJ=/opt/aacyn/build/aacyn_probes.bpf.o
  • If unset: Searches for the BPF object relative to the project root
  • If file not found: eBPF is silently disabled; server runs normally

Requirements for eBPF: Linux kernel 5.8+, CAP_BPF (run as root), and libbpf-dev installed. See docs/ebpf.md for setup.


Stripe Billing

STRIPE_SECRET_KEY

Your Stripe API secret key. Used server-side to create Checkout Sessions.

  • Default: None
  • Format: sk_live_... (production) or sk_test_... (sandbox)
  • If unset: The /api/checkout endpoint returns HTTP 503

STRIPE_WEBHOOK_SECRET

Signing secret for verifying Stripe webhook payloads. Prevents spoofed webhook events.

  • Default: None
  • Format: whsec_...
  • If unset: Webhooks are processed without signature verification (unsafe in production)

Where to find: Stripe Dashboard → Developers → Webhooks → your endpoint → Signing secret → Reveal

STRIPE_SOVEREIGN_TIER_PRICE_ID

The Stripe Price ID for the Pro tier subscription (formerly "Sovereign").

  • Default: None
  • Format: price_...
  • If unset: Checkout sessions default to Pro tier

STRIPE_PRO_PRICE_ID / STRIPE_TEAM_PRICE_ID / STRIPE_ENTERPRISE_PRICE_ID

Stripe Price IDs for each tier. Used by the webhook handler to determine which tier to embed in the Ed25519 license.

  • Default: None
  • Format: price_...
  • If unset: All checkouts default to Pro tier

Where to find: Stripe Dashboard → Products → aacyn [Tier Name] → Pricing → Price ID


Email (Resend)

RESEND_API_KEY

API key for sending license delivery emails via Resend.

  • Default: None
  • Format: re_...
  • If unset: License keys are logged to the server console instead of emailed

Development: Leave unset. License keys will be printed to stdout, which is sufficient for testing.

EMAIL_FROM

The sender address for license emails.

  • Default: aacyn <license@aacyn.com>
  • Example: EMAIL_FROM=aacyn <license@send.aacyn.com>
  • Constraint: Must match a verified domain in your Resend account

Environment File Hierarchy

aacyn uses three environment files, each for a different context:

.env.production   → Real secrets for deployed appliance (NEVER committed)
.env.local        → Local development defaults (gitignored)
.env.test         → Deterministic values for bun test (committed)

Load with: source .env.production && bun run src/index.ts

aacyn Binary Ingestion Protocol

Performance: 5.09M events/sec with 16ms p99 latency — 16× faster than JSON ingestion.

Use this protocol when JSON parsing overhead is unacceptable. The binary path achieves zero-parse, zero-copy ingestion by passing a raw memory pointer from Bun directly into the C columnar store via bun:ffi.


When to Use Binary vs JSON

| | JSON (/ingest/batch) | Binary (/ingest/binary) |
|---|---|---|
| Throughput | ~314K events/sec | 5.09M events/sec |
| Latency (p99) | ~219ms | 16ms |
| Integration effort | Drop-in HTTP POST | Requires FlatBuffer tooling |
| Use case | Application telemetry | High-frequency infrastructure metrics, log pipelines |

Rule of thumb: If you're ingesting < 100K events/sec, use JSON. It's simpler and fast enough. Switch to binary when you're pushing the limits.


Wire Format

The binary protocol uses FlatBuffers for zero-copy serialization. Each payload is a TelemetryBatch containing an array of fixed-size event records.

Event Record Layout

Each event is exactly 16 bytes in the wire format:

Offset  Size  Type    Field
──────  ────  ──────  ─────────────
0       8     u64     timestamp (Unix epoch ms)
8       4     f32     durationMs
12      2     u16     flags (bit 0 = isError)
14      2     u16     padding (alignment)

Why 16 bytes? Cache-line aligned. On modern CPUs, 16B events fit exactly 4 per cache line (64B), maximizing L1/L2 cache efficiency during SIMD scans.

Batch Header

Offset  Size  Type    Field
──────  ────  ──────  ─────────────
0       4     u32     magic (0xAACE0001)
4       4     u32     event_count
8       N×16  bytes   event_records[]

Total payload size: 8 + (event_count × 16) bytes
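The layout above can be sanity-checked with Python's struct module. The format strings mirror the two tables (little-endian assumed, as in the builders below):

```python
import struct

MAGIC = 0xAACE0001
HEADER = struct.Struct("<II")    # magic (u32), event_count (u32)
EVENT = struct.Struct("<QfHH")   # timestamp (u64), durationMs (f32), flags (u16), padding (u16)

assert HEADER.size == 8          # batch header
assert EVENT.size == 16          # one event record, cache-line friendly

events = [(1710000000000, 42.5, 0, 0), (1710000001000, 1250.0, 1, 0)]
payload = HEADER.pack(MAGIC, len(events)) + b"".join(EVENT.pack(*e) for e in events)
assert len(payload) == 8 + len(events) * 16   # total payload size formula
```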


Generating Binary Payloads

Using the Bundled Generator

cd /opt/aacyn
bun run benchmarks/generate_payload.ts

Expected output:

FlatBuffer Payload Generated
  Events:   100
  Bytes:    1656 (1.62 KB)
  Output:   benchmarks/payload.bin

Sending a Binary Payload

curl -X POST http://localhost:3001/ingest/binary \
  -H "Content-Type: application/octet-stream" \
  --data-binary @benchmarks/payload.bin

Expected response (202):

{
  "accepted": 100,
  "timestamp": 1710000002000,
  "mode": "binary"
}

What if it fails?

  • 400 Buffer too small → Payload must be ≥ 8 bytes (header only = 8 bytes)
  • 501 Binary ingestion requires native store → Run just build-native first

Building Your Own Payload (Node.js)

function buildBinaryPayload(events: Array<{timestamp: number, durationMs: number, isError: boolean}>): Buffer {
  const MAGIC = 0xAACE0001;
  const headerSize = 8;
  const eventSize = 16;
  const buf = Buffer.alloc(headerSize + events.length * eventSize);

  // Header
  buf.writeUInt32LE(MAGIC, 0);
  buf.writeUInt32LE(events.length, 4);

  // Events
  for (let i = 0; i < events.length; i++) {
    const offset = headerSize + i * eventSize;
    buf.writeBigUInt64LE(BigInt(events[i].timestamp), offset);
    buf.writeFloatLE(events[i].durationMs, offset + 8);
    buf.writeUInt16LE(events[i].isError ? 1 : 0, offset + 12);
    buf.writeUInt16LE(0, offset + 14); // padding
  }

  return buf;
}

// Usage
const payload = buildBinaryPayload([
  { timestamp: Date.now(), durationMs: 42.5, isError: false },
  { timestamp: Date.now(), durationMs: 1250, isError: true },
]);

await fetch("http://localhost:3001/ingest/binary", {
  method: "POST",
  headers: { "Content-Type": "application/octet-stream" },
  body: payload,
});

Building Your Own Payload (Python)

import struct
import requests

def build_binary_payload(events):
    """Build an aacyn binary payload from a list of event dicts."""
    MAGIC = 0xAACE0001
    header = struct.pack("<II", MAGIC, len(events))
    body = b"".join(
        struct.pack("<QfHH",
            event["timestamp"],
            event["duration_ms"],
            1 if event.get("is_error") else 0,
            0  # padding
        )
        for event in events
    )
    return header + body

payload = build_binary_payload([
    {"timestamp": 1710000000000, "duration_ms": 42.5, "is_error": False},
    {"timestamp": 1710000001000, "duration_ms": 1250.0, "is_error": True},
])

requests.post(
    "http://localhost:3001/ingest/binary",
    data=payload,
    headers={"Content-Type": "application/octet-stream"},
)

How It Works Internally

Key insight: There is no serialization/deserialization step. The raw bytes from the HTTP request body are passed as a pointer directly to the C function, which copies them into the mmap'd columnar store. This is why binary ingestion is 16× faster than JSON.

aacyn eBPF Probes — V2 Architecture

Zero-instrumentation kernel telemetry. eBPF probes intercept network syscalls at the kernel level and pipe events directly into the columnar store — no application changes required.

This is an optional advanced feature. The server works fully without eBPF.


What the Probes Capture

| Probe | Kernel Attachment | What It Records |
|---|---|---|
| trace_connect_enter | tracepoint/syscalls/sys_enter_connect | Outbound TCP connections (dest IP, port, process), stashes fd + timestamp for exit |
| trace_connect_exit | tracepoint/syscalls/sys_exit_connect | Connection latency, source IP via fd→socket→sock→skc_rcv_saddr CO-RE walk |
| trace_tcp_sendmsg | kprobe/tcp_sendmsg | Bytes sent, source + dest IP from struct sock * param |
| aacyn_auto (accept4) | tracepoint/syscalls/sys_enter_accept4 | Inbound connections — service auto-discovery |

Translation: Every time any process on the machine opens a network connection or sends data, aacyn records it with nanosecond precision — including the process's container IP for topology graph merging — without touching application code.


V2 Architecture (Dual Ring Buffers + Observable Backpressure)

V1 → V2 Upgrade Summary

| Feature | V1 | V2 |
|---|---|---|
| Ring Buffers | 1 × 256KB events_ringbuf | 2: standard_events (256KB) + critical_errors (64KB) |
| Priority Routing | None — errors mixed with telemetry | emit_event(is_critical) routes by severity |
| Backpressure | Silent drops, invisible | Per-CPU atomic drop_counters, surfaced in API + HUD |
| Source IP | Not tracked | skc_rcv_saddr via CO-RE socket introspection |
| Topology Merge | 3 disconnected subgraphs | IP-correlated connected graph |

Event Struct (network_event)

struct network_event {
  __u64 timestamp_ns;  /* bpf_ktime_get_ns() monotonic clock */
  __u32 pid;           /* Process ID */
  __u32 tgid;          /* Thread Group ID */
  __u32 dest_ip;       /* Destination IPv4 (network byte order) */
  __u32 source_ip;     /* Source IPv4 — container identity */
  __u16 dest_port;     /* Destination port (network byte order) */
  __u16 status;        /* 0=connect, 1=connected, 2=send, 3=connect_failed */
  __u64 bytes;         /* Bytes sent, or duration_ns on connect-exit */
  char comm[16];       /* Process name (TASK_COMM_LEN) */
} __attribute__((packed));
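For illustration, a user-space consumer could decode one record with Python's struct module, assuming the packed layout above (no compiler-inserted padding) and a little-endian host. `parse_event` is a hypothetical helper, not part of aacyn:

```python
import struct

# Mirrors the packed C struct: u64, 4 × u32, 2 × u16, u64, char[16]
NETWORK_EVENT = struct.Struct("<QIIIIHHQ16s")
assert NETWORK_EVENT.size == 52  # packed — no alignment padding

def parse_event(buf: bytes) -> dict:
    ts, pid, tgid, dest_ip, source_ip, dest_port, status, nbytes, comm = \
        NETWORK_EVENT.unpack(buf)
    return {
        "timestamp_ns": ts, "pid": pid, "tgid": tgid,
        "dest_ip": dest_ip, "source_ip": source_ip,
        "dest_port": dest_port, "status": status, "bytes": nbytes,
        "comm": comm.split(b"\x00", 1)[0].decode(),  # strip NUL padding
    }

sample = NETWORK_EVENT.pack(123, 42, 42, 0x0A000001, 0, 443, 1, 1500,
                            b"curl".ljust(16, b"\x00"))
assert parse_event(sample)["comm"] == "curl"
```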

BPF Maps

| Map | Type | Size | Purpose |
|---|---|---|---|
| standard_events | RINGBUF | 256KB | High-volume telemetry (connects, sends) |
| critical_errors | RINGBUF | 64KB | Failed connects (non-zero, non-EINPROGRESS retval) |
| drop_counters | PERCPU_ARRAY | 2 keys × N CPUs | Index 0: standard drops, Index 1: critical drops |
| connect_state | HASH | 65536 entries | Per-connect state: stashes fd + timestamp + dest for exit handler |

Source IP Tracking (Socket Introspection)

Problem

In Docker, source_comm (connect-side) and portNames (accept-side) produce different node IDs for the same container — creating disconnected topology subgraphs.

Solution: CO-RE Socket Walk in trace_connect_exit

current → task_struct.files → files_struct.fdt → fdtable.fd[saved_fd]
  → file.private_data → socket.sk → sock.__sk_common.skc_rcv_saddr

This reads the socket's local IPv4 address after connect() completes (when the address is bound). Each Docker container has a unique bridge IP, giving a stable node identity.

Merge Algorithm (TypeScript)

// Build ip→comm from all edges where source_ip is nonzero
const ipToSource = new Map<string, string>();
for (const edge of edges)
  if (edge.sourceIp !== "0.0.0.0")
    ipToSource.set(edge.sourceIp, edge.source);

// Rename targets: dest_ip → resolved source_comm
for (const edge of edges) {
  const resolved = ipToSource.get(edge.destIp);
  if (resolved) edge.target = resolved;
}

Result: nginx → api (node) becomes nginx → node because edge "node" reports source_ip=172.18.0.3, and the nginx edge targets dest_ip=172.18.0.3.


Prerequisites

| Requirement | How to Check | How to Install |
|---|---|---|
| Linux kernel 5.8+ | uname -r | Upgrade kernel or OS |
| CONFIG_BPF=y | zgrep CONFIG_BPF /proc/config.gz | Rebuild kernel (most distros ship with BPF) |
| BTF (BPF Type Format) | ls /sys/kernel/btf/vmlinux | Install linux-headers-$(uname -r) |
| clang | clang --version | apt install clang llvm |
| libbpf-dev | dpkg -l libbpf-dev | apt install libbpf-dev libelf-dev |
| root or CAP_BPF | whoami | Run as root, or setcap cap_bpf+ep on the binary |

# Install all prerequisites on Ubuntu/Debian
sudo apt update && sudo apt install -y clang llvm libbpf-dev libelf-dev linux-headers-$(uname -r)

Setup

Step 1: Build with eBPF Enabled

cd native
make clean && make EBPF=1

Expected output:

✓ BPF object compiled: ../build/aacyn_probes.bpf.o
cc ... -o ../build/libaacyn.so libaacyn.c -lbpf -lelf -lz

[!CAUTION] Linker ordering matters. -lbpf -lelf -lz must appear AFTER the source file. The Makefile handles this via LDLIBS.

Step 2: Start with Root

cd ts/apps/api
sudo bun run src/index.ts

Expected log:

[libaacyn] V2 eBPF probes attached: /opt/aacyn/build/aacyn_probes.bpf.o
  standard_events (256KB) + critical_errors (64KB) + drop_counters (Per-CPU)

Performance Impact

eBPF probes run in kernel space and add negligible overhead:

| Metric | Without eBPF | With eBPF | Delta |
|---|---|---|---|
| Binary ingestion (evt/sec) | 5,089,364 | 4,882,072 | -4% (noise) |
| p95 latency | 12.73ms | 12.67ms | No impact |
| p99 latency | 16.12ms | 15.78ms | No impact |

Conclusion: eBPF probes are free. The 4% variance is within run-to-run noise.


Disabling eBPF

make clean && make   # builds without EBPF=1, uses stub functions

The server logs [eBPF] No BPF object found and continues normally.

Contact and Support


Get Help

| Channel | Contact | Use For |
|---|---|---|
| Technical support | support@aacyn.com | Appliance setup, configuration, troubleshooting |
| Security | security@aacyn.com | Vulnerability reports, security concerns |
| Feedback | feedback@aacyn.com | Feature requests, bug reports, general feedback |
| Billing | Stripe Customer Portal | Manage subscription, invoices, payment methods |

Before You Contact Us

Many issues can be resolved with a quick check:

Appliance won't start

# Check if the process is running
pgrep -a bun

# Check the health endpoint
curl -s http://localhost:3001/health | jq .

If the health check returns Connection refused, the server isn't running. Restart it:

cd /opt/aacyn/ts/apps/api
bun run src/index.ts

Events not ingesting

# Send a test event
curl -X POST http://localhost:3001/ingest/batch \
  -H "Content-Type: application/json" \
  -d '{"events":[{"traceId":"test","service":"diag","durationMs":1,"isError":false,"timestamp":'$(date +%s000)'}]}'

| Response | Meaning |
|---|---|
| 202 {"accepted":1} | Ingestion is working — check your application code |
| 422 | Request body doesn't match the event schema |
| 500 | Internal error — the store may be full; restart to clear |

License issues

curl -s http://localhost:3001/v1/license/status | jq .

| Status | Meaning | Action |
|---|---|---|
| ed25519_verified | License verified offline | Issue is elsewhere |
| no_license | Key not set | Set the AACYN_LICENSE_KEY environment variable |
| heartbeat_grace | Heartbeat auto-renewal unreachable | No action needed — license works offline until expiry |
| expired | License past expiry date | Check email for renewed key, or reactivate at aacyn.com |

Security Policy

We take security seriously. If you discover a vulnerability:

  1. Email security@aacyn.com with a description of the issue
  2. Include steps to reproduce if possible
  3. We will acknowledge receipt within 24 hours
  4. We will not take legal action against good-faith security researchers

Service Status

The aacyn appliance runs entirely on your hardware. There are no mandatory external dependencies.