Everything you need to deploy, integrate, and operate the aacyn telemetry appliance, kept in sync with each release.
Quickstart: Deploy self-hosted observability in 5 minutes.
Reference: Every endpoint, field, and status code.
Reference: All environment variables with defaults.
Advanced: The 5M evt/sec FlatBuffer wire format.
Advanced: Zero-instrumentation kernel telemetry.
Support: Get help, report issues, and self-service troubleshooting.
Deploy self-hosted observability in 5 minutes. Your data never leaves your network.
| Requirement | Why |
|---|---|
| Linux x86_64 (Ubuntu 22.04+, Debian 12+) | AVX-512 SIMD + eBPF require modern Linux |
| Bun ≥ 1.1 | Runtime for the API control plane |
| A C compiler (gcc or clang) | Compiles the native columnar store |
| 512 MB free RAM | Stores 16M events in mmap'd memory |
# Verify prerequisites
uname -m # expect: x86_64
bun --version # expect: 1.x
gcc --version # expect: any version
What if it fails? If `uname -m` returns `aarch64`, you're on ARM — aacyn runs but without AVX-512 SIMD acceleration (it falls back to NEON). If `bun` is not found, install it: `curl -fsSL https://bun.sh/install | bash`
# Download and extract the release tarball (requires active license — see aacyn.com)
cd /opt
tar xzf aacyn-v0.5.0.tar.gz
cd aacyn
# Build the native columnar store
just build-native
Expected output:
libaacyn build complete
Platform: Linux/x86_64
Library: ../build/libaacyn.so
CFLAGS: -O3 ... -mavx512f -mavx512bw -mavx512vl
What if it fails? If you see `error: unrecognized command-line option '-mavx512f'`, your CPU doesn't support AVX-512. Edit `native/Makefile` and remove the `-mavx512*` flags — the store will still work, just without SIMD-accelerated scans.
When you purchased aacyn, you received an email with your AACYN_LICENSE_KEY. This is an Ed25519 signed license that verifies offline — no pings, no phone-home, no cloud dependency.
Set it as an environment variable:
export AACYN_LICENSE_KEY=eyJlIjoiLi4uIiwidGllciI6InBybyIsImV4cCI6Li4ufQ.base64url_signature
How it works: The license key is a cryptographically signed payload containing your email, tier (free/pro/team/enterprise), and expiry date. Your appliance verifies the signature locally using an embedded public key — in microseconds, with zero network requests. If the key is missing, aacyn runs in free tier mode (dashboard + JSON ingestion + Golden Signals). If the appliance has internet access, it will optionally auto-renew your license before expiry.
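The offline check described above can be sketched with Node's built-in Ed25519 support. This is an illustration of the mechanism, not aacyn's actual implementation: the payload field names (`e`, `tier`, `exp`) follow the example key in this guide, and the exact bytes that are signed are an assumption.

```typescript
import { verify, type KeyObject } from "node:crypto";

interface LicensePayload { e: string; tier: string; exp: number }

// Hypothetical sketch: split "payload.signature", check the Ed25519
// signature over the decoded payload bytes, then check expiry.
function verifyLicense(key: string, publicKey: KeyObject): LicensePayload | null {
  const dot = key.lastIndexOf(".");
  if (dot < 0) return null;
  const payloadBytes = Buffer.from(key.slice(0, dot), "base64url");
  const sig = Buffer.from(key.slice(dot + 1), "base64url");
  // Ed25519 signs raw bytes; no digest algorithm is passed to verify().
  if (!verify(null, payloadBytes, publicKey, sig)) return null;
  const payload = JSON.parse(payloadBytes.toString()) as LicensePayload;
  if (payload.exp * 1000 < Date.now()) return null; // assume exp is epoch seconds
  return payload;
}
```

Because verification is a single `crypto.verify` call against an embedded public key, it needs no network and completes in microseconds, which is the property the paragraph above relies on.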
Verify:
# The key is a base64url-encoded JWT-like string (two parts separated by a dot)
echo $AACYN_LICENSE_KEY | grep -c '\.'
# expect: 1 (the dot separator)
cd /opt/aacyn/ts/apps/api
bun install
bun run src/index.ts
Expected output:
[libaacyn] Native store initialized: 16,000,000 capacity, 198.4MB mmap'd
[aacyn] Native FFI store active — V8 GC bypassed
aacyn API running at http://localhost:3001
What if it fails?
- `Native store unavailable, using V8 Map fallback` → Run `just build-native` first. The server will still work but without SIMD acceleration.
- `EADDRINUSE` → Port 3001 is in use. Set a different port: `PORT=3002 bun run src/index.ts`
- `No AACYN_LICENSE_KEY set — running in trial mode` → This is fine for testing. Set the key when ready for production.
Health check (in another terminal):
curl -s http://localhost:3001/health | jq .
Expected response:
{
"status": "ok",
"version": "0.1.0",
"uptime": 1234
}
What if it fails? If `curl` returns `Connection refused`, the server isn't running. Check the terminal where you started it for error messages.
aacyn accepts telemetry via HTTP POST. Here's how to send events from your application:
This is the simplest integration path. Send an array of RED (Rate/Error/Duration) metric events:
curl -X POST http://localhost:3001/ingest/batch \
-H "Content-Type: application/json" \
-d '{
"events": [
{
"traceId": "abc-123",
"service": "payment-api",
"durationMs": 42.5,
"isError": false,
"timestamp": 1710000000000
},
{
"traceId": "def-456",
"service": "payment-api",
"durationMs": 1250.0,
"isError": true,
"timestamp": 1710000001000
}
]
}'
Expected response (HTTP 202):
{
"accepted": 2,
"timestamp": 1710000002000
}
Why 202? The events are accepted for processing, not synchronously committed. This allows the native store to batch writes for maximum throughput. Under normal operation, events are available for query within 1ms.
| Field | Type | Required | Description |
|---|---|---|---|
| traceId | string | Yes | Unique identifier for the trace/request |
| service | string | Yes | Name of the originating service (e.g., payment-api) |
| durationMs | number | Yes | Request duration in milliseconds |
| isError | boolean | Yes | Whether this event represents an error |
| timestamp | number | Yes | Unix epoch milliseconds |
Retrieve events by trace ID:
curl -s http://localhost:3001/query/trace/abc-123 | jq .
Expected response:
{
"traceId": "abc-123",
"service": "payment-api",
"durationMs": 42.5,
"isError": false,
"timestamp": 1710000000000
}
What if it returns 404? The trace ID doesn't exist in the store. Double-check the ID you sent in Step 4. IDs are case-sensitive.
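For application code, the lookup can be wrapped in a small helper that maps the documented 404 to `null`. A minimal sketch (the endpoint and response shape are as shown above; error handling beyond that is up to you):

```typescript
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";

interface TraceEvent {
  traceId: string;
  service: string;
  durationMs: number;
  isError: boolean;
  timestamp: number;
}

// Returns the stored event, or null when the trace ID is unknown (404).
async function getTrace(traceId: string): Promise<TraceEvent | null> {
  // IDs are case-sensitive; encode so unusual characters survive the URL.
  const res = await fetch(`${AACYN_URL}/query/trace/${encodeURIComponent(traceId)}`);
  if (res.status === 404) return null;
  if (!res.ok) throw new Error(`aacyn lookup failed: HTTP ${res.status}`);
  return (await res.json()) as TraceEvent;
}
```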
// aacyn-client.ts — Drop this into any Node.js service
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";
const SERVICE_NAME = process.env.SERVICE_NAME || "my-service";
interface AacynEvent {
traceId: string;
service: string;
durationMs: number;
isError: boolean;
timestamp: number;
}
// Buffer events and flush every 100ms for efficiency
let buffer: AacynEvent[] = [];
let flushTimer: NodeJS.Timeout | null = null;
export function trackRequest(traceId: string, durationMs: number, isError = false) {
buffer.push({
traceId,
service: SERVICE_NAME,
durationMs,
isError,
timestamp: Date.now(),
});
// Auto-flush when buffer reaches 100 events or after 100ms
if (buffer.length >= 100) flush();
else if (!flushTimer) {
flushTimer = setTimeout(flush, 100);
}
}
async function flush() {
if (buffer.length === 0) return;
const events = buffer;
buffer = [];
if (flushTimer) { clearTimeout(flushTimer); flushTimer = null; }
try {
await fetch(`${AACYN_URL}/ingest/batch`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ events }),
});
} catch (err) {
// aacyn is down — events are lost, but your service keeps running.
// This is intentional: observability should never break your app.
console.warn(`[aacyn] flush failed: ${(err as Error).message}`);
}
}
// Flush remaining events on shutdown
process.on("beforeExit", flush);
Usage in an Express/Hono/Elysia handler:
import { trackRequest } from "./aacyn-client";
import { randomUUID } from "crypto";
app.get("/checkout", async (req, res) => {
const traceId = randomUUID();
const start = performance.now();
try {
const result = await processCheckout(req);
trackRequest(traceId, performance.now() - start, false);
res.json(result);
} catch (err) {
trackRequest(traceId, performance.now() - start, true);
res.status(500).json({ error: "Internal error" });
}
});
# aacyn_client.py
import requests, time, threading, uuid, os
AACYN_URL = os.getenv("AACYN_URL", "http://localhost:3001")
SERVICE_NAME = os.getenv("SERVICE_NAME", "my-service")
_buffer = []
_lock = threading.Lock()
def track_request(trace_id: str, duration_ms: float, is_error: bool = False):
"""Record a request event. Non-blocking, batched automatically."""
with _lock:
_buffer.append({
"traceId": trace_id,
"service": SERVICE_NAME,
"durationMs": duration_ms,
"isError": is_error,
"timestamp": int(time.time() * 1000),
})
if len(_buffer) >= 100:
_flush()
def _flush():
global _buffer
if not _buffer:
return
events, _buffer = _buffer, []
try:
requests.post(f"{AACYN_URL}/ingest/batch",
json={"events": events}, timeout=1)
except Exception as e:
pass # Observability should never crash your app
# Usage:
# trace_id = str(uuid.uuid4())
# start = time.time()
# ... your logic ...
# track_request(trace_id, (time.time() - start) * 1000, is_error=False)
// aacyn.go
package aacyn
import (
"bytes"
"encoding/json"
"net/http"
"os"
"sync"
"time"
)
var (
aacynURL = envOr("AACYN_URL", "http://localhost:3001")
serviceName = envOr("SERVICE_NAME", "my-service")
mu sync.Mutex
buffer []map[string]interface{}
)
func TrackRequest(traceID string, durationMs float64, isError bool) {
mu.Lock()
defer mu.Unlock()
buffer = append(buffer, map[string]interface{}{
"traceId": traceID,
"service": serviceName,
"durationMs": durationMs,
"isError": isError,
"timestamp": time.Now().UnixMilli(),
})
if len(buffer) >= 100 {
go flush()
}
}
func flush() {
mu.Lock()
if len(buffer) == 0 { mu.Unlock(); return }
events := buffer
buffer = nil
mu.Unlock()
body, _ := json.Marshal(map[string]interface{}{"events": events})
http.Post(aacynURL+"/ingest/batch", "application/json", bytes.NewReader(body))
}
func envOr(key, fallback string) string {
if v := os.Getenv(key); v != "" { return v }
return fallback
}
Data flow: Your application sends events via HTTP → Elysia validates and passes the pointer via bun:ffi → the C library writes directly into mmap'd memory (zero-copy) → AVX-512 SIMD scans the columnar store for queries. No intermediate databases, no disk I/O on the hot path, no garbage collection.
| Symptom | Cause | Fix |
|---|---|---|
| dlopen: libaacyn.so not found | Native store not compiled | just build-native |
| EADDRINUSE: port 3001 | Port in use | PORT=3002 bun run src/index.ts |
| V8 Map fallback warning | libaacyn.so not in build/ | just build-native |
| Symptom | Cause | Fix |
|---|---|---|
| accepted: 0 | Empty events array | Ensure events is a non-empty array |
| HTTP 422 | Schema validation failure | Check that all 5 required fields are present and correctly typed |
| trace_not_found (404) | Wrong trace ID | IDs are case-sensitive; verify with the ID from the ingest response |
| Symptom | Cause | Fix |
|---|---|---|
| No license key — running in free tier | No AACYN_LICENSE_KEY set | export AACYN_LICENSE_KEY=<key from email> |
| Ed25519 license expired | License past expiry date | Check email for renewed key, or reactivate at aacyn.com |
| Ed25519 signature invalid | Tampered or corrupted key | Re-copy the key exactly from your license email |
| Auto-renewal unreachable | Optional heartbeat can't reach server | No action needed — Ed25519 license works offline until expiry |
Next steps:
- Set the `AACYN_LICENSE_KEY` environment variable
- Run with `sudo` if eBPF probes are needed (optional)
- Point `AACYN_URL` in your services to the appliance IP
- Binary protocol (`docs/binary-protocol.md`)
- eBPF setup (`docs/ebpf.md`)
- Dashboard: `http://localhost:3001/dashboard`

Questions? Reply to your license email or email support@aacyn.com.
Base URL: `http://<your-appliance>:3001`

All endpoints accept and return JSON unless otherwise noted. All timestamps are Unix epoch milliseconds unless marked `Ns` (nanoseconds).
GET /health

Returns the current health status of the aacyn appliance. Use this as your load balancer health check endpoint.
Request: No body required.
Response (200):
{
"status": "ok",
"version": "0.1.0",
"uptime": 86400000
}
| Field | Type | Description |
|---|---|---|
| status | "ok" \| "degraded" \| "down" | Current system health |
| version | string | Semantic version of the running appliance |
| uptime | number | Milliseconds since server start |
When to use: Call this from your monitoring system every 30s. If `status` is not `"ok"`, page on-call.
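That monitoring loop can be sketched as follows. The `page` callback is a placeholder for whatever alerting hook you use (PagerDuty, a Slack webhook, etc.); the endpoint and response shape are as documented above.

```typescript
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";

interface Health { status: "ok" | "degraded" | "down"; version: string; uptime: number }

// Polls /health; invokes `page` (your alerting hook) on anything but "ok".
async function checkHealth(page: (msg: string) => void): Promise<Health | null> {
  try {
    const res = await fetch(`${AACYN_URL}/health`);
    const health = (await res.json()) as Health;
    if (health.status !== "ok") page(`aacyn health is "${health.status}"`);
    return health;
  } catch {
    // Connection refused / timeout: the appliance itself is unreachable.
    page("aacyn /health unreachable");
    return null;
  }
}

// Every 30s, as suggested above:
// setInterval(() => checkHealth(console.error), 30_000);
```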
GET /v1/license/status

Returns the current license validation state.
Response (200):
{
"valid": true,
"tier": "pro",
"expiresAt": 1776357214,
"daysRemaining": 32,
"reason": "ed25519_verified"
}
| Field | Type | Description |
|---|---|---|
| valid | boolean | Whether the appliance has an active license |
| tier | string | Current tier: free, pro, team, or enterprise |
| expiresAt | number? | Unix epoch seconds when the license expires |
| daysRemaining | number? | Days until expiry |
| reason | string | One of: ed25519_verified, heartbeat_active, heartbeat_grace, no_license |
Offline operation: The license is verified locally using Ed25519 cryptographic signatures. No network requests are made. If the appliance has internet access, it optionally auto-renews the license before expiry.
aacyn has two ingestion endpoints. Use JSON batch for simplicity or binary for maximum throughput.
POST /ingest/batch — JSON Batch Ingestion

Accepts an array of RED (Rate/Error/Duration) metric events. This is the recommended starting point for most integrations.
Request:
{
"events": [
{
"traceId": "abc-123",
"service": "payment-api",
"durationMs": 42.5,
"isError": false,
"timestamp": 1710000000000
}
]
}
| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| traceId | string | Yes | Non-empty | Unique identifier for the trace/request |
| service | string | Yes | Non-empty | Name of the originating service |
| durationMs | number | Yes | ≥ 0 | Request duration in milliseconds |
| isError | boolean | Yes | — | Whether this request resulted in an error |
| timestamp | number | Yes | > 0 | Unix epoch milliseconds |
Response (202 Accepted):
{
"accepted": 1,
"timestamp": 1710000002000
}
| Field | Type | Description |
|---|---|---|
| accepted | number | Number of events written to the columnar store |
| timestamp | number | Server timestamp at time of acceptance |
Error Responses:
| Status | Cause | Fix |
|---|---|---|
| 422 | Schema validation failure | Check that all 5 required fields are present and correctly typed |
| 500 | Internal store error | Check server logs; the native store may have reached capacity |
Performance: JSON batch ingestion sustains ~314K events/sec. For higher throughput, use binary ingestion.
POST /ingest/binary — Binary FlatBuffer Ingestion

Accepts a raw FlatBuffer binary payload for zero-parse, zero-copy ingestion. This is the high-performance path.
Request:
Content-Type: application/octet-stream
Body: Raw FlatBuffer binary (TelemetryBatch schema)
Response (202 Accepted):
{
"accepted": 100,
"timestamp": 1710000002000,
"mode": "binary"
}
Error Responses:
| Status | Cause | Fix |
|---|---|---|
| 400 | Buffer too small (< 8 bytes) | Ensure payload meets minimum size |
| 501 | Native store not available | Run just build-native to compile libaacyn |
Performance: Binary ingestion sustains 5.09M events/sec with 16ms p99 latency. See `docs/binary-protocol.md` for the FlatBuffer schema and payload generation.
POST /v1/events — Structured Telemetry Events

Accepts richly-typed telemetry events with metric, trace, and log payloads. This is the full-fidelity ingestion path for when you need more than RED metrics.
Request:
[
{
"id": "01234567-89ab-cdef-0123-456789abcdef",
"timestamp": 1710000000000000000,
"kind": "metric",
"service": "payment-api",
"host": "prod-01",
"tags": { "region": "us-east-1", "env": "production" },
"metric": {
"name": "request_duration",
"value": 42.5,
"unit": "ms",
"type": "histogram"
}
}
]
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique event ID (UUIDv7 recommended for time-sortability) |
| timestamp | number | Yes | Unix epoch nanoseconds |
| kind | "metric" \| "trace" \| "log" | Yes | Event classification |
| service | string | Yes | Originating service name |
| host | string | Yes | Host identifier |
| tags | Record<string, string> | Yes | Arbitrary key-value tags for filtering |
| metric | MetricPayload | — | Present when kind === "metric" |
| trace | TracePayload | — | Present when kind === "trace" |
| log | LogPayload | — | Present when kind === "log" |
MetricPayload:
| Field | Type | Description |
|---|---|---|
| name | string | Metric name (e.g., request_duration) |
| value | number | Metric value |
| unit | string | Unit of measurement (e.g., ms, bytes, count) |
| type | "gauge" \| "counter" \| "histogram" | Aggregation semantics |
TracePayload:
| Field | Type | Description |
|---|---|---|
| traceId | string | Distributed trace ID |
| spanId | string | Span ID within the trace |
| parentSpanId | string? | Parent span (omit for root spans) |
| operationName | string | Name of the operation |
| duration | number | Duration in nanoseconds |
| status | "ok" \| "error" | Span outcome |
| attributes | Record | Structured span attributes |
LogPayload:
| Field | Type | Description |
|---|---|---|
| level | "debug" \| "info" \| "warn" \| "error" \| "fatal" | Log severity |
| message | string | Log message |
| attributes | Record | Structured log attributes |
Response (200):
{
"accepted": 1,
"timestamp": 1710000002000
}
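A small builder for the metric case keeps the nanosecond timestamp and payload nesting straight. A sketch, with field names matching the tables above; `randomUUID()` produces UUIDv4 and is only a stand-in for the recommended UUIDv7:

```typescript
import { randomUUID } from "node:crypto";

interface MetricEvent {
  id: string;
  timestamp: number;
  kind: "metric";
  service: string;
  host: string;
  tags: Record<string, string>;
  metric: { name: string; value: number; unit: string; type: "gauge" | "counter" | "histogram" };
}

function metricEvent(
  name: string,
  value: number,
  unit: string,
  type: MetricEvent["metric"]["type"] = "gauge",
  tags: Record<string, string> = {},
): MetricEvent {
  return {
    id: randomUUID(), // v4; the docs recommend UUIDv7 for time-sortability
    timestamp: Date.now() * 1_000_000, // this endpoint takes nanoseconds, not ms
    kind: "metric",
    service: process.env.SERVICE_NAME || "my-service",
    host: process.env.HOSTNAME || "localhost",
    tags,
    metric: { name, value, unit, type },
  };
}

// POST an array of events, exactly as in the request example above:
// await fetch("http://localhost:3001/v1/events", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify([metricEvent("request_duration", 42.5, "ms", "histogram")]),
// });
```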
POST /v1/query — SQL Query

Executes a SQL query against the columnar store.
Request:
{
"sql": "SELECT service, count(*) FROM events WHERE isError = true GROUP BY service",
"timeRange": {
"startNs": 1710000000000000000,
"endNs": 1710086400000000000
},
"limit": 100
}
| Field | Type | Required | Description |
|---|---|---|---|
| sql | string | Yes | SQL query string |
| timeRange | object | — | Optional time range filter |
| timeRange.startNs | number | — | Start of range (nanoseconds) |
| timeRange.endNs | number | — | End of range (nanoseconds) |
| limit | number | — | Maximum rows to return |
Response (200):
{
"columns": ["service", "count"],
"rows": [
["payment-api", 42],
["auth-service", 7]
],
"durationNs": 150000,
"totalRows": 2
}
| Field | Type | Description |
|---|---|---|
| columns | string[] | Column names in the result set |
| rows | unknown[][] | Row data matching the column order |
| durationNs | number | Query execution time in nanoseconds |
| totalRows | number | Total matching rows (before limit) |
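Since the response is columnar (`columns` plus row arrays), client code usually wants to zip it back into row objects. A minimal sketch against the request and response shapes documented above:

```typescript
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";

interface QueryResult { columns: string[]; rows: unknown[][]; durationNs: number; totalRows: number }

// Runs a SQL query and zips the columnar response into row objects.
async function query(sql: string, limit = 100): Promise<Record<string, unknown>[]> {
  const res = await fetch(`${AACYN_URL}/v1/query`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sql, limit }),
  });
  if (!res.ok) throw new Error(`query failed: HTTP ${res.status}`);
  const { columns, rows } = (await res.json()) as QueryResult;
  return rows.map(row => Object.fromEntries(columns.map((col, i) => [col, row[i]])));
}

// const errorsByService = await query(
//   "SELECT service, count(*) FROM events WHERE isError = true GROUP BY service");
```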
GET /query/trace/:traceId — Trace Lookup

Performs an O(1) hash lookup for a specific trace by ID.
Request: Pass the trace ID as a URL parameter.
GET /query/trace/abc-123
Response (200):
{
"traceId": "abc-123",
"service": "payment-api",
"durationMs": 42.5,
"isError": false,
"timestamp": 1710000000000
}
Response (404):
{
"error": "trace_not_found",
"traceId": "abc-123"
}
Why O(1)? The native store maintains a hash index on `traceId` for instant lookups regardless of store size. This is separate from the columnar layout used for analytical scans.
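The shape of that data structure can be sketched in toy form: columnar arrays for scans, plus a hash map from trace ID to row index for point lookups. The real store does this in C over mmap'd memory; this sketch only illustrates why the lookup cost is independent of store size.

```typescript
// Toy columnar store: one array per column, plus a traceId → row index.
class MiniStore {
  private durationMs: number[] = [];
  private isError: boolean[] = [];
  private index = new Map<string, number>(); // traceId → row number

  insert(traceId: string, durationMs: number, isError: boolean): void {
    this.index.set(traceId, this.durationMs.length);
    this.durationMs.push(durationMs);
    this.isError.push(isError);
  }

  // O(1): one hash lookup, regardless of how many rows are stored.
  lookup(traceId: string): { durationMs: number; isError: boolean } | null {
    const row = this.index.get(traceId);
    if (row === undefined) return null;
    return { durationMs: this.durationMs[row], isError: this.isError[row] };
  }

  // O(n) analytical scan over one column — the path SIMD accelerates.
  errorCount(): number {
    return this.isError.reduce((n, e) => n + (e ? 1 : 0), 0);
  }
}
```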
POST /webhooks/stripe

Receives Stripe webhook events for license lifecycle management. This endpoint is called by Stripe, not by your application.
Handled Events:
- checkout.session.completed → Mints Ed25519 license with tier + expiry, emails it to the customer
- customer.subscription.deleted → Cancels the license (the next renewal won't generate a new key)
- customer.subscription.updated → Handles tier changes and reactivation

Response: 200 OK or 400 Error
Security: In production, configure `STRIPE_WEBHOOK_SECRET` to verify webhook signatures. Without it, any POST to this endpoint would be processed.
Every environment variable that aacyn reads, what it does, and what happens if you don't set it.
| Variable | Required | Default | Component |
|---|---|---|---|
| PORT | No | 3001 | API Server |
| AACYN_LICENSE_KEY | No | — | License |
| AACYN_HEARTBEAT_URL | No | Production URL | License |
| AACYN_BPF_OBJ | No | Auto-detected | eBPF |
| LICENSE_SALT | Yes (prod) | aacyn-dev-salt | License |
| SOVEREIGN_PRIVATE_KEY | No | Auto-generated | License |
| STRIPE_SECRET_KEY | Yes (prod) | — | Billing |
| STRIPE_WEBHOOK_SECRET | Yes (prod) | — | Billing |
| STRIPE_SOVEREIGN_TIER_PRICE_ID | Yes (prod) | — | Billing |
| RESEND_API_KEY | No | — | Email |
| EMAIL_FROM | No | aacyn <license@aacyn.com> | Email |
PORT

The TCP port the aacyn API server listens on.
| Default | 3001 |
| Example | PORT=8080 |
| If unset | Listens on port 3001 |
Tip: If you're running behind a reverse proxy (nginx, Caddy), keep the default and proxy from 443 → 3001.
AACYN_LICENSE_KEY

The Ed25519 signed license key your appliance uses for offline verification. You received this in your purchase email.
| Default | None (free tier) |
| Format | base64url(payload).base64url(signature) |
| Example | AACYN_LICENSE_KEY=eyJlIjoiLi4uIiwidGllciI6InBybyJ9.sig... |
| If unset | Server runs in free tier — dashboard, JSON ingestion, and Golden Signals enabled |
How it works: On startup, the appliance verifies the Ed25519 signature locally using the embedded public key. The signed payload contains your email, tier (free/pro/team/enterprise), and expiry date. Zero network requests. If the appliance has internet access, it optionally pings the heartbeat server to auto-renew the license before expiry.
AACYN_PUBLIC_KEY

PEM-encoded Ed25519 public key used to verify license signatures locally.
| Default | None (Ed25519 verification disabled) |
| Format | PEM string |
| If unset | Falls back to heartbeat-based validation |
AACYN_HEARTBEAT_URL

URL of the Cloudflare Worker that handles optional license auto-renewal.
| Default | https://aacyn-heartbeat.nmeshed-test.workers.dev |
| Example | AACYN_HEARTBEAT_URL=https://heartbeat.aacyn.com |
| If unset | Uses the default production URL |
Note: The heartbeat is optional. The Ed25519 license works fully offline. The heartbeat is a convenience for auto-renewing licenses before expiry — if the appliance can reach the internet, it will fetch a renewed license automatically.
LICENSE_SALT

Secret salt used to derive heartbeat keys from Stripe customer IDs via SHA-256(salt + customerId).
| Default | aacyn-dev-salt (development only) |
| Format | Any string; 64 hex chars recommended for production |
| Example | LICENSE_SALT=9b4f5b5d2ec489d1... |
| If unset | Uses the dev salt — keys will not match production Worker |
[!CAUTION] This value must be identical between the API server and the Cloudflare Worker. If they differ, the server will derive different license keys than the Worker expects, and license validation will silently fail.
SOVEREIGN_PRIVATE_KEY

Base64-encoded Ed25519 private key (PEM format) used to mint offline-verifiable license keys.
| Default | Auto-generates an ephemeral keypair on startup (development) |
| Format | Base64-encoded PEM |
| If unset | Generates a new keypair every restart — licenses from previous runs become unverifiable |
Generate for production:
node -e "const c=require('crypto'); const {privateKey}=c.generateKeyPairSync('ed25519'); console.log(privateKey.export({type:'pkcs8',format:'pem'}).toString('base64'))"
AACYN_BPF_OBJ

Absolute path to the compiled eBPF probe object file.
| Default | <project_root>/build/aacyn_probes.bpf.o (auto-detected) |
| Example | AACYN_BPF_OBJ=/opt/aacyn/build/aacyn_probes.bpf.o |
| If unset | Searches for the BPF object relative to the project root |
| If file not found | eBPF is silently disabled; server runs normally |
Requirements for eBPF: Linux kernel 5.8+, `CAP_BPF` (run as root), and `libbpf-dev` installed. See `docs/ebpf.md` for setup.
STRIPE_SECRET_KEY

Your Stripe API secret key. Used server-side to create Checkout Sessions.
| Default | None |
| Format | sk_live_... (production) or sk_test_... (sandbox) |
| If unset | The /api/checkout endpoint returns HTTP 503 |
STRIPE_WEBHOOK_SECRET

Signing secret for verifying Stripe webhook payloads. Prevents spoofed webhook events.
| Default | None |
| Format | whsec_... |
| If unset | Webhooks are processed without signature verification (unsafe in production) |
Where to find: Stripe Dashboard → Developers → Webhooks → your endpoint → Signing secret → Reveal
STRIPE_SOVEREIGN_TIER_PRICE_ID

The Stripe Price ID for the Pro tier subscription (formerly "Sovereign").
| Default | None |
| Format | price_... |
| If unset | Checkout sessions default to Pro tier |
STRIPE_PRO_PRICE_ID / STRIPE_TEAM_PRICE_ID / STRIPE_ENTERPRISE_PRICE_ID

Stripe Price IDs for each tier. Used by the webhook handler to determine which tier to embed in the Ed25519 license.
| Default | None |
| Format | price_... |
| If unset | All checkouts default to Pro tier |
Where to find: Stripe Dashboard → Products → aacyn [Tier Name] → Pricing → Price ID
RESEND_API_KEY

API key for sending license delivery emails via Resend.
| Default | None |
| Format | re_... |
| If unset | License keys are logged to the server console instead of emailed |
Development: Leave unset. License keys will be printed to stdout, which is sufficient for testing.
EMAIL_FROM

The sender address for license emails.
| Default | aacyn <license@aacyn.com> |
| Example | EMAIL_FROM=aacyn <license@send.aacyn.com> |
| Constraint | Must match a verified domain in your Resend account |
aacyn uses three environment files, each for a different context:
.env.production → Real secrets for deployed appliance (NEVER committed)
.env.local → Local development defaults (gitignored)
.env.test → Deterministic values for bun test (committed)
Load with: source .env.production && bun run src/index.ts
Performance: 5.09M events/sec with 16ms p99 latency — 16× faster than JSON ingestion.
Use this protocol when JSON parsing overhead is unacceptable. The binary path achieves zero-parse, zero-copy ingestion by passing a raw memory pointer from Bun directly into the C columnar store via `bun:ffi`.
| | JSON (/ingest/batch) | Binary (/ingest/binary) |
|---|---|---|
| Throughput | ~314K events/sec | 5.09M events/sec |
| Latency (p99) | ~219ms | 16ms |
| Integration effort | Drop-in HTTP POST | Requires FlatBuffer tooling |
| Use case | Application telemetry | High-frequency infrastructure metrics, log pipelines |
Rule of thumb: If you're ingesting < 100K events/sec, use JSON. It's simpler and fast enough. Switch to binary when you're pushing the limits.
The binary protocol uses FlatBuffers for zero-copy serialization. Each payload is a TelemetryBatch containing an array of fixed-size event records.
Each event is exactly 16 bytes in the wire format:
Offset Size Type Field
────── ──── ────── ─────────────
0 8 u64 timestamp (Unix epoch ms)
8 4 f32 durationMs
12 2 u16 flags (bit 0 = isError)
14 2 u16 padding (alignment)
Why 16 bytes? Cache-line aligned. On modern CPUs, 16B events fit exactly 4 per cache line (64B), maximizing L1/L2 cache efficiency during SIMD scans.
Offset Size Type Field
────── ──── ────── ─────────────
0 4 u32 magic (0xAACE0001)
4 4 u32 event_count
8 N×16 bytes event_records[]
Total payload size: 8 + (event_count × 16) bytes
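Given those two layouts, a payload can be sanity-checked before sending (or after receiving). A minimal sketch; the magic value, header size, and 16-byte record size come from the tables above:

```typescript
// Validates the header of an aacyn binary payload.
const MAGIC = 0xaace0001;
const HEADER_SIZE = 8;
const EVENT_SIZE = 16;

function validatePayload(buf: Buffer): { eventCount: number } {
  if (buf.length < HEADER_SIZE) throw new Error("buffer too small (< 8 bytes)");
  if (buf.readUInt32LE(0) !== MAGIC) throw new Error("bad magic");
  const eventCount = buf.readUInt32LE(4);
  // Total size must be exactly header + N fixed-size records.
  const expected = HEADER_SIZE + eventCount * EVENT_SIZE;
  if (buf.length !== expected)
    throw new Error(`length mismatch: got ${buf.length}, expected ${expected}`);
  return { eventCount };
}
```

The magic is read little-endian, matching the `writeUInt32LE` calls in the payload builders below.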
cd /opt/aacyn
bun run benchmarks/generate_payload.ts
Expected output:
FlatBuffer Payload Generated
Events: 100
Bytes: 1656 (1.62 KB)
Output: benchmarks/payload.bin
curl -X POST http://localhost:3001/ingest/binary \
-H "Content-Type: application/octet-stream" \
--data-binary @benchmarks/payload.bin
Expected response (202):
{
"accepted": 100,
"timestamp": 1710000002000,
"mode": "binary"
}
What if it fails?
- `400 Buffer too small` → Payload must be ≥ 8 bytes (header only = 8 bytes)
- `501 Binary ingestion requires native store` → Run `just build-native` first
function buildBinaryPayload(events: Array<{timestamp: number, durationMs: number, isError: boolean}>): Buffer {
const MAGIC = 0xAACE0001;
const headerSize = 8;
const eventSize = 16;
const buf = Buffer.alloc(headerSize + events.length * eventSize);
// Header
buf.writeUInt32LE(MAGIC, 0);
buf.writeUInt32LE(events.length, 4);
// Events
for (let i = 0; i < events.length; i++) {
const offset = headerSize + i * eventSize;
buf.writeBigUInt64LE(BigInt(events[i].timestamp), offset);
buf.writeFloatLE(events[i].durationMs, offset + 8);
buf.writeUInt16LE(events[i].isError ? 1 : 0, offset + 12);
buf.writeUInt16LE(0, offset + 14); // padding
}
return buf;
}
// Usage
const payload = buildBinaryPayload([
{ timestamp: Date.now(), durationMs: 42.5, isError: false },
{ timestamp: Date.now(), durationMs: 1250, isError: true },
]);
await fetch("http://localhost:3001/ingest/binary", {
method: "POST",
headers: { "Content-Type": "application/octet-stream" },
body: payload,
});
import struct
import requests
def build_binary_payload(events):
"""Build an aacyn binary payload from a list of event dicts."""
MAGIC = 0xAACE0001
header = struct.pack("<II", MAGIC, len(events))
body = b"".join(
struct.pack("<QfHH",
event["timestamp"],
event["duration_ms"],
1 if event.get("is_error") else 0,
0 # padding
)
for event in events
)
return header + body
payload = build_binary_payload([
{"timestamp": 1710000000000, "duration_ms": 42.5, "is_error": False},
{"timestamp": 1710000001000, "duration_ms": 1250.0, "is_error": True},
])
requests.post(
"http://localhost:3001/ingest/binary",
data=payload,
headers={"Content-Type": "application/octet-stream"},
)
Key insight: There is no serialization/deserialization step. The raw bytes from the HTTP request body are passed as a pointer directly to the C function, which copies them into the mmap'd columnar store. This is why binary ingestion is 16× faster than JSON.
Zero-instrumentation kernel telemetry. eBPF probes intercept network syscalls at the kernel level and pipe events directly into the columnar store — no application changes required.
This is an optional advanced feature. The server works fully without eBPF.
| Probe | Kernel Attachment | What It Records |
|---|---|---|
| trace_connect_enter | tracepoint/syscalls/sys_enter_connect | Outbound TCP connections (dest IP, port, process), stashes fd + timestamp for exit |
| trace_connect_exit | tracepoint/syscalls/sys_exit_connect | Connection latency, source IP via fd→socket→sock→skc_rcv_saddr CO-RE walk |
| trace_tcp_sendmsg | kprobe/tcp_sendmsg | Bytes sent, source + dest IP from struct sock * param |
| aacyn_auto (accept4) | tracepoint/syscalls/sys_enter_accept4 | Inbound connections — service auto-discovery |
Translation: Every time any process on the machine opens a network connection or sends data, aacyn records it with nanosecond precision — including the process's container IP for topology graph merging — without touching application code.
| Feature | V1 | V2 |
|---|---|---|
| Ring Buffers | 1 × 256KB events_ringbuf | 2: standard_events (256KB) + critical_errors (64KB) |
| Priority Routing | None — errors mixed with telemetry | emit_event(is_critical) routes by severity |
| Backpressure | Silent drops, invisible | Per-CPU atomic drop_counters, surfaced in API + HUD |
| Source IP | Not tracked | skc_rcv_saddr via CO-RE socket introspection |
| Topology Merge | 3 disconnected subgraphs | IP-correlated connected graph |
Event record (network_event):

struct network_event {
__u64 timestamp_ns; /* bpf_ktime_get_ns() monotonic clock */
__u32 pid; /* Process ID */
__u32 tgid; /* Thread Group ID */
__u32 dest_ip; /* Destination IPv4 (network byte order) */
__u32 source_ip; /* Source IPv4 — container identity */
__u16 dest_port; /* Destination port (network byte order) */
__u16 status; /* 0=connect, 1=connected, 2=send, 3=connect_failed */
__u64 bytes; /* Bytes sent, or duration_ns on connect-exit */
char comm[16]; /* Process name (TASK_COMM_LEN) */
} __attribute__((packed));
| Map | Type | Size | Purpose |
|---|---|---|---|
| standard_events | RINGBUF | 256KB | High-volume telemetry (connects, sends) |
| critical_errors | RINGBUF | 64KB | Failed connects (non-zero, non-EINPROGRESS retval) |
| drop_counters | PERCPU_ARRAY | 2 keys × N CPUs | Index 0: standard drops, Index 1: critical drops |
| connect_state | HASH | 65536 entries | Per-connect state: stashes fd + timestamp + dest for exit handler |
In Docker, source_comm (connect-side) and portNames (accept-side) produce different node IDs for the same container — creating disconnected topology subgraphs.
trace_connect_exit resolves the source IP by walking:

current → task_struct.files → files_struct.fdt → fdtable.fd[saved_fd]
→ file.private_data → socket.sk → sock.__sk_common.skc_rcv_saddr
This reads the socket's local IPv4 address after connect() completes (when the address is bound). Each Docker container has a unique bridge IP, giving a stable node identity.
// Build ip→comm from all edges where source_ip is nonzero
const ipToSource = new Map<string, string>();
for (const edge of edges)
if (edge.sourceIp !== "0.0.0.0")
ipToSource.set(edge.sourceIp, edge.source);
// Rename targets: dest_ip → resolved source_comm
for (const edge of edges) {
const resolved = ipToSource.get(edge.destIp);
if (resolved) edge.target = resolved;
}
Result: nginx → api (node) becomes nginx → node because edge "node" reports source_ip=172.18.0.3, and the nginx edge targets dest_ip=172.18.0.3.
| Requirement | How to Check | How to Install |
|---|---|---|
| Linux kernel 5.8+ | uname -r | Upgrade kernel or OS |
| CONFIG_BPF=y | zgrep CONFIG_BPF /proc/config.gz | Rebuild kernel (most distros ship with BPF) |
| BTF (BPF Type Format) | ls /sys/kernel/btf/vmlinux | Install linux-headers-$(uname -r) |
| clang | clang --version | apt install clang llvm |
| libbpf-dev | dpkg -l libbpf-dev | apt install libbpf-dev libelf-dev |
| root or CAP_BPF | whoami | Run as root or setcap cap_bpf+ep |
# Install all prerequisites on Ubuntu/Debian
sudo apt update && sudo apt install -y clang llvm libbpf-dev libelf-dev linux-headers-$(uname -r)
cd native
make clean && make EBPF=1
Expected output:
✓ BPF object compiled: ../build/aacyn_probes.bpf.o
cc ... -o ../build/libaacyn.so libaacyn.c -lbpf -lelf -lz
[!CAUTION] Linker ordering matters. `-lbpf -lelf -lz` must appear AFTER the source file. The Makefile handles this via `LDLIBS`.
cd ts/apps/api
sudo bun run src/index.ts
Expected log:
[libaacyn] V2 eBPF probes attached: /opt/aacyn/build/aacyn_probes.bpf.o
standard_events (256KB) + critical_errors (64KB) + drop_counters (Per-CPU)
eBPF probes run in kernel space and add negligible overhead:
| Metric | Without eBPF | With eBPF | Delta |
|---|---|---|---|
| Binary ingestion (evt/sec) | 5,089,364 | 4,882,072 | -4% (noise) |
| p95 latency | 12.73ms | 12.67ms | No impact |
| p99 latency | 16.12ms | 15.78ms | No impact |
Conclusion: eBPF probes are free. The 4% variance is within run-to-run noise.
make clean && make # builds without EBPF=1, uses stub functions
The server logs [eBPF] No BPF object found and continues normally.
| Channel | Contact | Use For |
|---|---|---|
| Technical support | support@aacyn.com | Appliance setup, configuration, troubleshooting |
| Security | security@aacyn.com | Vulnerability reports, security concerns |
| Feedback | feedback@aacyn.com | Feature requests, bug reports, general feedback |
| Billing | Stripe Customer Portal | Manage subscription, invoices, payment methods |
Many issues can be resolved with a quick check:
# Check if the process is running
pgrep -a bun
# Check the health endpoint
curl -s http://localhost:3001/health | jq .
If the health check returns Connection refused, the server isn't running. Restart it:
cd /opt/aacyn/ts/apps/api
bun run src/index.ts
# Send a test event
curl -X POST http://localhost:3001/ingest/batch \
-H "Content-Type: application/json" \
-d '{"events":[{"traceId":"test","service":"diag","durationMs":1,"isError":false,"timestamp":'$(date +%s000)'}]}'
| Response | Meaning |
|---|---|
| 202 {"accepted":1} | Ingestion is working — check your application code |
| 422 | Request body doesn't match the event schema |
| 500 | Internal error — the store may be full; restart to clear |
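The two checks above can be combined into one round-trip script: ingest a diagnostic event, then read it back by trace ID. A sketch using the endpoints documented earlier; it only reports, it doesn't fix anything.

```typescript
const AACYN_URL = process.env.AACYN_URL || "http://localhost:3001";

// Ingest one diagnostic event, then read it back via trace lookup.
async function selfTest(): Promise<boolean> {
  const traceId = `diag-${Date.now()}`;
  const ingest = await fetch(`${AACYN_URL}/ingest/batch`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      events: [{ traceId, service: "diag", durationMs: 1, isError: false, timestamp: Date.now() }],
    }),
  });
  if (ingest.status !== 202) {
    console.error(`ingest failed: HTTP ${ingest.status}`);
    return false;
  }
  const readback = await fetch(`${AACYN_URL}/query/trace/${traceId}`);
  if (readback.status !== 200) {
    console.error(`readback failed: HTTP ${readback.status}`);
    return false;
  }
  console.log("aacyn ingest + query OK");
  return true;
}
```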
curl -s http://localhost:3001/v1/license/status | jq .
| Status | Meaning | Action |
|---|---|---|
| ed25519_verified | License verified offline | Issue is elsewhere |
| no_license | Key not set | Set AACYN_LICENSE_KEY environment variable |
| heartbeat_grace | Heartbeat auto-renewal unreachable | No action needed — license works offline until expiry |
| expired | License past expiry date | Check email for renewed key, or reactivate at aacyn.com |
We take security seriously. If you discover a vulnerability, report it privately to security@aacyn.com.
The aacyn appliance runs entirely on your hardware. There are no mandatory external dependencies.
The only optional outbound connection is the license heartbeat: aacyn-heartbeat.nmeshed-test.workers.dev checks for a renewed license key. If unreachable, the appliance continues operating normally with the existing license until its expiry date.