Deployment status:

Aurora is available today as a self-managed deployment that lives entirely inside your environment (cloud or on-prem). A managed service is planned, but not yet available.

Self-managed Aurora deployment

Aurora runs entirely inside your AWS, GCP, or Azure environment. Key integrations include a browser extension for endpoint telemetry and network log ingestion from gateways or proxies to discover unmanaged accounts. Data residency is preserved with zero exfiltration: no raw logs or identity data leave your cloud.

Privacy, residency, and deployment

  • 100% customer-side processing: No raw logs or identity data leave your cloud.
  • Configurable retention: choose 30- or 90-day data retention.
  • Deployment options: a self-managed stack today, with a marketplace-managed service planned; both keep data in the same region as your workloads.

Aurora does not connect to identity providers (IdPs) or HR systems; masked identities (e.g., UUIDs) are sufficient to detect and map risk, in keeping with Aurora’s privacy-first design.
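For illustration, masking of this kind can be sketched as deriving a stable, opaque UUID from each raw identity. The namespace salt and helper name below are assumptions for the sketch, not Aurora's actual scheme:

```python
import uuid

# Hypothetical namespace salt; a real deployment would keep this value
# secret inside the customer environment so raw identities never leave it.
MASKING_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "aurora.internal.salt")

def mask_identity(raw_identity: str) -> str:
    """Map a raw identity (e.g., an email) to a stable, opaque UUID."""
    return str(uuid.uuid5(MASKING_NAMESPACE, raw_identity))
```

Because the mapping is deterministic, risk can still be aggregated per identity across events without ever revealing who the identity is.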

Once you have Aurora running

Customers receive a Docker Compose bundle with preconfigured images for the ingest service, ClickHouse, and the browser extension. Build and deploy the stack (locally) with docker compose up --build to mirror production flows end-to-end.
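To give a sense of the bundle's shape, a minimal sketch of what such a Compose file might contain follows. Service names, build paths, and ports here are assumptions based on the defaults mentioned in this guide; the bundle ships preconfigured.

```yaml
# Hypothetical layout for illustration only.
services:
  server:              # ingest service, listens on :8080
    build: ./ingest
    ports:
      - "8080:8080"
    depends_on:
      - clickhouse
  clickhouse:
    image: clickhouse/clickhouse-server
    ports:
      - "8123:8123"    # HTTP interface used for /ping and inserts
```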

Evaluation workflow

  1. Establish a clean baseline
    • Start the stack and confirm the ingest service logs web server listening on :8080.
    • Verify ClickHouse is reachable with curl http://server:8123/ping.
  2. Extension behavior
    • Trigger a few browser navigations and inspect network traffic in DevTools to confirm POST requests to /ingest include browser_id, method, url, and timestamp fields.
    • Confirm the extension only targets the configured ingest origin (http://server:8080/ingest) and that the browser_id remains stable across page loads.
  3. Ingest API constraints
    • Send a non-POST request to /ingest and ensure the service responds with HTTP 405 and does not forward data to storage.
    • Submit a malformed JSON body and expect a 502 Bad Gateway response from the ClickHouse insert path, with no new rows persisted.
  4. Storage integrity
    • Query the events table for recent entries to ensure payload fields map cleanly to browser_id, method, url, and timestamp columns.
    • Validate that multiple events from the same browser arrive in order when sorted by (browser_id, timestamp).
  5. Reporting
    • Capture sample requests and responses for /ingest, along with ClickHouse query outputs, to anchor any findings.
    • File issues with reproducible steps, affected endpoints, and suggested mitigations.
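The method, payload, and ordering checks in steps 2–4 can be sketched as small helper functions. The field names follow the ones listed above; the helpers themselves are illustrative test utilities, not part of Aurora:

```python
import json

# Fields the extension is expected to include in each /ingest POST.
REQUIRED_FIELDS = {"browser_id", "method", "url", "timestamp"}

def expected_status(method: str) -> int:
    """Mirror the /ingest contract: only POST is accepted (else 405)."""
    return 200 if method == "POST" else 405

def payload_ok(body: str) -> bool:
    """A payload is well-formed if it is JSON containing all required fields."""
    try:
        event = json.loads(body)
    except json.JSONDecodeError:
        return False
    return isinstance(event, dict) and REQUIRED_FIELDS <= set(event)

def in_order(events: list[dict]) -> bool:
    """Check events arrive in order when sorted by (browser_id, timestamp)."""
    keys = [(e["browser_id"], e["timestamp"]) for e in events]
    return keys == sorted(keys)
```

Helpers like these make it easy to turn the manual workflow into repeatable assertions against captured requests and ClickHouse query output.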

Review checklist

  • Extension only posts to the expected ingest origin and uses HTTPS where applicable in production.
  • browser_id is generated once and stored persistently; no sensitive data is logged or leaked.
  • /ingest rejects non-POST methods and surfaces clear errors for malformed inputs.
  • ClickHouse credentials (if configured) are required for inserts and are never hard-coded.
  • Traffic records in the events table match captured requests exactly and include millisecond precision timestamps.
  • Logging avoids request bodies or secrets, focusing on operational signals.
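The last checklist item can be enforced with a redaction pass before any record reaches the logs. The set of sensitive keys below is an assumption for illustration; a real deployment would tailor it to its own log schema:

```python
# Hypothetical set of keys to strip from log records before emission.
SENSITIVE_KEYS = {"body", "password", "token", "authorization"}

def redact(record: dict) -> dict:
    """Replace request bodies and secrets, keeping operational signals."""
    return {
        key: ("[redacted]" if key.lower() in SENSITIVE_KEYS else value)
        for key, value in record.items()
    }
```

Applied to an ingest log entry, this keeps fields like method and path while ensuring the payload itself is never written to disk.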

Managed service (coming soon)

A fully managed deployment delivered through cloud marketplaces is on the roadmap. When available, Aurora will handle control-plane updates while you retain full control of data residency.

  • Marketplace-based procurement.
  • Control-plane patches, scaling, and monitoring handled by Aurora.
  • Data processing, logs, and identity signals remain inside your VPC with zero exfiltration.
  • Ideal for teams that want rapid time-to-value with minimal operational overhead.
  • A SaaS offering is planned as well.