# Architecture
Understanding how Atheon differs from traditional analytics tools is key to unlocking its full potential.
## The Blind Spot of Traditional Analytics
Standard analytics tools (like Google Analytics or Mixpanel) track clicks and pageviews from the user's browser. They were built for static websites, which means:
- They have zero visibility into the actual conversational intent of your users.
- They cannot track backend AI Agent performance (latency, token usage, tool calls, success rates).
- Attempting to scrape DOM elements to understand AI responses risks exposing raw PII and sensitive chat logs.
## The Atheon Solution: Server-Side Decisioning
Atheon splits the responsibility between your server and the client to provide deep insights without interfering with live requests.
```mermaid
flowchart TD
    User[User] -->|Types Query| Input[atheon-input]
    Input -->|User Query| Backend[Backend]
    Backend -->|provider, model, input, output, tokens| AtheonSDK[Atheon Codex SDK]
    AtheonSDK -->|Intent Fingerprinting & Agent Analytics| Dashboard[Atheon Gateway]
    Backend -->|interaction_id, prompt_hash, fingerprint| Output[atheon-output]
    Output -->|Drop-offs, Funnels & Journey Tracking| Dashboard
```
### 1. The Logic Layer (Server-Side)
When an LLM interaction occurs, your backend instruments it using the Codex SDK. Events are enqueued immediately and flushed to the Gateway in a background thread/process, so your response time is never affected.
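The "enqueue now, flush later" pattern can be illustrated with a minimal sketch. This is not Atheon's actual implementation — just the standard shape of a fire-and-forget tracker: the request path pays only for a `queue.put()`, and a daemon thread drains the queue (a real SDK would POST batches to the Gateway where this sketch appends to a list).

```python
import queue
import threading

class BackgroundTracker:
    """Minimal sketch of a non-blocking event tracker (illustrative only)."""

    def __init__(self) -> None:
        self._queue: "queue.Queue[dict]" = queue.Queue()
        self.flushed: list[dict] = []  # stand-in for "sent to the Gateway"
        threading.Thread(target=self._worker, daemon=True).start()

    def track(self, event: dict) -> None:
        """Hot path: enqueue and return immediately."""
        self._queue.put(event)

    def _worker(self) -> None:
        while True:
            event = self._queue.get()   # blocks until an event arrives
            self.flushed.append(event)  # a real SDK would batch and POST here
            self._queue.task_done()

tracker = BackgroundTracker()
tracker.track({"model": "gpt-x", "tokens": 123})  # placeholder event payload
tracker._queue.join()  # wait for the background flush (for demonstration only)
```

The caller never waits on network I/O; the worst case on the request path is the cost of an in-memory enqueue.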
- Fire-and-forget tracking: `atheon.track()` enqueues a completed interaction in microseconds. Use `atheon.begin()` / `interaction.finish()` for streaming or multi-turn flows where latency is measured automatically.
- Tool & Agent tracking: The `tool` and `agent` decorators/wrappers hook automatically into the active interaction via Python's `ContextVar` or Node's `AsyncLocalStorage`; no manual plumbing is needed throughout your deep call stacks.
- Intent Fingerprinting: The Gateway analyzes the interaction and extracts rich metadata (Persona, Context, Problem), building your Knowledge Graph without storing raw, sensitive logs.
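To make the `ContextVar` mechanism concrete, here is a hedged sketch of how a `tool` decorator can find the active interaction without any explicit plumbing. The names (`begin_interaction`, the interaction dict shape) are illustrative assumptions, not Atheon's real API — the point is only the context-propagation technique.

```python
import contextvars
import functools

# The "active interaction" travels with the execution context, so deeply
# nested calls can find it without passing it as an argument.
_current_interaction: contextvars.ContextVar[dict] = contextvars.ContextVar(
    "current_interaction"
)

def begin_interaction(interaction_id: str) -> dict:
    """Hypothetical helper: open an interaction and make it the active one."""
    interaction = {"id": interaction_id, "tool_calls": []}
    _current_interaction.set(interaction)
    return interaction

def tool(fn):
    """Record each call against whatever interaction is active in this context."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        interaction = _current_interaction.get(None)
        if interaction is not None:
            interaction["tool_calls"].append(fn.__name__)
        return result
    return wrapper

@tool
def search_docs(query: str) -> str:
    return f"results for {query}"

interaction = begin_interaction("int_123")
search_docs("pricing")
# interaction["tool_calls"] now contains "search_docs"
```

Node achieves the same effect with `AsyncLocalStorage`; in both runtimes the context is isolated per request, so concurrent interactions never attribute each other's tool calls.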
### 2. The Presentation Layer (Client-Side)
Your frontend is instrumented using two declarative Web Components:
- `<atheon-input>`: Wraps your user prompt field. It captures upstream intent, typing hesitation, and prompt abandonment before a request is even sent.
- `<atheon-output>`: Wraps the rendered LLM response. By passing the `interaction-id`, `prompt-hash`, and cryptographic `fingerprint` returned by your backend SDK, Atheon securely attributes viewability, copy events, and agent outcomes back to the originating prompt.
Your backend returns the `interaction_id` alongside the LLM response. The frontend passes it to the `<atheon-container>` web component; there is no heavy analytics script blocking the main thread.
- Business Intelligence: By wrapping your UI in `<atheon-container interaction-id="...">`, Atheon securely tracks user journey funnels, drop-off reasons, and actual viewability, bridging the gap between what the backend generated and how the user actually engaged with it.
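The handoff from backend to frontend can be sketched as a response payload that carries the three attribution fields from the diagram alongside the LLM answer. The field names follow the docs; the helper name and values are fabricated placeholders, and the real fingerprint would come from the Codex SDK rather than being passed in.

```python
import hashlib

def build_response(prompt: str, answer: str,
                   interaction_id: str, fingerprint: str) -> dict:
    """Hypothetical handler helper: bundle the answer with the fields the
    <atheon-output> / <atheon-container> components expect."""
    return {
        "answer": answer,
        "interaction_id": interaction_id,
        # One plausible prompt_hash scheme: SHA-256 of the raw prompt.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "fingerprint": fingerprint,  # opaque value from the SDK in practice
    }

payload = build_response(
    prompt="How do refunds work?",
    answer="Refunds are processed within 5 business days.",
    interaction_id="int_123",
    fingerprint="fp_abc",
)
```

The frontend then sets these values as attributes on the wrapping components, which lets the Gateway join client-side engagement back to the server-side interaction record.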
## Why this matters
- Privacy First: Intent Fingerprinting operates on extracted metadata, so you can understand what your users need without ever exposing sensitive chat histories.
- Non-blocking: The background queue means analytics never add latency to your API responses.
- Full Context: Atheon ties backend instrumentation to client-side engagement, giving you the end-to-end user-AI journey for each interaction: what was generated, whether it was seen, and what the user did next.