Server-Side Integration

The Logic Layer is where Atheon's decision engine lives. Unlike traditional ad networks, which run decision logic in the user's browser and slow it down, Atheon operates entirely on your server.

This ensures Zero Layout Shift and protects user privacy by scrubbing PII before it leaves your infrastructure.

1. Install the SDK

Choose the SDK for your backend environment.

Node.js:

npm install @atheon-inc/codex

Python:

pip install atheon-codex

Dart:

dart pub add atheon_codex

2. Initialize Client

Initialize the client with your Project API Key. We recommend storing this in your environment variables.

Node.js:

import { AtheonCodexClient } from '@atheon-inc/codex';

const client = new AtheonCodexClient({
    apiKey: process.env.ATHEON_CODEX_API_KEY,
});

Python (sync):

import os
from atheon_codex import AtheonCodexClient

# Initialize with your API Key
client = AtheonCodexClient(
    api_key=os.environ.get("ATHEON_CODEX_API_KEY")
)

Python (async):

import os
from atheon_codex import AsyncAtheonCodexClient

# Initialize with your API Key
async_client = AsyncAtheonCodexClient(
    api_key=os.environ.get("ATHEON_CODEX_API_KEY")
)

Dart:

import "dart:io";
import "package:atheon_codex/codex.dart";

final client = AtheonCodexClient(
    AtheonCodexClientOptions(
        apiKey: Platform.environment['ATHEON_CODEX_API_KEY'] ?? '',
    ),
);
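One common way to follow the environment-variable recommendation above is a local `.env` file loaded at process startup; the value below is a placeholder, not a real key format:

```shell
# .env — keep this file out of version control
ATHEON_CODEX_API_KEY=your-project-api-key
```

Whichever loading mechanism you use (dotenv, shell exports, your platform's secret manager), the snippets above only assume the variable is present in the process environment.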

3. Fetch & Integrate

This is the core loop. You send the User Query (intent) and the LLM Response (context) to Atheon. We return the integrated text and the tracking configurations.

Crucial Step: You must pass the integration_configs to your frontend to enable analytics and revenue tracking.

Node.js:

// Inside your API route that makes the call to your LLM
const result = await client.fetchAndIntegrateAtheonUnit({
  query: userPrompt,
  baseContent: llmResponse,
});

// Return this object to your frontend
return {
  content: result.response_data?.integrated_content ?? llmResponse,
  tracking: result.response_data?.integration_configs
};
Python (sync):

# Inside your route handler that makes the call to your LLM
payload = AtheonUnitFetchAndIntegrateModel(
    query=user_prompt,
    base_content=llm_response
)

result = client.fetch_and_integrate_atheon_unit(payload)

# Fall back to the raw LLM response if no integrated content is returned
return {
    "content": result.get("response_data", {}).get("integrated_content", llm_response),
    "tracking": result.get("response_data", {}).get("integration_configs")
}
Python (async):

# Inside your route handler that makes the call to your LLM
payload = AtheonUnitFetchAndIntegrateModel(
    query=user_prompt,
    base_content=llm_response
)

result = await async_client.fetch_and_integrate_atheon_unit(payload)

# Fall back to the raw LLM response if no integrated content is returned
return {
    "content": result.get("response_data", {}).get("integrated_content", llm_response),
    "tracking": result.get("response_data", {}).get("integration_configs")
}
Dart:

// Inside the method that makes the call to your LLM
final payload = AtheonUnitFetchAndIntegrateModel(
  query: userPrompt,
  baseContent: llmResponse,
);

final result = await client.fetchAndIntegrateAtheonUnit(payload);

// Fall back to the raw LLM response if no integrated content is returned
return {
  'content': result?['response_data']['integrated_content'] ?? llmResponse,
  'tracking': result?['response_data']['integration_configs'],
};
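Across all four snippets, the payload handed to the frontend has the same shape, and the safest pattern is to fall back to the raw LLM response whenever Atheon returns no integrated content. A minimal TypeScript sketch of that merge step (the `AtheonResult` interface is an assumption inferred from the snippets above, not the SDK's published type):

```typescript
// Assumed shape of the fetch-and-integrate result, inferred from the
// snippets above; the real SDK may export a richer type.
interface AtheonResult {
  response_data?: {
    integrated_content?: string;
    integration_configs?: Record<string, unknown>;
  };
}

// Merge the Atheon result with the raw LLM response, preferring the
// integrated content but never returning an empty payload.
function buildFrontendPayload(result: AtheonResult | null, llmResponse: string) {
  return {
    content: result?.response_data?.integrated_content ?? llmResponse,
    tracking: result?.response_data?.integration_configs ?? null,
  };
}
```

If tracking comes back null, the frontend can simply skip the analytics wiring rather than breaking the render.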