The Event Infrastructure Manifesto
Determinism over decoration. Enforcement over hope.
01 — What We Observe
Modern digital businesses run on events.
Product viewed. Application submitted. Subscription created. Checkout completed. Every meaningful interaction between a user and a system produces an event. These events feed dashboards, shape forecasts, trigger automations, inform product decisions, and, in regulated environments, serve as evidence of what happened, when, and under what conditions.
Events are not peripheral data. They are the semantic foundation of digital decision-making.
And yet, the infrastructure beneath them is remarkably fragile.
In most organizations, business events are inconsistently named. They are rarely versioned. They fail silently. They carry implicit assumptions about consent that no machine can verify. They change shape between environments. The browser emits one structure, the server emits another, and the warehouse inherits both contradictions without knowing it.
Analytics tools consume these events. Tag managers route them. CDPs attempt to centralize them. Warehouses store them. BI tools visualize them.
But nothing in this chain guarantees their structural integrity at the point where they are created.
This is not a tooling failure. No single vendor is responsible for this gap. It is an architectural absence: a layer that should exist between semantic intention and vendor execution, but simply does not.
02 — The Category Mistake
It helps to understand how we arrived here.
Modern analytics evolved from pageviews. The original model was built to count impressions, attribute traffic sources, and optimize marketing campaigns. It was never designed to model business decisions across distributed systems, regulated industries, and multi-environment architectures.
But that is precisely what we asked it to do.
As product complexity grew, as architectures became distributed, as server-side workflows replaced simple browser scripts, we extended a paradigm that was built for marketing measurement into a domain that demands engineering discipline. We added more tools, more dashboards, more governance layers, more approval workflows.
What we did not add was structural enforcement.
The result is a stack that conflates four fundamentally different responsibilities into one execution surface: semantic definition (what happened?), contract enforcement (is this event structurally valid?), routing (where does it go?), and consumption (how is it queried?). These are distinct concerns. They require different guarantees. Collapsing them produces a system where coordination exists, but contracts do not.
We treated business events as if they were upgraded pageviews. They are not. They are closer to API calls: structured, versioned, contract-bound communications between systems. Treating them otherwise was a category mistake, and it is one that compounds quietly until the consequences become visible in boardrooms, audits, and product decisions based on data no one can fully trust.
03 — The Cost of Invisible Drift
Analytics culture has normalized approximation.
“Tracking is never 100% accurate.” “Numbers will always differ slightly.” “That’s just how attribution works.” These phrases are spoken so frequently that they have become a form of professional resignation. Small inaccuracies become tolerable. Gaps are explained away. Divergences between tools are accepted as the cost of doing business.
This works. Until decisions compound on top of those inaccuracies.
When revenue forecasts rest on event streams that silently changed shape three sprints ago, when hiring plans are informed by conversion metrics that no one can trace back to a versioned definition, when a compliance audit asks for evidence that consent was respected at the moment of data emission and the only answer is “we had a banner,” then approximation stops being a pragmatic trade-off. It becomes systemic risk.
The failure mode is not that a dashboard breaks. Dashboards almost never break. The failure mode is that dashboards continue to render confidently while the semantic ground beneath them shifts. Silent drift is the most expensive kind, because it erodes trust gradually and reveals itself only when a decision has already been made on faulty ground.
04 — What Is Missing
Between governance documents and analytics vendors, there is a missing architectural layer.
Governance tools define what should be tracked: tracking plans, naming conventions, approval workflows. Vendors handle what happens after an event arrives: storage, aggregation, visualization, activation. But no layer in the current stack enforces that what was defined is what actually gets emitted, structurally, at runtime, every time.
Tracking plans are documentation. They describe intent. They do not prevent violation. A spreadsheet that specifies “checkout_completed requires a numeric order_value field” cannot stop a developer from shipping a string in its place. An approval workflow cannot detect that an event was silently renamed in a tag manager last Tuesday. A schema document cannot guarantee that the browser and the server produce structurally identical payloads for the same business moment.
Documentation describes contracts. Only infrastructure can enforce them.
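The gap between description and enforcement can be made concrete. Below is a minimal TypeScript sketch, with every name hypothetical: the tracking-plan row "checkout_completed requires a numeric order_value field" becomes a machine-checked contract instead of a spreadsheet cell.

```typescript
// Hypothetical sketch: turning a tracking-plan row into a runtime contract.
// The plan says: checkout_completed requires a numeric order_value.

type FieldType = "string" | "number" | "boolean";

interface EventContract {
  name: string;
  required: Record<string, FieldType>;
}

const checkoutCompleted: EventContract = {
  name: "checkout_completed",
  required: { order_value: "number", currency: "string" },
};

// Enforcement at the point of creation: a payload that violates the
// contract is rejected before any downstream system sees it.
function enforce(
  contract: EventContract,
  payload: Record<string, unknown>
): { valid: boolean; violations: string[] } {
  const violations = Object.entries(contract.required)
    .filter(([field, type]) => typeof payload[field] !== type)
    .map(
      ([field, type]) =>
        `${field}: expected ${type}, got ${typeof payload[field]}`
    );
  return { valid: violations.length === 0, violations };
}
```

A developer shipping `order_value: "99.90"` as a string is caught at emission, not discovered weeks later in the warehouse.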
This is the layer we call Event Infrastructure: a deterministic runtime that sits between semantic definition and vendor routing, ensuring that every event conforms to its registered structure before any downstream system ever sees it.
05 — Our Principles
Event Infrastructure is defined by a set of commitments. These are not features to be prioritized or deferred. They are the properties that distinguish infrastructure from instrumentation.
Events are APIs.
If an organization versions its REST endpoints, requires schema validation on its GraphQL queries, and would never accept an API that silently changes its response shape between deployments, then it should extend the same discipline to its business events. Events are structured communications between systems. They carry semantic meaning. They inform decisions. They deserve contracts.
Versioning is mandatory.
Every event carries an explicit semantic version. Breaking changes require a major version increment. Multiple major versions may coexist in the same system. In-place mutation — whether renaming a field, changing a type, or removing a property without a version bump — is a violation. Business data deserves the same change discipline as code.
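One way to make that discipline concrete, sketched in TypeScript with hypothetical names: definitions are keyed by event name and major version, so a breaking change registers a new entry rather than mutating the old one in place.

```typescript
// Hypothetical sketch: event definitions keyed by name AND major version,
// so breaking changes coexist instead of overwriting prior contracts.

interface EventSchema {
  version: string; // full semantic version, e.g. "2.0.0"
  fields: Record<string, "string" | "number" | "boolean">;
}

// Registry where "checkout_completed@1" and "checkout_completed@2" coexist.
const registry = new Map<string, EventSchema>();

function register(name: string, schema: EventSchema): void {
  const major = schema.version.split(".")[0];
  registry.set(`${name}@${major}`, schema);
}

function resolve(name: string, major: number): EventSchema | undefined {
  return registry.get(`${name}@${major}`);
}

// v1: order_value was a string (a historical mistake, but still a contract).
register("checkout_completed", {
  version: "1.3.0",
  fields: { order_value: "string" },
});

// v2: the breaking type change ships as a new major version, not a mutation.
register("checkout_completed", {
  version: "2.0.0",
  fields: { order_value: "number", currency: "string" },
});
```

Consumers pinned to major version 1 keep receiving the shape they were built against while version 2 rolls out alongside it.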
Validation happens before routing.
An event is validated against its registered definition before any adapter, any vendor SDK, any downstream system receives it. Invalid events are deterministically dropped or surfaced as warnings. No silent corruption. No partial payloads reaching production dashboards. Enforcement must live at the moment of truth: the moment where an event is created.
Consent is structural, not decorative.
Consent is not a banner. It is not a UI state. It is a data property, embedded in the event payload at the moment of emission. An adapter that requires consent cannot receive an event where consent was not granted. This is not a policy aspiration. It is a runtime invariant. If consent state is not machine-readable in the payload itself, compliance exists in policy documents but not in execution.
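What such a runtime invariant might look like, as an illustrative TypeScript sketch (all names are assumptions, not an actual API): consent is a field on the event itself, and dispatch filters adapters against it.

```typescript
// Hypothetical sketch: consent as a machine-readable payload property,
// enforced as a runtime invariant rather than a UI state.

type ConsentCategory = "analytics" | "marketing";

interface CanonicalEvent {
  name: string;
  consent: Record<ConsentCategory, boolean>; // embedded at emission time
  payload: Record<string, unknown>;
}

interface Adapter {
  vendor: string;
  requires: ConsentCategory[]; // consent this adapter needs to receive data
  send(event: CanonicalEvent): void;
}

// Invariant: an adapter that requires a consent category can never receive
// an event in which that consent was not granted.
function dispatch(event: CanonicalEvent, adapters: Adapter[]): string[] {
  return adapters
    .filter((a) => a.requires.every((c) => event.consent[c] === true))
    .map((a) => {
      a.send(event);
      return a.vendor;
    });
}
```

The filter is not a policy check a developer can forget; it runs on every dispatch, which is what makes the consent state auditable in the data itself.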
Infrastructure is vendor-neutral.
Analytics tools change. Vendors are acquired, deprecated, replaced. Canonical events must outlive any single tool. The event definition belongs to the organization, not to the vendor that happens to consume it. Infrastructure shapes the data. Vendors adapt to the shape.
Standards must be open.
The canonical specification and the enforcement runtime must be transparent and inspectable. Trust cannot be built on closed systems. The standard belongs to the ecosystem. Operations, hosting, governance, and observability may scale commercially. But the structural foundation must remain open.
06 — How It Works
Event Infrastructure introduces an explicit, deterministic pipeline between the application layer and vendor tooling.
When track() is called, the runtime does not simply fire a payload into the void. It resolves the event’s registered definition. It embeds the current consent state. It assembles the contextual invariants (environment, session, identity) into a canonical envelope. It validates the assembled event against its schema. It applies middleware. Only then does it dispatch to adapters.
Each step is explicit. Each invariant is enforced. No vendor SDK executes before structural validation is complete.
The canonical event — what we call the DK Canonical Envelope — is the same regardless of whether it was produced in a browser, on a server, or at the edge. Given identical inputs, the output is structurally identical across environments. Only transport differs. This is what client/server equivalence means: not that the same code runs everywhere, but that the same semantic structure is guaranteed everywhere.
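The pipeline above can be reduced to a sketch. This is illustrative only: the names, the envelope shape, and the function signature are assumptions, not the actual DK Canonical Envelope specification.

```typescript
// Illustrative reduction of the pipeline: resolve, embed consent, assemble,
// validate, apply middleware, then (and only then) dispatch to adapters.

type Fields = Record<string, "string" | "number" | "boolean">;

interface Definition { name: string; version: string; fields: Fields; }

interface Envelope {
  name: string;
  version: string;
  consent: Record<string, boolean>;
  context: { environment: string; sessionId: string };
  payload: Record<string, unknown>;
}

type Middleware = (e: Envelope) => Envelope;
type Adapter = (e: Envelope) => void;

function track(
  def: Definition,
  payload: Record<string, unknown>,
  consent: Record<string, boolean>,
  context: { environment: string; sessionId: string },
  middleware: Middleware[],
  adapters: Adapter[]
): Envelope | null {
  // 1-3. Resolve the definition, embed consent, assemble the envelope.
  let envelope: Envelope = {
    name: def.name, version: def.version, consent, context, payload,
  };
  // 4. Validate against the registered schema; invalid events never route.
  const valid = Object.entries(def.fields).every(
    ([field, type]) => typeof envelope.payload[field] === type
  );
  if (valid === false) return null; // deterministic drop: no vendor SDK runs
  // 5. Apply middleware.
  for (const mw of middleware) envelope = mw(envelope);
  // 6. Only then dispatch to adapters.
  adapters.forEach((send) => send(envelope));
  return envelope;
}
```

Given identical inputs, this function produces a structurally identical envelope whether it runs in a browser or on a server; only the adapters, the transport layer, differ.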
This is not a wrapper around existing tools. It is a layer that guarantees structure before those tools ever receive data.
07 — The Maturity Perspective
Tracking maturity is not binary. It evolves in stages, and most organizations are further from the end state than they realize.
At Level 1, events are ad-hoc snippets. Copy-pasted code, hardcoded strings, no shared vocabulary. At Level 2, a centralized tag manager provides routing control, but no semantic enforcement. At Level 3, tracking plans exist as spreadsheets or documentation. Intent is described, but not enforced at runtime. At Level 4, event schemas are versioned and treated as first-class artifacts. At Level 5, the organization operates observable event infrastructure: events are versioned, validated at emission, consent-native, structurally equivalent across environments, and observable in real time.
Most organizations operate somewhere between Level 2 and Level 3. They have governance documents. They have tools. They may even have team processes for reviewing tracking changes. But they do not have runtime enforcement. They do not have version discipline. They cannot answer the question: “How many of our events violated their contract last week?”
Level 5 is not aspirational. It is inevitable. The question is how much silent drift an organization is willing to tolerate before it gets there.
08 — Why Now
Three structural forces are converging to make this correction necessary.
First, architectures have become distributed. Events originate in browsers, servers, edge functions, microservices, mobile clients. Without a canonical runtime that guarantees structural consistency across these environments, semantic divergence is not a risk. It is a certainty.
Second, privacy regulation has moved beyond banners. GDPR, ePrivacy, and emerging frameworks worldwide demand that consent be provable, auditable, and embedded in the data itself. A consent banner that runs in the UI but has no structural relationship to the event payload is theater. Regulation is moving toward enforcement. And so must infrastructure.
Third, executive decision-making has become analytically dependent. Product strategy, revenue forecasting, hiring plans, market positioning: all of it rests on behavioral metrics derived from event streams. When those streams are structurally ungoverned, the confidence placed in them is misplaced. Not because people are careless, but because the infrastructure to guarantee correctness simply did not exist.
Every mature software domain eventually separates implicit coordination from explicit enforcement. APIs gained gateways. Deployments gained CI/CD pipelines. Logs gained observability platforms. Each of these transitions happened when the cost of implicit coordination exceeded the cost of building infrastructure.
Analytics is reaching that threshold.
09 — What We Commit To
We commit to maintaining an open canonical event specification. Not as a product artifact, but as a shared standard that any organization can adopt, inspect, and contribute to.
We commit to building a transparent, inspectable runtime: one where every step of the enforcement pipeline is visible, testable, and deterministic.
We commit to treating events as versioned APIs. With the same discipline, the same change management, and the same structural guarantees that the engineering community has come to expect from any well-governed interface.
We commit to prioritizing integrity over convenience. Infrastructure is not always the easiest path. But it is the one that compounds in the right direction.
We commit to separating infrastructure from vendors. The event definition is the organization’s asset. The runtime enforces it. Vendors receive what the infrastructure produces. That order matters.
10 — The Position
Dashboards are built on events. If events drift, decisions drift. This is not a theoretical concern. It is the lived reality of every data team that has spent a Monday morning explaining why a KPI shifted for reasons no one can trace to a deliberate change.
Instrumentation collects. Infrastructure guarantees.
The distinction matters. Instrumentation is the act of emitting data. Infrastructure is the system that ensures the data conforms to its contract before it reaches anything downstream. One is a capability. The other is a guarantee. The analytics industry has been building increasingly sophisticated capabilities on top of an absence of guarantees.
That absence is what Event Infrastructure corrects.
Not by replacing existing tools. Not by adding another dashboard. Not by promising a better way to visualize data. But by introducing the enforcement layer that should have existed all along: deterministic, versioned, consent-native, vendor-neutral, and open.
Event Infrastructure is not an innovation. It is a correction. One that every sufficiently complex analytics stack will eventually require.
The question is not whether this layer will exist.
The question is who defines the standard.
Determinism over decoration.
Enforcement over hope.
— deklatrak