ComputeIntegrity.com
Concept Note (EN)

Compute Integrity as Governance Language

This note defines compute integrity as board-readable language for a foundational requirement: the ability to demonstrate that a computing environment is running the intended software on a trustworthy platform, without unauthorized modification, and that this state can be attested in a way that relying parties can verify.

Descriptive only · No services · Non-affiliated

1. Definition and scope

Compute integrity is not a single product and not a vendor claim. It is a category that becomes meaningful when integrity evidence can be expressed, verified, and referenced in governance and procurement language.

In plain terms, compute integrity answers a simple question: can the system prove that it is running what it should be running, on a platform whose trust anchors are understood, and that it has not been silently altered?
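To make the question concrete, the minimal Python sketch below shows how a relying party might appraise integrity evidence against known-good reference values. The structures and names (Evidence, measurements, signature_valid, appraise) are illustrative assumptions, not a real attestation API; actual schemes, such as those described in the IETF RATS architecture (RFC 9334), involve cryptographically signed evidence, freshness nonces, and certified trust anchors.

```python
# Illustrative sketch only: a simplified appraisal of integrity evidence.
# All names here are hypothetical. Real remote attestation (see the IETF
# RATS architecture, RFC 9334) uses signed evidence, freshness nonces,
# and certified trust anchors rather than a plain boolean flag.

from dataclasses import dataclass

@dataclass
class Evidence:
    platform_id: str               # which platform produced the evidence
    measurements: dict[str, str]   # component name -> measured digest (hex)
    signature_valid: bool          # stands in for cryptographic verification

def appraise(evidence: Evidence, reference_values: dict[str, str]) -> bool:
    """Accept the evidence only if it is authentic and every measured
    component matches its expected (known-good) reference value."""
    if not evidence.signature_valid:
        return False  # unverifiable evidence proves nothing
    for component, expected_digest in reference_values.items():
        if evidence.measurements.get(component) != expected_digest:
            return False  # missing or unexpected measurement
    return True

# Example: a relying party checks firmware and bootloader measurements.
reference = {"firmware": "ab12ef", "bootloader": "cd34aa"}
evidence = Evidence("node-01", {"firmware": "ab12ef", "bootloader": "cd34aa"}, True)
print(appraise(evidence, reference))  # True: intended software, no silent change detected
```

The point of the sketch is the shape of the check, not the mechanism: evidence is produced by the platform, appraised against reference values, and the result is something a relying party can act on.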

2. Where the category shows up (descriptive reality)

Important: This site does not provide operational guidance, configurations, or exploitation-oriented detail. The goal is a neutral, defensible framing of the category, not a how-to manual.

3. Why it matters now

Three converging forces move compute integrity from an engineering concern into board and procurement language. First, resilience expectations shift the question from “is it secure” to “can it prove, continuously and verifiably, that it is not compromised.” Second, European cyber and product-security trajectories increasingly reward assurance properties that are measurable and attestable over time. Third, AI deployment at scale expands the trust problem: even if an output is signed or logged, stakeholders still ask whether the environment that produced it was trustworthy.

4. A neutral framework (five dimensions)

A practical way to keep the category legible, without turning it into a product pitch, is to structure it as a simple five-dimension framework. These dimensions are descriptive lenses. They do not imply a certification.

5. Why it will matter for AI systems

AI trust debates often focus on outputs, logs, and provenance. Compute integrity is the foundation beneath those layers. If an organisation requires trustworthy automated decisions, it eventually needs credible assumptions about the execution environment that produced those decisions. Integrity evidence makes higher-level guarantees more meaningful, without claiming to “solve” AI risk.
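As one illustration of how integrity evidence could underpin higher-level guarantees, the sketch below records an automated decision together with a reference to the attestation result for the environment that produced it, so that later review can ask not only what was decided but whether the deciding environment was in a verified state at the time. The record layout and names (DecisionRecord, attestation_ref, appraised_ok) are hypothetical, offered as a conceptual sketch rather than a prescribed design.

```python
# Conceptual sketch: binding a decision record to integrity evidence.
# All names (DecisionRecord, attestation_ref, appraised_ok) are illustrative.

import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    output_digest: str    # hash of the model output being relied upon
    attestation_ref: str  # pointer to the evidence appraised for this run
    appraised_ok: bool    # appraisal result at the time of the decision
    timestamp: float

def record_decision(output: bytes, attestation_ref: str, appraised_ok: bool) -> str:
    """Serialise a record linking an output to the integrity state of the
    environment that produced it."""
    record = DecisionRecord(
        output_digest=hashlib.sha256(output).hexdigest(),
        attestation_ref=attestation_ref,
        appraised_ok=appraised_ok,
        timestamp=time.time(),
    )
    return json.dumps(asdict(record))

# A reviewer can later check that the referenced evidence was appraised
# successfully before trusting the logged output.
print(record_decision(b"approve request 1234", "evidence/node-01/2024-06-01", True))
```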

6. Guardrails for safe stewardship

7. References (starting points)

IETF RATS: Working Group · RFC 9334
Trusted Computing Group: TCG
NIST Platform Firmware Resiliency: SP 800-193
EU context: Cyber Resilience Act (European Commission) · ENISA overview