KLYNX GOVERNANCE STANDARD · VERSION 1.0 · MARCH 2026

Klynx Governance Standard
KGS-1.0

A specification for enterprise AI governance infrastructure — defining how AI signals are validated, how inference is gated, and how every decision is audited for regulatory compliance.

EU AI Act aligned · SOX compliant · HIPAA compatible · GDPR ready · Open standard
§1

Overview

KGS-1.0 defines the architecture, interfaces, and enforcement model for AI governance in enterprise environments. It establishes how AI systems must validate input signals, gate inference execution, evaluate output risk, and record immutable audit lineage.

The standard is designed to align with the EU Artificial Intelligence Act (EU AI Act 2024), SOX AI disclosure requirements, HIPAA AI provisions, and GDPR Article 22 automated decision-making provisions.

Core Principle: Fail-Closed by Default

Every AI action is blocked unless explicitly permitted through the governance pipeline. A system compliant with KGS-1.0 cannot execute inference on degraded, incomplete, or high-risk signals.

§2

Three-Layer Architecture

KGS-1.0 defines a mandatory three-layer governance pipeline. All layers must execute sequentially. A failure at any layer terminates the pipeline.

L1 — Signal Integrity Validation
Validates the completeness, confidence, and schema of upstream signals.

L2 — Inference Gate
Binary gate that permits or suppresses model invocation based on the L1 result.

L3 — Governance Evaluation
Policy enforcement, risk scoring, and audit trail generation.

§3

Layer 1 — Signal Integrity Validation

All signals entering the governance pipeline must pass integrity validation before inference is permitted. A signal is a structured data payload representing the context for an AI workload.

Required Signal Fields

KGS-S01 · inference_confidence_score (required)
Float 0.0–1.0. Confidence that the input data is fit for inference. Threshold: ≥ 0.70.

KGS-S02 · channel_coverage_ratio (required)
Float 0.0–1.0. Coverage of data channels in the signal. Threshold: ≥ 0.60.

KGS-S03 · event_capture_ratio (required)
Float 0.0–1.0. Ratio of events captured versus expected. Threshold: ≥ 0.50.

KGS-S04 · signal_confidence_score (required)
Float 0.0–1.0. Composite signal quality score. Threshold: ≥ 0.65.

KGS-S05 · signal_coverage_ratio (required)
Float 0.0–1.0. Spatial/temporal coverage completeness.

KGS-S06 · modality_shift_flag (required)
Boolean. True if a significant modality shift was detected. Triggers anomaly review.

KGS-S07 · modality_distribution_ratio (required)
Float 0.0–1.0. Distribution balance across modalities.

KGS-S08 · project_intensity_index (required)
Float 0.0–1.0. Operational intensity metric for the source project.

KGS-RULE-001: Any signal missing one or more required fields MUST be quarantined. The L2 gate MUST be set to CLOSED. Inference MUST NOT proceed.
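KGS-RULE-001 and the thresholds above can be sketched in a few lines of Python. This is an illustrative validator, not part of any SDK; only the field names and thresholds (KGS-S01 through KGS-S08) come from the standard.

```python
# Illustrative L1 validator. Field names and thresholds follow
# KGS-S01..S08; the function and its return shape are assumptions.

REQUIRED_THRESHOLDS = {
    "inference_confidence_score": 0.70,  # KGS-S01
    "channel_coverage_ratio": 0.60,      # KGS-S02
    "event_capture_ratio": 0.50,         # KGS-S03
    "signal_confidence_score": 0.65,     # KGS-S04
}
REQUIRED_FIELDS = set(REQUIRED_THRESHOLDS) | {
    "signal_coverage_ratio",             # KGS-S05
    "modality_shift_flag",               # KGS-S06
    "modality_distribution_ratio",       # KGS-S07
    "project_intensity_index",           # KGS-S08
}

def validate_signal(signal: dict) -> tuple[str, list[str]]:
    """Return ("validated" | "quarantined", reasons) per KGS-RULE-001."""
    missing = REQUIRED_FIELDS - signal.keys()
    if missing:
        # KGS-RULE-001: any missing required field quarantines the signal.
        return "quarantined", [f"missing field: {f}" for f in sorted(missing)]
    reasons = [f"{field} below threshold {minimum}"
               for field, minimum in REQUIRED_THRESHOLDS.items()
               if signal[field] < minimum]
    if signal["modality_shift_flag"]:
        reasons.append("modality shift detected (anomaly review)")  # KGS-S06
    return ("quarantined" if reasons else "validated"), reasons
```

A quarantined result forces the L2 gate CLOSED; the reasons list feeds the gate_reason field of the lineage chain.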

§4

Layer 2 — Inference Gate

The Inference Gate is a binary enforcement point. It evaluates the L1 result and either opens the gate (permits inference) or closes it (suppresses inference). The gate is fail-closed by default.

GATE: OPEN (layer2_gate = true)

  • All required fields present
  • All thresholds met
  • No anomaly flags
  • Composite confidence ≥ 0.70

GATE: CLOSED (layer2_gate = false)

  • One or more fields missing
  • Threshold failure on critical field
  • Anomaly flag raised
  • Validation status = quarantined

KGS-RULE-002: The L2 gate result MUST be recorded in the lineage chain before any model is invoked. A closed gate is immutable — it cannot be overridden by downstream systems.
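A minimal fail-closed gate that records its result before any model call, per KGS-RULE-002, might look like this (the function name and hop-record shape are illustrative assumptions, not SDK API):

```python
# Fail-closed L2 gate sketch: the gate opens only when L1 returned
# "validated"; any other status, including unknown ones, stays CLOSED.

def evaluate_gate(l1_status: str, lineage: list) -> bool:
    gate_open = (l1_status == "validated")  # fail-closed default
    # KGS-RULE-002: record the gate result before any model invocation.
    lineage.append({
        "hop": "signal",
        "layer2_gate": gate_open,
        "gate_reason": "l1_validated" if gate_open else f"l1_{l1_status}",
    })
    return gate_open

lineage: list = []
if evaluate_gate("quarantined", lineage):
    pass  # model invocation happens only behind an open gate
```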

§5

Layer 3 — Governance Evaluation

Layer 3 evaluates the workload against the active policy set, computes a composite risk score, and records a GovernanceDecision. It runs only when the L2 gate is OPEN.

KGS-G01 · GovernanceDecision (mandatory)
Every workload must produce exactly one GovernanceDecision object with decision_id, risk_score, decision_status, and audit_ref.

KGS-G02 · risk_score (0–100)
Composite risk score: a weighted sum of task_sensitivity, data_sensitivity, compliance_impact, and signal_confidence.

KGS-G03 · decision_status (enum)
APPROVED | BLOCKED | REQUIRES_APPROVAL | DEGRADED_ALLOWED. BLOCKED and REQUIRES_APPROVAL must prevent execution.

KGS-G04 · audit_ref (required)
Cryptographic reference tying the GovernanceDecision to its lineage chain entry. Format: AUD-{hex10}.

KGS-G05 · policy_evaluations (required)
Array of PolicyEvaluation records, one per policy checked. Result: pass | fail | skip.
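The KGS-G01 through KGS-G05 fields can be modeled as a small record type. This is an illustrative sketch: the dataclass and the new_audit_ref helper are hypothetical, and secrets.token_hex is shown only as one way to produce an AUD-{hex10} reference.

```python
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceDecision:
    """One decision per workload (KGS-G01). Illustrative model only."""
    decision_id: str
    risk_score: int                 # KGS-G02: composite 0-100
    decision_status: str            # KGS-G03: APPROVED | BLOCKED | ...
    audit_ref: str                  # KGS-G04: AUD-{hex10}
    policy_evaluations: tuple = ()  # KGS-G05: (policy_id, result) pairs

def new_audit_ref() -> str:
    # KGS-G04 format: "AUD-" followed by 10 hex characters.
    return "AUD-" + secrets.token_hex(5).upper()
```

The frozen dataclass mirrors the standard's immutability stance: once issued, a decision is never edited, only superseded by a new one.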
§6

Built-in Policy Set

KGS-1.0 defines six mandatory baseline policies. Implementations may add domain-specific policies but may not remove baseline policies.

POL-001 · Signal Quality Gate · EU AI Act Art. 9

Blocks inference when signal confidence falls below the minimum threshold, regardless of other parameters.

Trigger

composite confidence < 0.70 OR validation_status = quarantined

Enforcement Action

BLOCKED — inference suppressed

POL-002 · PHI Export Gate · HIPAA §164.312

Requires human approval before any protected health information leaves the governed environment.

Trigger

data_classification = phi AND output_type = export

Enforcement Action

REQUIRES_APPROVAL — dual sign-off required

POL-003 · Production Deploy Gate · SOX Section 404

High-risk production deployments require explicit governance approval and dual-approval workflow.

Trigger

task_type = production_deploy AND risk_score > 60

Enforcement Action

REQUIRES_APPROVAL — change advisory review

POL-004 · SOX Dual-Approval · SOX Section 302/906

Financial AI decisions above materiality threshold require two independent approvers.

Trigger

task_type = financial_decision AND risk_score > 55

Enforcement Action

REQUIRES_APPROVAL — CFO + Compliance sign-off

POL-005 · Critical Risk Block · EU AI Act Art. 5 (prohibited practices)

Any workload scoring above the critical risk threshold is automatically blocked, with no override path.

Trigger

risk_score >= 85

Enforcement Action

BLOCKED — hard stop, no override

POL-006 · GDPR Residency Gate · GDPR Art. 44-49 (Chapter V)

Personal data processed by AI must not leave the declared residency region without explicit data transfer agreement.

Trigger

data_classification = personal AND cross_region = true

Enforcement Action

BLOCKED — residency violation
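The six baseline triggers can be expressed as predicates over a workload context. This is a sketch under the assumption that context keys mirror the trigger fields above; the rule table, function names, and the omission of the "skip" result are illustrative simplifications.

```python
# Baseline policy triggers as (policy_id, predicate, action) rules.
# Context keys mirror the §6 trigger conditions; "skip" is omitted
# for brevity, so every policy evaluates to pass or fail.

BASELINE_POLICIES = [
    ("POL-001", lambda c: c["composite_confidence"] < 0.70
                          or c["validation_status"] == "quarantined",
     "BLOCKED"),
    ("POL-002", lambda c: c["data_classification"] == "phi"
                          and c["output_type"] == "export",
     "REQUIRES_APPROVAL"),
    ("POL-003", lambda c: c["task_type"] == "production_deploy"
                          and c["risk_score"] > 60,
     "REQUIRES_APPROVAL"),
    ("POL-004", lambda c: c["task_type"] == "financial_decision"
                          and c["risk_score"] > 55,
     "REQUIRES_APPROVAL"),
    ("POL-005", lambda c: c["risk_score"] >= 85,
     "BLOCKED"),
    ("POL-006", lambda c: c["data_classification"] == "personal"
                          and c["cross_region"],
     "BLOCKED"),
]

def evaluate_policies(ctx: dict) -> list[tuple[str, str]]:
    """Return one (policy_id, result) pair per policy, per KGS-G05."""
    return [(pid, "fail" if triggered(ctx) else "pass")
            for pid, triggered, _action in BASELINE_POLICIES]
```

An implementation would combine the failed rules' actions, with BLOCKED taking precedence over REQUIRES_APPROVAL.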

§7

Audit Lineage Chain

Every governed workload must produce an immutable lineage chain — a sequence of hops recording what happened at each layer. Lineage chains are the primary artefact for regulatory audit under EU AI Act Article 13 (transparency) and Article 17 (quality management).

KGS-L01 · signal hop (mandatory)
Recorded at L1 validation. Must include: signal_id, confidence_score, layer2_gate, gate_reason, recorded_at.

KGS-L02 · governance hop (mandatory if gate open)
Recorded at L3 evaluation. Must include: decision_id, risk_score, risk_level, decision_status, audit_ref.

KGS-L03 · execution hop (required)
Recorded on workload completion. Must include: workload_id, provider, model, execution_status, cost_usd.

KGS-L04 · hop immutability (enforced)
Lineage hops are append-only. No hop may be modified or deleted after recording. Chains are keyed by signal_id.

KGS-RULE-003: Lineage recording is best-effort and non-blocking. A lineage failure MUST NOT prevent legitimate workload execution. However, gaps in lineage chains MUST be flagged in the audit report.
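An append-only lineage store satisfying KGS-L04 and KGS-RULE-003 can be sketched as follows. The in-memory class is a stand-in for whatever immutable backend an implementation uses; all names here are illustrative.

```python
import time

class LineageChain:
    """Append-only hop store keyed by signal_id (KGS-L04 sketch)."""

    def __init__(self, signal_id: str):
        self.signal_id = signal_id
        self._hops: list[dict] = []

    def record(self, hop_type: str, **fields) -> None:
        # Append-only: each hop gets a timestamp and is never mutated.
        self._hops.append({"hop": hop_type,
                           "recorded_at": time.time(), **fields})

    @property
    def hops(self) -> tuple:
        # Immutable view: callers cannot edit or delete recorded hops.
        return tuple(self._hops)

def record_safely(chain: LineageChain, hop_type: str, **fields) -> bool:
    """KGS-RULE-003: lineage recording is best-effort and non-blocking."""
    try:
        chain.record(hop_type, **fields)
        return True
    except Exception:
        # A recording failure must not block the workload; the returned
        # False lets the caller flag the gap in the audit report.
        return False
```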

§8

Risk Scoring Model

The KGS risk score is a composite 0–100 integer computed from four weighted dimensions.

  • Task Sensitivity · 35% — Risk inherent to the task type (e.g. production deploy = high, read-only query = low)
  • Data Sensitivity · 30% — Classification of the data being processed (PHI, PII, financial, public)
  • Compliance Impact · 20% — Whether the workload triggers any compliance frameworks (SOX, HIPAA, GDPR)
  • Signal Confidence · 15% — Inverse of the composite signal confidence score from Layer 1

Risk levels:

  • Low: 0–39
  • Medium: 40–69
  • High: 70–84
  • Critical: 85–100
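The weighted sum and band cutoffs above translate directly into code. This is a sketch under the assumption that all four inputs are normalized to 0.0–1.0; the function names are illustrative.

```python
# §8 risk model sketch: four weighted dimensions, confidence inverted.

WEIGHTS = {
    "task_sensitivity": 0.35,
    "data_sensitivity": 0.30,
    "compliance_impact": 0.20,
    "signal_confidence": 0.15,  # entered as (1 - confidence) below
}

def risk_score(task: float, data: float, compliance: float,
               signal_confidence: float) -> int:
    """Composite 0-100 score; inputs are 0.0-1.0, confidence inverted."""
    raw = (WEIGHTS["task_sensitivity"] * task
           + WEIGHTS["data_sensitivity"] * data
           + WEIGHTS["compliance_impact"] * compliance
           + WEIGHTS["signal_confidence"] * (1.0 - signal_confidence))
    return round(raw * 100)

def risk_level(score: int) -> str:
    """Map a 0-100 score onto the §8 bands."""
    if score >= 85:
        return "critical"
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"
```

A maximally sensitive task on fully untrusted signals scores 100; note that POL-005 then blocks anything at 85 or above with no override.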

§9

Regulatory Compliance Mapping

KGS-1.0 maps directly to the following regulatory frameworks.

KGS Component              | EU AI Act                     | GDPR                            | SOX          | HIPAA
Signal Validation (L1)     | Art. 9 — Risk Management      | Art. 25 — Data Protection       | —            | §164.312(b)
Inference Gate (L2)        | Art. 5 — Prohibited Practices | Art. 22 — Automated Decisions   | —            | §164.308(a)(1)
Governance Evaluation (L3) | Art. 13 — Transparency        | Art. 17 — Right to Erasure      | Sec. 302/404 | §164.312(c)(1)
Audit Lineage Chain        | Art. 17 — Quality Management  | Art. 30 — Records of Processing | Sec. 802/906 | §164.312(b)
Risk Score                 | Art. 9 — Risk Classification  | Art. 35 — DPIA                  | Sec. 404     | §164.308(a)(8)
§10

Implementation (SDK)

The reference implementation of KGS-1.0 is available as an open-source Python SDK.

# Install
pip install klynx

# Evaluate any AI action through the governance pipeline
from klynx import evaluate

result = evaluate(
    action="marketing_campaign",
    signal={"inference_confidence_score": 0.88, ...}
)

print(result.decision)    # "approved"
print(result.risk_score)  # 24
print(result.audit_ref)   # "AUD-3F8A2C91B4"
The SDK has zero hard dependencies and requires only Python 3.8+. It connects to the Dragon governance API at dragon.klynxai.com by default.


KGS-1.0 · Published March 2026 · Klynx AI · klynxai.com

Reference implementation: Dragon · dragon.klynxai.com