
GTAF Reference

Governance & Trust Architecture Framework


GTAF is TNT's governance model for delegated action in AI and automated systems. It addresses a practical gap many organizations now face: policy says oversight is required, but operating models for legitimate delegation are often still missing.

Instead of treating governance as prose, GTAF treats boundaries, decision layers, responsibility, and validity as explicit artifacts. That makes the model usable for real implementation, not just for conceptual positioning.

The public GTAF reference documents how the model is already expressed publicly, but the core remains the governance structure and artifact logic itself.

Public normative reference

A governance framework for AI systems that may act, delegate, or produce consequential effects. GTAF turns scope, authority, responsibility, and validity into structured operational artifacts.

What TNT built

GTAF is TNT Intelligence's governance framework for AI systems that may act, delegate, or produce consequential external effects.

Its central contribution is simple but materially different from most AI governance talk: it turns scope, authority, responsibility, validity, and intervention into explicit operational artifacts instead of leaving them distributed across prose, implicit conventions, or after-the-fact review.

System Boundary

Defines what is inside scope, what is outside, and under which conditions a delegated capability is even relevant.

Decision Record

Describes which action may be delegated, with which constraints, conditions, and expected effect.

Responsibility Binding

Makes outcome ownership and escalation responsibility explicit instead of assuming that authority somehow "comes with the system."

Delegation Readiness Check

States whether delegation is actually permitted in the current scope, version, and validity window.
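The four artifacts above can be sketched as plain data structures. This is a hypothetical Python rendering for illustration only; the field names mirror the artifact descriptions in this section, not a published GTAF schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemBoundary:
    id: str
    scope: str                      # what is inside scope, and when it applies

@dataclass
class DecisionRecord:
    id: str
    action: str                     # the action that may be delegated
    constraints: dict = field(default_factory=dict)  # conditions on that action

@dataclass
class ResponsibilityBinding:
    owner: str                      # who owns the outcome
    escalation: str                 # who handles escalation

@dataclass
class DelegationReadinessCheck:
    status: str                     # e.g. "PERMITTED" or "DENIED"
    valid_until: str                # ISO 8601 end of the validity window
```

The point of separating them is that each artifact can be authored, versioned, and revoked independently while the permission state depends on all four.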

Why this matters under the AI Act

GTAF can be read as an operationalization layer for many of the practical questions organizations face under the EU AI Act.

It is not legal advice and it does not replace legal interpretation. What it does provide is an operating structure for turning governance obligations into explicit system design and decision artifacts.

For each AI Act concern below, how GTAF helps operationalize it:
Risk classification and intended use

GTAF forces explicit scoping of use case, action surface, impact boundary, and validity conditions.

Governance and oversight

Responsibility bindings, decision layers, and intervention paths make authority and escalation legible.

Documentation and traceability

Structured artifacts and references create a stronger base for technical documentation and auditability.

Lifecycle control

Validity windows, revalidation logic, and controlled change keep delegation from becoming permanent by accident.

Minimal operating model

GTAF is easiest to understand as a chain that must stay intact from scope definition to permission state: System Boundary → Decision Record → Responsibility Binding → Delegation Readiness Check.

The permission state is not attached to an agent by default. It is the outcome of an explicit artifact chain that stays valid only inside a bounded operating context.

Delegation is not a property of the model. It is a property of an explicitly bounded and currently valid operating context.

Minimal artifact example

A simplified artifact chain might look like this:

YAML
system_boundary:
  id: sb.invoice.review.eu
  scope: "invoice review up to EUR 5000"

decision_record:
  id: dr.invoice.auto-approve
  action: "invoice.approve"
  constraints:
    supplier_status: "approved"
    max_amount_eur: 5000

responsibility_binding:
  owner: "finance.operations"
  escalation: "head-of-finance"

delegation_readiness_check:
  status: PERMITTED
  valid_until: "2026-12-31T23:59:59Z"

A simplified example of how GTAF expresses scope, delegation, ownership, and validity as separate artifacts rather than implicit configuration.

This is the key move: GTAF does not say "the agent is trustworthy." It says that a specific delegated action is or is not permitted within a defined boundary and time window.
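That move can be made concrete with a small evaluator over the artifact chain above. This is a hypothetical sketch, not the GTAF Runtime contract: the function name is invented, and the constraint keys are hardcoded to match the invoice example rather than read from a generic schema.

```python
def evaluate_delegation(decision_record: dict, readiness: dict,
                        request: dict) -> bool:
    """Permission is a property of the chain, not of the agent: the
    requested action must match the record, every constraint must hold,
    and the readiness check must still be PERMITTED."""
    if request["action"] != decision_record["action"]:
        return False
    c = decision_record["constraints"]
    if request["supplier_status"] != c["supplier_status"]:
        return False
    if request["amount_eur"] > c["max_amount_eur"]:
        return False
    return readiness["status"] == "PERMITTED"

record = {"action": "invoice.approve",
          "constraints": {"supplier_status": "approved",
                          "max_amount_eur": 5000}}
readiness = {"status": "PERMITTED"}

evaluate_delegation(record, readiness,
                    {"action": "invoice.approve",
                     "supplier_status": "approved",
                     "amount_eur": 1200})   # → True
```

Note that no argument identifies the agent: a request inside the boundary is permitted, the same request outside it is not, regardless of which system issued it.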

How TNT can help

GTAF is relevant for organizations that need more than AI policy language. Typical collaboration paths include:

  • structuring AI use cases into decision layers, scope boundaries, and accountable operating models

  • translating oversight and governance requirements into explicit technical artifacts

  • reviewing whether a planned AI system can be delegated safely at all, and under which conditions

  • preparing the path from governance design into runtime enforcement and implementation

Where this lands first

Enterprise AI governance programs

When organizations already have policy language but still lack an operational way to define who may delegate what, under which conditions, and for how long.

High-consequence internal automation

When AI systems move into approval paths, operational tooling, internal support, or business workflows where scope and accountability cannot stay implicit.

Continue in publications

Deterministic runtime enforcement core

GTAF Runtime

The enforcement core that turns evaluated governance outputs into executable allow/deny decisions. The public implementation demonstrates the contract, but the runtime model is broader than one language.

See enforcement

Integration layer around the runtime core

GTAF SDK

The adoption layer that helps real systems load artifacts, shape execution context, and call the runtime cleanly. The public implementation is one concrete path, but the integration model is not language-specific.

Understand integration

Infrastructure for Trusted Autonomy

ITA

A runtime architecture for systems that must act with real effect while staying governable. ITA extends GTAF into execution spaces, capability exposure, enforcement, and audit.

See the architecture

Talk to TNT when AI should do real work

From GTAF through Runtime and SDK to ITA, TNT already brings public reference work, runtime building blocks, and applied architecture into these questions. The conversation does not have to start at theory.

When these questions move from interest to implementation, TNT is a serious conversation partner.

Discuss your context