Tracient logs every AI agent action and generates regulator-ready DORA and EU AI Act evidence packs on demand. Purpose-built for compliance officers and DPOs at regulated fintechs, neo-banks, and payment institutions.
The compliance gap
DORA Articles 9, 12, and 13 require regulated financial institutions to maintain auditable records of every automated process. When your AI agent queries your transaction database, calls a sanctions API, or flags a customer account — that is a regulated action. Today, most compliance officers cannot answer the regulator's first question: what did your agents access, and when?
Your IAM platform covers your people. Your GRC tool maps your policies. Neither captures what your AI agents actually did — and neither produces the evidence pack your auditor will ask for. Tracient does.
How it works
Tracient sits as a thin SDK layer between your AI agents and your systems. No changes to your agent code. No new infrastructure. No replacement of your compliance tooling: Tracient fills the gap those tools leave.
One pip install. Hooks into LangChain's native callback system without modifying your agent code. Works with your existing orchestration setup (a fuller sketch of the hook follows these steps).
from tracient import AuditLayer
agent = AuditLayer.wrap(your_agent)
Agent identity, model version, every tool call, data source, permissions, trigger, and timestamp — all logged automatically in a tamper-evident audit store.
Generate a formatted DORA Article 9, 12, and 13 evidence pack — or EU AI Act transparency documentation — at the click of a button. Pre-structured for regulators.
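For the technically minded, here is a minimal sketch of what that callback hook can look like in practice. The names AuditCallbackHandler, append_record, and agent-7 are illustrative assumptions, not Tracient's actual SDK surface; only the LangChain classes are real.

# Illustrative sketch: a LangChain callback handler that records
# structural metadata for every tool call. Hypothetical names, not
# Tracient's real SDK.
from datetime import datetime, timezone
from langchain_core.callbacks import BaseCallbackHandler

class AuditCallbackHandler(BaseCallbackHandler):
    def __init__(self, agent_id, sink):
        self.agent_id = agent_id
        self.sink = sink  # any store exposing append_record(dict)

    def on_tool_start(self, serialized, input_str, **kwargs):
        # Fired by LangChain before each tool call. Metadata only:
        # which tool, which agent, when. No prompt or output content.
        self.sink.append_record({
            "agent_id": self.agent_id,
            "event": "tool_start",
            "tool": serialized.get("name"),
            "run_id": str(kwargs.get("run_id")),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def on_tool_end(self, output, **kwargs):
        self.sink.append_record({
            "agent_id": self.agent_id,
            "event": "tool_end",
            "run_id": str(kwargs.get("run_id")),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

# Attached at invocation time, so the agent itself stays untouched:
# agent.invoke({"input": "..."},
#              config={"callbacks": [AuditCallbackHandler("agent-7", store)]})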
Regulatory coverage
Every agent action is mapped to the specific articles your institution is subject to — so you're never manually translating raw logs into evidence.
DORA
In force January 2025 · Enforcement ongoing
Requires regulated financial entities to maintain auditable ICT risk frameworks. All automated processes — including AI agents — must be logged, governed, and demonstrably controlled.
EU AI Act
High-risk provisions: 2 August 2026
Classifies AI used in credit scoring and insurance underwriting as high-risk. Requires automatic event logging, human oversight mechanisms, and transparency documentation.
NIS2
Transposition deadline: October 2024
Extends cybersecurity obligations across critical sectors, including financial services. Requires incident reporting, audit trails of ICT events, and supply chain security measures for all significant systems.
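As a rough illustration of that mapping, the sketch below pairs logged event types with article references. The event names and pairings are assumptions made for the example, not Tracient's shipped ruleset.

# Hypothetical event-to-article mapping, for illustration only.
EVIDENCE_MAP = {
    "tool_call":   ["DORA Art. 9", "EU AI Act Art. 12"],
    "data_access": ["DORA Art. 9", "DORA Art. 12"],
    "incident":    ["DORA Art. 13", "NIS2 incident report"],
}

def articles_for(event_type):
    # Returns the article references a logged event can evidence,
    # so raw logs never need manual translation.
    return EVIDENCE_MAP.get(event_type, [])

# articles_for("tool_call") -> ["DORA Art. 9", "EU AI Act Art. 12"]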
Who it's for
Compliance and risk professionals at regulated financial institutions deploying AI agents — who cannot yet demonstrate to a regulator what those agents are doing.
Head of Compliance
Series B fintech · DORA regulated
"We've deployed three AI agents this quarter. Our SailPoint instance covers human identity. Nobody has an answer for what the agents are doing."
Chief Risk Officer
Neo-bank · Payment institution licence
"Our next audit is Q3. When they ask for the agent access log, I need to hand them something — not explain that we haven't built it yet."
Chief Information Security Officer
Insurtech · Solvency II + DORA scope
"We know what every human user touches. We have no idea what our underwriting agent accessed last Tuesday. That's a gap I can't defend."
FAQ
Everything you need to know before requesting a pilot.
How is this different from SailPoint or Saviynt?
SailPoint and Saviynt govern human identities — your employees. They have no visibility into AI agents querying your databases or calling your APIs. Tracient provides the same governance layer for your non-human identities, in a format that maps directly to DORA and EU AI Act requirements. The two are complementary, not competing.
Doesn't our GRC platform already cover this?
GRC platforms are excellent at mapping your policies, documenting controls, and tracking framework obligations. What they cannot do is capture what your AI agents actually did at runtime — the tool calls, data sources, timestamps, and permissions that constitute the evidence itself. Tracient generates that runtime evidence. Your GRC platform then has something real to attach to the control. They work together.
Which agent frameworks do you support?
Early access supports LangChain natively — the most widely deployed framework in regulated financial services. CrewAI, LlamaIndex, and AutoGen integrations are on the roadmap for Q3 2026. If your agents use a custom orchestration layer, we can discuss integration during the pilot.
How is the audit data stored, and what do you capture?
Logs are stored in a tamper-evident, append-only store with EU data residency. We capture structural metadata — agent ID, tool, resource, timestamp, permission — not the content of prompts or outputs. Full data processing agreement available on request. SOC 2 Type II in progress for Q4 2026.
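To make "tamper-evident, append-only" concrete, here is a generic sketch of the underlying idea, not Tracient's actual storage engine. Each record embeds a hash of its predecessor, so any retroactive edit breaks the chain.

import hashlib
import json

class AppendOnlyAuditStore:
    # Hash-chained log: each record carries the SHA-256 digest of
    # the previous record, making after-the-fact tampering detectable.
    GENESIS = "0" * 64

    def __init__(self):
        self._records = []  # list of (record, digest) pairs
        self._last_hash = self.GENESIS

    def append_record(self, metadata):
        record = {"prev_hash": self._last_hash, **metadata}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append((record, digest))
        self._last_hash = digest

    def verify(self):
        # Recompute the chain; any edited or deleted record breaks it.
        prev = self.GENESIS
        for record, digest in self._records:
            expected = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev or expected != digest:
                return False
            prev = digest
        return True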
What does the pilot involve?
A 90-day pilot at no cost. We install the SDK in your environment, configure the DORA and EU AI Act mappings for your specific agent workflows, and generate your first evidence pack within a week. In exchange: a weekly feedback call and the right to reference your experience as an anonymised case study.
What does early access cost?
Early access organisations lock in founding pricing: £750/month for up to 5 agents, £1,800/month for up to 25. Scale pricing for larger estates on request. All plans include DORA and EU AI Act evidence packs, unlimited log storage, and the compliance dashboard.
Early access
Join compliance and risk professionals from regulated fintechs, neo-banks, and payment institutions getting early access before the EU AI Act deadline.
90-day free pilot · EU data residency · No credit card