
At Money20/20 Asia in Bangkok on April 21, 2026, Tin Pei Ling, co-president of MetaComp, named a specific problem that every financial institution deploying AI agents has and that none has solved. When a human leaves an organisation, their access is revoked, she said. When an AI agent completes a transaction, its identity and permissions do not automatically expire. This is not a philosophical observation about AI accountability. It is a description of a concrete gap in the access management infrastructure that governs financial services operations. AI agents initiating payments, executing compliance decisions, and managing portfolios are doing so through credentials and authorizations that were not designed to expire, to be scoped to a specific task, or to be revoked when the task completes.
MetaComp, a Singapore-based licensed financial institution, published the StableX Know Your Agent Framework at that event, describing it as the first governance architecture for AI agents from a regulated financial institution. The framework addresses four questions that the financial services industry has not answered systematically: who is this agent, what is it permitted to do, how do we know it is behaving as intended, and who is accountable when it does not.
Singapore’s Infocomm Media Development Authority had already laid regulatory groundwork. In January 2026, IMDA published the world’s first cross-sector Model AI Governance Framework for Agentic AI. MetaComp’s KYA framework was built in explicit alignment with that IMDA framework, with direct engagement with IMDA during the drafting process. Singapore’s Budget 2026 went further, designating finance as one of four national AI mission sectors and establishing a National AI Council chaired by Prime Minister Lawrence Wong with a mandate to create regulatory sandboxes for AI innovation in financial services.
The Governance Gap the KYA Framework Is Designed to Close
McKinsey’s 2026 State of AI Trust survey, cited in the KYA framework documentation, found that fewer than one in three organizations have adequate governance and controls in place to oversee AI agents, even as those agents are already initiating payments, executing compliance decisions, and managing portfolios at scale. PwC’s Global AI Performance Study 2026 found that while Singapore businesses lead globally on AI adoption, only 47 percent have a documented responsible AI framework, compared to 63 percent among global AI leaders.
MetaComp’s own data from more than 7,000 real-world transactions across hybrid fiat and blockchain environments adds a specific technical measurement to the governance problem. In those transactions, relying on a single screening tool left up to 25 percent of high-risk exposures undetected. In an environment where AI agents initiate transactions autonomously, that 25 percent false-clean rate does not represent a compliance backlog for human review. It represents transactions the agent processed and completed without human intervention, each carrying a compliance gap that existing controls missed.
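The mitigation that measurement implies, running several screening vendors in parallel and acting on the union of their findings, can be sketched in a few lines. The vendor functions and risk-flag names below are purely illustrative, not MetaComp's actual screening stack:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical vendor screeners: each returns the set of risk flags it detects.
# The names and flag taxonomy are illustrative, not MetaComp's actual vendors.
def vendor_a(tx: dict) -> set:
    return {"sanctions_hit"} if tx.get("counterparty") == "0xBAD" else set()

def vendor_b(tx: dict) -> set:
    return {"mixer_exposure"} if tx.get("hops_from_mixer", 99) < 3 else set()

def screen(tx: dict, vendors) -> set:
    """Run every screener in parallel and union their findings, so a risk
    missed by one tool can still be caught by another."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda v: v(tx), vendors)
    return set().union(*results)

# A transaction that one vendor clears but another flags:
tx = {"counterparty": "0xCLEAN", "hops_from_mixer": 2}
assert vendor_a(tx) == set()                                   # single tool: looks clean
assert screen(tx, [vendor_a, vendor_b]) == {"mixer_exposure"}  # union catches the risk
```

The point of the sketch is the aggregation logic, not the screeners themselves: a transaction that any one tool would pass as clean still surfaces if any other tool in the set flags it.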
The KYA framework is designed to make AI agent deployments in financial services governable by treating every agent as a regulated entity with a defined lifecycle, not as an automation script that happens to use a large language model. The framing mirrors how financial institutions already treat human employees and system accounts: defined identity, scoped authorization, continuous monitoring, and clear accountability chains. KYA extends that governance model to AI agents, with architecture specific to the ways agent behavior differs from both human behavior and traditional deterministic automation.
The Four Pillars: What KYA Actually Specifies
The KYA framework is organized around four pillars, each addressing a distinct dimension of agent governance that existing financial services control frameworks do not adequately cover.
Agent Identity and Registration requires that every AI agent operating within the KYA architecture be assigned a verified identity linked to a real-world individual or legal entity through a tamper-resistant registry. The registry assigns each agent a persistent identity that survives across sessions, connects it to the institution or individual accountable for its actions, and creates the audit trail necessary for regulatory accountability. The identity requirement is structurally different from service account identity in traditional IT governance. KYA agent identity is tied to a specific AI agent with a specific purpose and specific risk profile. An agent that processes cross-border payments has a different identity and risk profile than an agent that reads market data and generates reports, even if both run on the same underlying model.
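A minimal sketch of what such a registry entry might look like, assuming a hash-chained append-only log as the tamper-resistance mechanism; the field names and chaining scheme are assumptions for illustration, not the framework's specified schema:

```python
from dataclasses import dataclass
import hashlib, json

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str            # persistent identity that survives across sessions
    accountable_party: str   # real-world individual or legal entity
    purpose: str             # e.g. "cross-border-payments" vs "market-data-reports"
    risk_profile: str        # differs by purpose, even on the same underlying model
    registered_at: str       # ISO 8601 timestamp

class AgentRegistry:
    """Append-only registry in which each entry is hash-chained to the previous
    one, a simple stand-in for the framework's tamper-resistant registry."""
    def __init__(self):
        self._entries = []

    def register(self, identity: AgentIdentity) -> str:
        prev_hash = self._entries[-1][0] if self._entries else "genesis"
        record = json.dumps(identity.__dict__, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + record).encode()).hexdigest()
        self._entries.append((entry_hash, identity))
        return entry_hash  # audit-trail handle linking the agent to its accountable party
```

Chaining each entry's hash over its predecessor means a retroactive edit anywhere in the log invalidates every later hash, which is the property a tamper-resistant audit trail needs.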
Authority and Permission Control defines the actions each agent is permitted to take, the thresholds beyond which human escalation is required, and the mechanisms for scoping and revoking agent permissions dynamically. This is the mechanism that addresses the expiry problem Tin Pei Ling described. Under KYA, agent permissions are not simply granted at deployment and maintained indefinitely. They are scoped to the agent’s defined purpose and subject to automatic expiry or suspension when the agent completes a task, when risk conditions change, or when anomalous behavior is detected. The human escalation threshold is the practical governance control that makes autonomous agent operations compatible with financial regulation. A payment exceeding the threshold automatically routes to a human approver before execution.
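The scoping-and-expiry mechanism can be illustrated with a hypothetical grant object. The field names, threshold semantics, and return strings are assumptions for the sketch, not the framework's specification:

```python
from datetime import datetime, timedelta, timezone

class AgentGrant:
    """Task-scoped permission that expires automatically and routes
    above-threshold actions to a human approver (illustrative semantics)."""
    def __init__(self, actions: set, escalation_threshold: float, ttl: timedelta):
        self.actions = actions
        self.escalation_threshold = escalation_threshold
        self.expires_at = datetime.now(timezone.utc) + ttl  # not granted indefinitely
        self.revoked = False

    def authorize(self, action: str, amount: float) -> str:
        if self.revoked or datetime.now(timezone.utc) >= self.expires_at:
            return "denied: grant expired or revoked"
        if action not in self.actions:
            return "denied: out of scope"
        if amount > self.escalation_threshold:
            return "escalate: human approval required"  # routes to a human before execution
        return "approved"

    def revoke(self) -> None:
        """Called on task completion, changed risk conditions, or anomalous behavior."""
        self.revoked = True
```

Under these assumptions, a payments grant with a 10,000 threshold approves a 500 payment, escalates a 50,000 payment to a human approver, and denies everything once the task completes and the grant is revoked or its TTL lapses.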
Behaviour Monitoring and Risk Intelligence provides continuous observation of agent actions in real time, with dynamic risk profiling that updates as the agent’s behavioral pattern accumulates. An agent that begins exhibiting patterns inconsistent with its defined purpose (calling APIs outside its normal scope, processing transactions with characteristics outside its historical pattern, or accessing systems not required for its stated function) triggers a behavioral alert and potential automatic suspension pending review. The authenticated record-keeping requirement in this pillar creates the technical foundation for regulatory accountability: every action an agent takes must be logged in a format that supports audit and forensic investigation. This is the technical substrate that makes regulatory examination of AI agent behavior possible, a prerequisite for regulators to approve the deployment of AI agents in regulated financial functions.
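A toy version of that monitoring combines a scope check with a simple statistical baseline. This is a deliberately crude stand-in for the framework's dynamic risk profiling; the API names, window size, and z-score threshold are all assumptions:

```python
import statistics

class BehaviorMonitor:
    """Flags API calls outside the agent's declared scope and transaction
    amounts that deviate sharply from its accumulated baseline."""
    def __init__(self, allowed_apis: set, z_threshold: float = 3.0):
        self.allowed_apis = allowed_apis
        self.z_threshold = z_threshold
        self.history = []  # amounts observed so far; the agent's behavioral pattern

    def observe(self, api: str, amount: float) -> list:
        alerts = []
        if api not in self.allowed_apis:
            alerts.append(f"out-of-scope API call: {api}")
        if len(self.history) >= 10:  # require a baseline before profiling amounts
            mu = statistics.mean(self.history)
            sd = statistics.pstdev(self.history) or 1.0  # avoid division by zero
            if abs(amount - mu) / sd > self.z_threshold:
                alerts.append("amount outside historical pattern")
        self.history.append(amount)  # the profile updates with every action
        return alerts  # non-empty alerts would trigger review or suspension
```

An agent that has processed fifteen ~100-unit payments and suddenly submits a 1,000,000-unit payment, or calls an API outside its allowed set, returns a non-empty alert list from `observe`.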
Ecosystem and Interaction Governance addresses the multi-agent scenario that the other pillars do not fully cover: what happens when AI agents communicate with other AI agents, either within the institution or across institutional boundaries. MetaComp’s framework extends the FATF Travel Rule principles, which currently require financial institutions to exchange verified sender and recipient identity information in cross-border transactions, to agent-initiated and agent-to-agent transactions. An agent that initiates a payment to another institution must transmit verified identity information about itself and the institution it represents. An agent receiving that payment must verify the sender agent’s identity before processing. This is the governance layer that connects to the A2A protocol’s Signed Agent Cards at the technical level. A2A’s cryptographic card verification establishes that an agent card was issued by the domain it claims to represent. KYA’s FATF Travel Rule extension establishes what verified information must be exchanged during agent-to-agent financial interactions.
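A sketch of what a Travel Rule-style identity payload for an agent-initiated payment might contain, using an HMAC over a shared key as a stand-in for the real inter-institutional verification infrastructure. All field names are illustrative, not the framework's wire format:

```python
import hashlib, hmac

SHARED_KEY = b"demo-key"  # stand-in for real cross-institution key material / PKI

def travel_rule_payload(agent_id: str, institution: str,
                        amount: float, currency: str) -> dict:
    """Verified identity information a sending agent transmits with a payment."""
    body = f"{agent_id}|{institution}|{amount}|{currency}"
    return {
        "sender_agent_id": agent_id,          # who is this agent
        "sender_institution": institution,    # whom does it represent
        "amount": amount,
        "currency": currency,
        "signature": hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest(),
    }

def verify_sender(payload: dict) -> bool:
    """Receiving agent verifies the sender agent's identity before processing."""
    body = (f"{payload['sender_agent_id']}|{payload['sender_institution']}"
            f"|{payload['amount']}|{payload['currency']}")
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, payload["signature"])
```

The receiving side refuses to process unless the identity fields verify against the signature, which is the Travel Rule discipline applied to agent-to-agent traffic; in practice the signature role would be played by something like A2A's cryptographic card verification rather than a shared secret.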
The MCP Integration: Why This Connects to Claude and Claude Code
MetaComp’s framework is not purely regulatory architecture. It ships with a working implementation through the AgentX Skill ecosystem, which makes MetaComp’s regulated financial infrastructure accessible to AI agents through MCP. The first deployed Skill, VisionX Know Your Transaction, wraps MetaComp’s AML/CFT compliance engine into a single agent-callable tool that runs more than four blockchain analytics vendors in parallel. An AI agent using Claude, Claude Code, or any MCP-compatible platform can call this Skill to run transaction screening against MetaComp’s compliance infrastructure in a single tool invocation.
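At the wire level, an MCP tool invocation is a JSON-RPC 2.0 `tools/call` request. The sketch below builds one for a hypothetical Skill name and argument schema; MetaComp's published tool interface may differ:

```python
import itertools
import json

_request_ids = itertools.count(1)  # JSON-RPC requires a unique id per request

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request, the JSON-RPC message an MCP-compatible
    agent sends to invoke a server-side tool such as a Skill."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_request_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical Skill name and argument fields, assumed for illustration only:
request = mcp_tool_call(
    "visionx_know_your_transaction",
    {"chain": "ethereum", "tx_hash": "0xabc...", "direction": "inbound"},
)
```

From the agent's side, the multi-vendor screening pipeline behind the Skill is invisible: it sees one tool, one request, one screening result.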
This is the practical demonstration that KYA is governance architecture for real deployed systems, not a regulatory proposal. MetaComp built the compliance infrastructure, validated it against more than 7,000 real transactions, deployed it as an MCP-accessible Skill, and wrote the governance framework that specifies how agents using that Skill are themselves governed. The framework and the implementation are the same project. The AgentX ecosystem will expand to cross-border payments, treasury, and wealth management Skills by late Q2 2026, with the KYA governance architecture applied consistently across all agent-initiated financial operations on the StableX Network.
Why Singapore Built This First
Singapore’s regulatory environment has produced the first serious institutional governance framework for AI agents in financial services for reasons that are not coincidental. The Monetary Authority of Singapore operates through principles-based regulation and active engagement with financial innovation rather than prescriptive rule-making. MAS’s sandbox approach allows licensed institutions to test new products in a controlled environment with regulatory oversight before general deployment, creating a pathway for governance frameworks like KYA to be developed and tested with regulatory input rather than deployed in anticipation of regulation that might never align with what was built.
The IMDA Model AI Governance Framework for Agentic AI, published in January 2026, is the regulatory foundation that makes institutional frameworks like KYA possible. IMDA consulted broadly across industry and government in developing the framework and established the principles that KYA operationalizes for the financial services context: human accountability, technical controls, adaptive governance, and risk-proportionate oversight. MetaComp’s direct engagement with IMDA during the KYA drafting process means the framework was built with regulatory input rather than against a regulatory void.
The global dimension matters. MetaComp describes KYA as authored in Singapore, designed for the world. The FATF Travel Rule extension in the framework’s ecosystem pillar is designed to integrate with anti-money laundering and counter-terrorism financing requirements that apply across jurisdictions. A financial institution in any jurisdiction that deploys AI agents initiating cross-border payments faces the same accountability question that KYA addresses: who is this agent, who is accountable for its actions, and what information must travel with a transaction it initiates. The Singapore regulatory environment produced the first answer. It will not be the only one.
What KYA Does Not Solve
MetaComp is explicit that KYA is a first draft, not a final answer. We are not presenting this as a finished answer, Tin said at Money20/20 Asia. We are asking financial institutions, regulators, and technology partners to adopt it, challenge it, and build on it with us. This honesty about limitations is worth taking seriously.
The framework covers governance architecture for agents operating within defined financial services functions. It does not address the adversarial security scenarios highlighted by the MCP-SafetyBench research, which found that no current LLM simultaneously achieves high defense success and high task success. An agent operating under KYA’s governance constraints is still susceptible to prompt injection attacks, tool poisoning, and the multi-turn attack sequences that MCP-SafetyBench documented. KYA’s behavior monitoring pillar is designed to detect anomalous actions after they occur. It does not prevent the model-level vulnerabilities that allow malicious inputs to redirect an agent’s behavior in the first place.
The framework also does not yet cover the cross-institutional governance scenario where agents from different institutions operating under different national regulatory frameworks interact. The FATF Travel Rule extension provides a starting point, but the verification infrastructure for confirming that a counterparty agent’s governance standards meet the receiving institution’s requirements does not yet exist. Building that cross-border agent identity verification infrastructure is a multi-year regulatory coordination project. KYA names the problem. The solution will require the same kind of multilateral regulatory engagement that the original FATF Travel Rule required to implement across jurisdictions.
The publication of KYA alongside working MCP-accessible Skills, under a governance framework built with IMDA input, backed by $35 million in institutional funding, and deployed against validated real transaction data, is a meaningful advance. The survey data showing that fewer than one in three organizations have adequate agent oversight controls describes the problem KYA is addressing. A2A v1.0’s Signed Agent Cards give the protocol-layer identity verification that KYA’s governance layer builds on. Together, they represent the beginning of a coherent architecture for AI agents that can be deployed in regulated environments without the accountability gap that currently exists. Singapore built the first institutional answer. The financial services sector’s response to it will determine how quickly governance architecture scales to match the pace of agent deployment.