For better or worse, next-generation agentic AI systems are transforming financial markets, underscoring the need for strong governance frameworks. On one hand, they promise to turn data-rich but insight-poor domains into engines of financial inclusion and stability. On the other, debate continues over how they might change hiring and regulatory supervision.
AI opens a ‘black box’ on emerging market debt
In late September, an exclusive report revealed that UK-based AI firm Galytix is building a framework for ingesting and analyzing data from the Global Emerging Markets Database (GEMs). Investors have traditionally been wary of lending to emerging markets, due in part to the difficulty of extracting actionable insights from the data available on them. GEMs, created in 2009 as a collaboration between international development banks, currently holds decades of detailed risk information on sovereign and corporate borrowers in developing countries.
The new platform aims to turn that wealth of data into accurate, actionable insights by highlighting real-time risk indicators and revealing where perceived risk diverges from reality. The potential impact on financial inclusion in developing countries is significant. Development banks typically price loans high because investors overestimate risk. If AI can demonstrate that certain borrowers present less credit risk than previously assumed, it could attract private capital and lower borrowing costs. At the same time, the platform could identify early signs of distress, allowing lenders to intervene before crises become systemic.
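To make the idea concrete, here is a minimal sketch of how perceived risk might be compared against realized risk. Every figure, the recovery-rate assumption, and the divergence threshold are hypothetical, and the report does not describe Galytix's actual methodology; this is purely illustrative.

```python
# Hypothetical illustration: flag borrowers whose market-implied default
# probability diverges from historical default rates (GEMs-style data).
# All figures are invented; this is not Galytix's methodology.

RECOVERY_RATE = 0.4  # assumed recovery on default

borrowers = [
    # (name, credit spread over benchmark, historical annual default rate)
    ("Sovereign A", 0.045, 0.012),
    ("Corporate B", 0.020, 0.025),
    ("Sovereign C", 0.080, 0.015),
]

for name, spread, hist_pd in borrowers:
    # Rough "credit triangle" approximation: spread ~= PD * (1 - recovery)
    implied_pd = spread / (1 - RECOVERY_RATE)
    gap = implied_pd - hist_pd
    if gap > 0.02:  # perceived risk well above realized risk
        print(f"{name}: market prices {implied_pd:.1%} default risk vs "
              f"{hist_pd:.1%} historical -- possible over-pricing of risk")
```

Borrowers flagged by a screen like this would be candidates for cheaper financing, since the market is charging for more default risk than the historical record supports.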
Financial stability risk and herd behavior in AI agents
While AI promises faster insight into emerging markets, there is a growing need to supervise how these systems work and to hold people accountable for the decisions they make, especially when those decisions are informed by AI. Concerns persist over whether AI will replace, rather than augment, human labor, and whether financial decision-makers will come to rely too heavily on AI-powered decision-making.
In September, the U.S. Federal Reserve released a paper exploring how AI agents behave in simulated markets, replicating trading scenarios where herd behavior has long been commonplace. The results were surprising: AI agents relied more on private information than on market trends, leading to fewer price bubbles under most conditions. However, when the AI agents were guided to maximize profits with no consideration for social welfare, mispricing and increased volatility emerged.
“Our results show that AI agents make more rational decisions than humans, relying predominantly on private information over market trends,” the Fed paper noted. “However, exploring variations in the experimental settings reveals that AI agents can be induced to herd optimally when explicitly guided to make profit-maximizing decisions.”
Herding may improve market discipline, but “this behavior still carries potential implications for financial stability,” the report said.
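The dynamic echoes the classic information-cascade setup from the herding literature: agents with noisy private signals who also observe earlier trades can rationally abandon their own information once the crowd leans far enough one way. The toy simulation below is illustrative only, uses an assumed majority-following rule, and is not the Fed paper's actual experimental design.

```python
import random

# Toy information-cascade simulation in the spirit of the herding
# experiments described above (illustrative sketch, not the Fed's setup).
# Each agent gets a noisy private signal about the true asset value and
# sees all previous agents' buy (1) / sell (0) choices.

random.seed(0)
TRUE_VALUE = 1          # 1 = asset is good, 0 = bad
SIGNAL_ACCURACY = 0.7   # probability a private signal is correct
N_AGENTS = 20

def private_signal():
    # The signal matches the truth with probability SIGNAL_ACCURACY
    return TRUE_VALUE if random.random() < SIGNAL_ACCURACY else 1 - TRUE_VALUE

actions = []
for _ in range(N_AGENTS):
    signal = private_signal()
    buys, sells = actions.count(1), actions.count(0)
    # Trend-following (profit-maximizing) rule: once the observed majority
    # is lopsided enough, ignore the private signal and follow the herd.
    if abs(buys - sells) >= 2:
        action = 1 if buys > sells else 0
    else:
        action = signal  # otherwise act on private information
    actions.append(action)

print("actions:", actions)
print(f"fraction buying: {sum(actions) / N_AGENTS:.0%}")
```

In this stylized setup, a run of early trades in one direction locks later agents into the same choice, which is benign when the early signals are right and destabilizing when they are wrong.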
AI’s impact on financial stability depends heavily on how models are trained and the objectives they are given. Strong governance frameworks will be critical to ensuring that AI benefits financial markets and their constituents, rather than amplifying instability and exclusion.