AI in Finance: Key Insights from the SEC’s Landmark 2025 Roundtable

In late March 2025, the U.S. Securities and Exchange Commission (SEC) hosted a landmark roundtable on artificial intelligence (AI) in financial services. Held in Washington, D.C., the event brought together regulators, technologists, market participants, and legal experts to explore the evolving landscape of AI – from transformative innovation to systemic risk.
This wasn’t about rulemaking – at least not yet. It was about listening, signalling, and setting the stage for governance. But for capital markets firms, the message was clear – AI adoption is accelerating faster than regulatory frameworks can adapt, and waiting for clarity could leave firms and their clients exposed.
Here are five actionable insights from the roundtable for firms to consider in their AI strategy.
AI Is Already Embedded – But Still Poorly Defined
A recurring theme in the opening remarks by Acting Chair Mark Uyeda and echoed throughout the first panel was the challenge of regulating a technology that remains ill-defined. One speaker emphasized that regulation should not be based on “artificial fears,” and Uyeda warned against overly prescriptive approaches that stifle innovation.
Panellists expressed concerns over the EU AI Act’s broad and arguably unworkable definition of AI. One noted that the definition was so expansive it could “include human thought,” while another likened the current AI definitional debates to earlier regulatory struggles over high-frequency trading, suggesting that an exact definition may be unnecessary and potentially stifling.
Instead, speakers recommended classifying AI based on functionality and risk. Several urged firms to build internal inventories that categorize AI tools by use case, risk profile, and deployment context, enabling oversight and control without waiting for regulators to provide a top-down taxonomy.
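To make this concrete, here is a minimal sketch of what one entry in such an inventory might look like, written in Python; the schema, risk tiers, and example tools are illustrative assumptions rather than anything proposed at the roundtable.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting aids
    MEDIUM = "medium"  # e.g., research summarization feeding human decisions
    HIGH = "high"      # e.g., client-facing or trading-adjacent tools

@dataclass
class AIToolRecord:
    """One entry in a firm-wide AI inventory (hypothetical schema)."""
    name: str
    use_case: str                # what the tool actually does
    deployment_context: str      # internal vs. external, which desk or team
    risk_tier: RiskTier
    vendor: str | None = None    # None for in-house builds
    human_in_loop: bool = True   # is a person reviewing outputs?
    owners: list[str] = field(default_factory=list)  # accountable functions

inventory = [
    AIToolRecord(
        name="doc-summarizer",
        use_case="Summarize research notes for analysts",
        deployment_context="internal / research desk",
        risk_tier=RiskTier.MEDIUM,
        owners=["research-ops", "model-risk"],
    ),
    AIToolRecord(
        name="client-chat-assistant",
        use_case="Draft responses to retail client queries",
        deployment_context="external / client service",
        risk_tier=RiskTier.HIGH,
        vendor="third-party",
        human_in_loop=False,
        owners=["client-service", "compliance"],
    ),
]

# A simple oversight query: which high-risk tools lack human review?
flagged = [t.name for t in inventory
           if t.risk_tier is RiskTier.HIGH and not t.human_in_loop]
print(flagged)  # ['client-chat-assistant']
```

Even a lightweight structure like this lets compliance teams answer the kinds of questions regulators may eventually ask, without waiting for a top-down taxonomy.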
Governance Must Be Proactive, Not Performative
Governance emerged as a central theme throughout the third panel, with senior executives detailing the structures they’ve implemented to manage AI at enterprise scale.
One speaker described their firm’s AI governance strategy, including transparency about AI-generated outputs, strong second-line risk oversight, and a shift from traditional model review approaches toward more flexible, risk-based methods suited to generative AI. They emphasized that while traditional model validation methods remain in use, they are often insufficient for newer AI technologies.
Another panellist shared how their firm established a cross-functional AI council and risk assessment frameworks to guide AI experimentation across departments. A different speaker emphasized the need to calibrate oversight based on the risk posed by each use case – internal vs. external, low-risk vs. high-stakes.
All agreed that governance is not solely a technology issue. Rather, effective oversight must integrate legal, compliance, risk, and business leadership from the start.
Agentic AI Is a Game-Changer – and a Red Flag
Agentic AI – systems capable of autonomous decision-making and spawning sub-agents – drew heightened attention during the second and fourth panels. One panellist described agentic AI as a paradigm shift, warning that systems which interact across APIs and external environments without human intervention create “a bird of a different colour” when it comes to traceability and responsibility.
Another speaker echoed this, stating that agentic AI could bring transformational efficiencies in compliance and operations, but also highlighted the difficulty of building sufficient auditability into such systems. Other panellists discussed use cases where agentic systems are being explored for back-office functions like data mapping and settlement optimization, while stressing the continued necessity of human-in-the-loop controls.
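To illustrate the auditability concern, here is a minimal sketch, in Python, of how an agent’s proposed actions might be logged and gated behind human approval before execution; the action types, approval policy, and file-based log are invented for illustration, not drawn from any panellist’s system.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class AgentAction:
    """A proposed action from an autonomous agent (hypothetical)."""
    agent_id: str
    action_type: str  # e.g., "map_field", "adjust_settlement_batch"
    payload: dict
    rationale: str    # the agent's stated reason, kept for the audit trail

def append_audit_log(entry: dict, path: str = "agent_audit.jsonl") -> None:
    """Append one audit record per decision, approved or not."""
    entry["timestamp"] = time.time()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def execute_with_oversight(
    action: AgentAction,
    needs_approval: Callable[[AgentAction], bool],
    approve: Callable[[AgentAction], bool],
    run: Callable[[AgentAction], None],
) -> None:
    """Log every agent action; pause high-impact ones for a human."""
    record = asdict(action)
    if needs_approval(action):
        record["approved_by_human"] = approve(action)  # human-in-the-loop gate
        append_audit_log(record)
        if not record["approved_by_human"]:
            return  # blocked actions are logged but never executed
    else:
        record["approved_by_human"] = None  # auto-allowed, still logged
        append_audit_log(record)
    run(action)

# Example policy: anything touching settlement requires human sign-off
needs_human = lambda a: "settlement" in a.action_type
```

The design point worth noting is that the log is written before execution and records refusals as well as approvals – exactly the kind of traceability speakers suggested is hard to retrofit once agents are already spawning sub-agents.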
The consensus – firms must move quickly to assess where agentic AI may already be in use, even inadvertently, and implement clear oversight, transparency, and vendor disclosure obligations before these systems proliferate further.
AI-Fuelled Fraud Is Rising – Fast
The roundtable’s second panel provided a sobering look at the role of AI in escalating fraud. Speakers explained how fraudsters are now combining deepfake tools with social engineering tactics to bypass authentication measures and manipulate retail clients at scale.
Panellists emphasized that AI is amplifying long-standing attack vectors by increasing speed, personalization, and realism. One described how fraud schemes are now exhibiting “hyper-personalization” and can “dynamically change” based on their targets, with generative AI enabling more realistic and adaptable scams that evolve to bypass detection.
The discussion underscored the critical need for capital markets firms to shift security postures outward – extending protections to client interfaces, onboarding channels, and investor communications. AI must be deployed not just to detect fraud, but to simulate and anticipate how it will be weaponized.
The Skills Gap and Shadow AI Are Real
Despite surging interest, enterprise-grade adoption of AI remains uneven, with many firms constrained by limited expertise and infrastructure. One speaker noted that firms often struggle to quantify the return on investment when factoring in compute costs, governance overhead, and regulatory uncertainty.
Simultaneously, another panellist warned of a growing “shadow AI” phenomenon: younger employees experimenting with generative tools outside official channels. Rather than clamp down, they advocated for structured enablement – internal sandboxes, training programs in prompt engineering, and safe experimentation environments to harness that enthusiasm productively.
Other firms reported similar strategies, with broad upskilling initiatives and cross-department AI working groups aimed at integrating new talent and reducing organizational friction.
The Clock Is Ticking
The SEC’s roundtable was not a warning – it was a weather report. The regulatory skies are shifting. As Acting Chair Mark Uyeda noted, “AI may create gaps in our regulatory structure,” but the goal is to engage, not entrench, and avoid prescriptive rules that could become obsolete.
Still, the underlying message was unmistakable – firms that move forward without solid governance, risk classification, and oversight mechanisms are gambling with more than just technology – they’re gambling with trust, transparency, and regulatory tolerance.
As one panellist emphasized, in the absence of clear regulatory guidelines, firms must proactively establish their own principles and controls: transparency about where AI is used, robust oversight, and rigorous evaluation of AI tools before deployment. Waiting for prescriptive regulations may not be a viable strategy in an industry defined by rapid innovation.
In summary, the SEC’s roundtable underscored the urgency for financial institutions to take the initiative in governing their AI applications. Proactive measures today safeguard against pitfalls tomorrow and keep technological advancement aligned with the trust and integrity that underpin the financial industry.