The Chair of the US Securities and Exchange Commission (SEC), Gary Gensler, has issued a stark warning about the urgent need for financial regulators to address the risks posed by the growing concentration of power in artificial intelligence (AI) platforms. He warned that without swift intervention, it is "nearly unavoidable" that AI will trigger a financial crisis within the next decade.
Gensler's concerns center on the challenges AI poses to the stability of the financial system, particularly the role of powerful AI platforms. According to a report by InfoStride News, he described shaping AI regulation as a formidable challenge for US regulators, because the potential risks extend across financial markets and originate from models developed by technology companies that often operate outside the purview of traditional Wall Street watchdogs.
The core issue at hand, as outlined by Gensler, is the significant concentration of power within a limited number of AI platforms. He highlighted the systemic risks associated with these platforms, noting that if any of them were to fail or fall victim to a cyberattack, it could have a ripple effect with severe consequences for the entire financial system.

Gensler’s warnings come at a time when AI is becoming increasingly integrated into the financial sector. AI is being employed for a range of critical tasks in finance, including fraud detection, risk assessment, and investment management. While AI offers substantial benefits, it also introduces complexities that need to be carefully managed to safeguard the financial system’s stability.
The SEC previously proposed a rule in July aimed at addressing potential conflicts of interest in predictive data analytics. However, that rule focused mainly on individual AI models used by broker-dealers and investment advisers. Gensler pointed out that even updated versions of these measures would not address the broader challenge posed by AI models that underpin many financial institutions and are often hosted by major technology companies rather than by broker-dealers or investment firms.
Furthermore, Gensler asked how many cloud providers in the country actually offer AI as a service. This is a crucial consideration because many financial institutions rely on external cloud providers for their AI capabilities. The interconnected nature of AI systems and data sharing further complicates oversight, making it a cross-regulatory challenge.
In addressing the global landscape of AI regulation, Gensler acknowledged that regulators worldwide are grappling with the unique challenges posed by AI, primarily because tech companies and their AI models do not fit neatly into the purview of existing regulatory frameworks. The European Union (EU) has taken swift action by drafting comprehensive legislation to govern the use of AI, with the law expected to be fully approved by the end of the year. In contrast, the United States is currently in the process of reviewing AI technology to identify areas that require new regulation and areas subject to existing laws.
Gensler’s paramount concern is that the reliance on the same data models by multiple entities within the financial sector may lead to herd behavior. This behavior could undermine financial stability and potentially trigger the next financial crisis. He envisages a future financial crisis where post-event analyses would reveal that a single data aggregator or model was heavily relied upon, be it in the mortgage market or a specific sector of the equity market.
The profound impact of AI on financial markets, driven by what Gensler refers to as the “economics of networks,” makes the occurrence of such a crisis “nearly unavoidable.” He predicts that this crisis could manifest as early as the late 2020s or early 2030s.
In summary, Gensler's warning is a pointed reminder that regulators must proactively address the challenges posed by AI's growing influence in the financial sector. The potential for AI-related systemic risks underscores the urgency of regulatory measures that balance the benefits of AI against the need to safeguard the financial system from future crises.