Gary Gensler, chair of the United States Securities and Exchange Commission (SEC), has issued a stark warning about the urgent need for financial regulators to address the risks associated with the concentration of power in artificial intelligence (AI) platforms. Gensler’s apprehension stems from his belief that, without prompt intervention, the widespread adoption of AI could trigger a financial crisis within the next decade.
In recent statements reported by Infostride News, Gensler emphasized the critical importance of shaping AI regulations to mitigate these risks. He acknowledged that crafting suitable regulations for AI represents a formidable challenge for U.S. regulators because of the multifaceted nature of the threat. The risks posed by AI are not confined to a particular sector but cut across financial markets. Furthermore, these risks are inherently linked to models and technologies developed by tech companies that often operate outside the purview of traditional Wall Street regulatory bodies.
Gensler voiced his concerns, stating, “There is a lot of concentration of power in these few AI platforms. If one of these platforms were to fail or be hacked, it could have a systemic impact on the financial system.” This fear underscores the potential vulnerabilities in the financial ecosystem brought about by the growing reliance on AI technologies.

The integration of AI in the financial sector is gaining momentum, with applications ranging from fraud detection and risk assessment to investment management. While these applications promise greater efficiency and accuracy, they also pose complex challenges for oversight and regulation.
Gensler acknowledged the difficulty of addressing these challenges, noting that traditional financial regulation focuses predominantly on individual institutions, such as banks, money market funds, and brokers. AI’s influence, however, extends beyond these individual entities, posing a unique “horizontal” challenge: many institutions may rely on the same underlying AI model or data aggregator, which could exacerbate systemic risks.
The SEC had previously taken a step towards addressing AI-related concerns by proposing a rule that targeted potential conflicts of interest in predictive data analytics. However, this rule primarily concentrated on individual models deployed by broker-dealers and investment advisers, leaving the broader issue of AI’s horizontal impact largely unaddressed.
Gensler pointed out the limitations of the current regulatory framework, even if updated, by saying, “it still doesn’t get to this horizontal issue, if everybody’s relying on a base model, and the base model is sitting not at the broker-dealer but at one of the big tech companies.” He also raised the question of how many cloud service providers, which often offer AI as a service, operate in the United States. These concerns highlight the need for a comprehensive and collaborative regulatory approach.
Gensler’s proactive approach to addressing AI-related risks is not limited to domestic initiatives. He revealed that he has raised this issue at the Financial Stability Board and the Financial Stability Oversight Council, recognizing it as a cross-regulatory challenge that extends beyond U.S. borders.
The challenge of regulating AI is not unique to the United States. Regulators worldwide are grappling with the issue of how to effectively oversee and manage AI technologies. Tech companies and their AI models often elude traditional regulatory structures, making it imperative to develop innovative solutions to safeguard financial stability.
In contrast to the United States, the European Union (EU) has taken swift and assertive action by drafting stringent measures to govern the use of AI. This landmark legislation is anticipated to be fully approved by the end of the year, setting a precedent for AI regulation. In the U.S., the approach involves a comprehensive review of AI technology to determine which aspects require new regulation and which can be governed by existing laws.
Gensler’s primary concern is the potential for herd behavior when many firms base their decisions on the same data models. Such behavior can undermine financial stability and potentially unleash the next financial crisis. He foresees a scenario in which, in the aftermath of a financial crisis, reports identify a single data aggregator or model on which the financial industry heavily relied, perhaps in the mortgage market or a specific equity sector.
Gensler argued that the “economics of networks” inherent in AI makes such a crisis nearly unavoidable. He cautioned that it could materialize as early as the late 2020s or early 2030s, emphasizing the pressing need for regulatory action to avert such a catastrophic outcome.
In conclusion, Gary Gensler’s warning serves as a clarion call for regulators, both in the United States and globally, to address the systemic risks posed by the concentration of power in AI platforms. The financial industry’s increasing reliance on AI technology demands a proactive and collaborative approach to regulation, as the potential consequences of inaction are severe. It is imperative for regulators to navigate the complex landscape of AI and develop comprehensive frameworks that safeguard financial stability in an era of unprecedented technological advancement.