The AI Elephant in the Room: Another Future Financial Mess?

SEC Chair Gary Gensler warns that the next crisis could be less than a decade away.

Artificial intelligence has many faces and aspects, but it is neither the ultimate evil nor the savior of anything. No single subsector of the technology is the clear choice to do everything people imagine, or necessarily anything at all.

While keeping a balanced view, neither enthralled acolyte nor determined vampire hunter, it's important to keep an eye out for the things that could go wrong: not because of the technology's innate flaws, but because of the truly dumb things people could end up doing with it.

That’s the gist of what SEC Chair Gary Gensler told the Financial Times in a recent interview. And it makes perfect sense if you consider some of the crazy things that have been done in the past. Because, in the end, AI is a tool people use.

“It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do,” Gensler said. “And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”

Or all relying on the same basic business strategy. Go back to 2008 and 2009, but set the technology aside for a moment. Shortly after the immense crash, a former insurance CEO and veteran of multiple board positions described how many financial executives assumed they could move into other areas, such as banks taking on the delicate balancing required to manage complex derivatives, without ever having experienced how the practice worked.

Look back historically and there is a definite pattern of a financial catastrophe every 10 to 15 years in the U.S. It happens for a variety of reasons: a turnover of management to newer people who never experienced multiple implosions firsthand; the desire to game a system and make massive profits; the complexity of how systems work; and the myriad unknowns that constantly swarm like a flock of mutant black swans.

Gensler's worry is overreliance on a single software approach: many companies could all depend on the same underlying model or data aggregator, and the day it goes wrong, it goes wrong for everyone at once.

But AI complicates the picture even more. With some programs, even the developers are never certain how the software makes the choices and associations it does. How do you debug a program when you can't necessarily see what it actually did, or how it did it?

That multiplies the problem of executives making bets on processes they don't understand. If errors and tragic mistakes propagating at the speed of people are dangerous, imagine machines making bad things happen faster than anyone can respond.

Anything can become a problem, of course, if it gets out of control. But if a truck can run away downhill at stunning speed, you want the designers to have engineered in a walloping set of brakes.