Artificial Intelligence Requires New Data Center Infrastructure

AI's digital transformation will need a quantum leap in processing capacity.

The data center infrastructure build-out in recent years to support the explosion of cloud computing, video streaming and 5G networks will not be sufficient to support the next-level digital transformation that has begun in earnest with the widespread adoption of artificial intelligence.

In fact, AI will require a different cloud-computing framework for its digital infrastructure—one that will redefine current data center networks, in terms of where certain data center clusters are located and what specific functionality these facilities have.

ChatGPT, the AI verbal synthesizer that now has more than 1M users and $10B in backing from Microsoft—let’s call it Microsoft Word Salad and see if it puts up its dukes—is only the beginning of the rise of the machine-learning-schools-humans apps that are heading our way.

In November, Amazon Web Services, the global leader in cloud computing, formed a partnership with Stability AI; Google has a ChatGPT-type system called LaMDA, and the search-engine giant has brought back founders Larry Page and Sergey Brin to guide its release, according to a report in Bloomberg.

Last month, Meta announced that it was pausing its data center build-outs around the world to reconfigure these server farms to optimize them for the data-processing requirements of artificial intelligence (which is arriving much faster than the metaverse), GlobeSt. reported.

The data-crunching needs of AI platforms are so vast that OpenAI, creator of ChatGPT, which it introduced in November, would not be able to keep running the brainy word-meister without hitching a ride on Microsoft’s soon-to-be-upgraded Azure cloud platform.

ChatGPT could probably do a much better job of explaining this, but it turns out that the processing “brain” of an artificial intelligence platform, in this case the data center infrastructure that will support this digital transformation, will, like the human brain, be organized into two hemispheres, or lobes. And yes, one lobe will need to be much stronger than the other.

One hemisphere of AI digital infrastructure will service what is being called a “training” lobe: the computational firepower needed to crunch up to 300B data points to create the word salad that ChatGPT generates. For ChatGPT, that’s roughly every scrap of text that has ever appeared on the Internet since Al Gore invented it.

The training lobe ingests data points and reorganizes them into a model; think of the synapses of your brain. It’s an iterative process in which the digital entity continues to refine its “understanding,” basically teaching itself to absorb a universe of information and to communicate the essence of that knowledge in precise human syntax.
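
To make that teach-yourself loop concrete, here is a deliberately tiny sketch in Python using PyTorch. Everything in it, from the toy model to the vocabulary size and the random stand-in “text,” is an illustrative assumption rather than anything OpenAI has published, but the predict-measure-adjust cycle is the same basic shape the training lobe runs at planetary scale:

```python
# Toy sketch of the "training lobe" idea: a tiny next-token predictor
# that repeatedly ingests text and nudges its weights. All names and
# numbers here are illustrative assumptions, not OpenAI's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64  # toy vocabulary and embedding width

class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)  # token -> vector (the "synapses")
        self.head = nn.Linear(DIM, VOCAB)      # vector -> next-token scores

    def forward(self, tokens):
        return self.head(self.embed(tokens))

model = ToyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# The iterative loop: predict the next token, measure the error,
# back-propagate, repeat. Real platforms run this over hundreds of
# billions of tokens across thousands of GPUs.
for step in range(100):
    batch = torch.randint(0, VOCAB, (8, 32))  # random stand-in for real text
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()  # the compute-hungry step
    opt.step()
```

Scale that loop from 100 steps over random integers to hundreds of billions of real tokens, and you arrive at the power-hungry facilities described below.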

The training lobe requires massive computing power and the most advanced GPU semiconductors, but relatively little of the connectivity that is now the imperative at data center clusters supporting cloud computing services and 5G networks.

The infrastructure focused on “training” each AI platform will have a voracious appetite for power, mandating the location of data centers near gigawatts of renewable energy and the installation of new liquid-based cooling systems as well as redesigned backup power and generator systems, among other new design features.

The other hemisphere of an AI platform’s brain, the digital infrastructure for higher functions, known as the “inference” mode, supports interactive “generative” platforms that field your queries, tap into the modeled database and respond in convincing human syntax seconds after you enter your questions or instructions.
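
Continuing the hypothetical toy sketch from the training section, here is the inference side in the same hedged, illustrative Python. The `generate` helper is invented for this example, but it shows why serving answers is a different workload: the weights are frozen, no learning happens, and each query just runs a quick, latency-sensitive generation loop:

```python
# Illustrative sketch of the "inference" mode, reusing the hypothetical
# ToyLM model and VOCAB constant from the training sketch above.
import torch

@torch.no_grad()  # disables gradient tracking: no learning at inference time
def generate(model, prompt_tokens, max_new_tokens=20):
    tokens = prompt_tokens.clone()
    for _ in range(max_new_tokens):
        logits = model(tokens)                    # score every candidate next token
        next_token = logits[:, -1, :].argmax(-1)  # greedy pick (real systems sample)
        tokens = torch.cat([tokens, next_token.unsqueeze(1)], dim=1)
    return tokens

# One call per user interaction: a short burst of computation per token,
# then the answer ships back over the network, which is why connectivity
# matters so much for this lobe.
reply = generate(model, torch.randint(0, VOCAB, (1, 5)))
```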

Today’s hyper-connected data center networks, like Northern Virginia’s “Data Center Alley,” the largest such cluster in North America and home to the nation’s most extensive fiber-optic network, can be adapted to meet the next-level connectivity needs of the “inference” lobe of the AI brain. But these facilities also will need upgrades for the enormous processing capacity that will be required, and they’ll need to be closer to power substations.

The largest cloud computing providers are offering data-crunching power to AI startups that are hungry for it because the startups have the potential to become long-term customers. One VC player that invests in AI likened it to a “proxy war” among superpowers for AI hegemony.

“There’s somewhat of a proxy war going on between the big cloud companies,” Matt McIlwain, managing director at Seattle’s Madrona Venture Group LLC, told Bloomberg. “They are really the only ones that can afford to build the really big [AI platforms] with gazillions of parameters.”

The emerging AI chatbots are “scary good,” as Elon Musk put it, but they’re not quite sentient beings that can match the millions of years of evolution that produced the precise sequence of billions of synapses firing in the same millisecond in your frontal lobe that tells you to smell the coffee before it’s too late. Not yet.