Networking for AI: Building the foundation for real-time intelligence

To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution relied on a platform where tournament staff could access data visualizations to help make operational decisions. This dashboard, which used high-performance networking and a private cloud environment, aggregated insights from various data sources in real time.

It was a glimpse of what AI-ready networks look like at scale: a real-life stress test with implications for everything from event management to enterprise operations. While models and data preparation receive the lion's share of boardroom attention and media hype, networking matters just as much. "This is the critical third step in successful AI adoption," explains John Green, CTO, HPE Networking. "Disconnected AI won't do much for you; you need a way to get data into it and out of it, for both training and inference," he says.

As enterprises move toward distributed, real-time AI applications, tomorrow's networks will need to handle even larger volumes of information at ever-greater speeds. What happened at Bethpage Black offers a lesson for all industries: AI-ready networks are critical to turning the promise of AI into real-world performance.

Preparing the Network for AI Inference

More than half of organizations are still struggling to get their data pipelines up and running. In a recent HPE cross-industry survey of 1,775 IT executives, 45% said they could capture and access data in real time to drive innovation. That is a marked improvement over last year's figures (only 7% reported such capabilities in 2024), but work remains to be done to connect data collection to real-time decision making.

The network could be the key to closing the rest of this gap. Part of the solution will likely come down to infrastructure design. Traditional enterprise networks are built to handle the predictable flow of business applications (email, browsers, file sharing, and so on), not to dynamically move the large volumes of data that AI workloads require. Inference in particular depends on shuttling huge datasets between multiple GPUs with supercomputer-level precision.
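To see why data movement is the bottleneck, a back-of-envelope calculation helps. The figures below are illustrative assumptions, not numbers from the article or from HPE: a hypothetical 100 GB exchange between GPUs, compared across a typical enterprise link and the faster fabrics used in AI clusters.

```python
def transfer_seconds(data_gigabytes: float, link_gbps: float) -> float:
    """Time to move data_gigabytes over a link_gbps link,
    ignoring protocol overhead, congestion, and retransmits."""
    data_gigabits = data_gigabytes * 8  # bytes -> bits
    return data_gigabits / link_gbps

DATA_GB = 100  # hypothetical per-step dataset exchanged between GPUs
for link_gbps in (10, 100, 400):  # enterprise link vs. AI-fabric speeds
    t = transfer_seconds(DATA_GB, link_gbps)
    print(f"{DATA_GB} GB over {link_gbps} Gb/s: {t:.1f} s")
```

The same payload that takes 80 seconds on a 10 Gb/s enterprise link moves in 2 seconds on a 400 Gb/s fabric, which is why AI infrastructure is designed around link speed and lossless delivery rather than the forgiving best-effort model of office traffic.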

"With a standard, out-of-the-box enterprise network, things can run a little fast and loose," says Green. "Few people will notice if the email platform is half a second slower than it could be. But with AI transaction processing, the whole job is gated by the last calculation to complete. So it becomes really noticeable if you have any loss or congestion."
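Green's point can be sketched with a minimal simulation. The parameters below (step time, congestion probability, delay penalty) are invented for illustration: because a synchronized AI step finishes only when its slowest worker does, a rare network hiccup that no email user would ever notice becomes almost certain to stall the job as the worker count grows.

```python
import random

random.seed(42)  # reproducible illustration

def step_time(n_workers: int, base_ms: float = 10.0,
              congestion_prob: float = 0.01, penalty_ms: float = 500.0) -> float:
    """One synchronized compute step: each worker usually finishes in
    base_ms, but a rare congestion event adds penalty_ms. The step is
    gated by the slowest worker, so one straggler delays everyone."""
    times = [base_ms + (penalty_ms if random.random() < congestion_prob else 0.0)
             for _ in range(n_workers)]
    return max(times)

for n in (1, 64, 1024):
    avg_ms = sum(step_time(n) for _ in range(200)) / 200
    print(f"{n:>5} workers: average step {avg_ms:.0f} ms")
```

With one worker, a 1% congestion event barely moves the average; with 1,024 workers, at least one straggler appears on nearly every step, so the average step time approaches the worst case. That is the mechanism behind the demand for lossless, low-tail-latency fabrics in AI clusters.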

Networks built for AI must therefore operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized hardware, and adaptability at scale. One driver of these requirements is the distributed nature of AI processing, which puts a premium on the smooth flow of data.

The Ryder Cup was a powerful demonstration of this new class of networking technology in action. During the event, a connected analytics center collected data on ticket scanning, weather forecasts, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. In addition, 67 AI-enabled cameras were installed across the course. The incoming data fed a live intelligence dashboard that gave staff an immediate view of activity on the grounds.
