- Low latency networks are becoming vital for faster and more efficient AI inference
- AMD Solarflare X4 Adapters Extend Proven Trading Technology in Real-Time AI Environments
- Consistent, microsecond-accurate performance can improve the reliability of data-driven edge applications.
AMD has announced the Solarflare X4 Ethernet Adapters, the latest generation of its ultra-low latency network cards.
While these adapters were designed with high-frequency trading in mind, their capabilities mean they can also play a role in AI inference workloads that require fast data movement and predictable response times.
Low latency is becoming increasingly critical for AI inference, where added network delay can limit throughput and responsiveness.
Cut-through programmed I/O
AI output depends on the rapid movement of data between machines, and network speed plays a major role in how quickly results appear.
The Solarflare X4 series is based on technology that has long served the financial sector. It includes two main models: the X4522, which supports two SFP56 ports up to 50GbE each, and the X4542, which uses two QSFP56 ports for speeds up to 100GbE.
Both support a mode known as Cut-Through Programmed Input/Output (CTPIO), which begins transmitting packets onto the wire before they have fully crossed the PCIe bus, reducing processing latency.
While the adapters don't match the raw throughput of 400G or 800G networking hardware, they offer sub-microsecond latency with high consistency.
In addition to being attractive for financial systems, they are also useful for emerging AI workloads that require real-time inference at the edge.
The adapters work in conjunction with AMD's Onload software, which offloads network data movement from the CPU, freeing processing power for inference, compute, analysis, and control tasks.
Reduced latency can translate into faster and more reliable responses from AI applications running in autonomous systems, smart manufacturing, or content delivery environments.
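To make the latency figures discussed above concrete, the sketch below is a generic round-trip latency microbenchmark over a plain localhost TCP socket, the kind of measurement used to compare a standard kernel network stack against a kernel-bypass path such as Onload. It is an illustrative assumption, not AMD's benchmarking methodology: it reports median (p50) and tail (p99) round-trip times, since tail consistency is what these adapters emphasize.

```python
import socket
import statistics
import threading
import time

PAYLOAD = b"x" * 64  # small message, typical of latency-sensitive traffic


def echo_server(srv: socket.socket) -> None:
    """Accept one connection and echo everything back."""
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)


# Start a localhost echo server on an ephemeral port.
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", port))
# Disable Nagle batching so each small message is sent immediately.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

samples = []
for _ in range(1000):
    t0 = time.perf_counter_ns()
    cli.sendall(PAYLOAD)
    got = 0
    while got < len(PAYLOAD):  # read until the full echo arrives
        got += len(cli.recv(64))
    samples.append(time.perf_counter_ns() - t0)
cli.close()

p50 = statistics.median(samples)
p99 = statistics.quantiles(samples, n=100)[98]
print(f"p50: {p50 / 1000:.1f} us, p99: {p99 / 1000:.1f} us")
```

A kernel-bypass stack aims to shrink both numbers, but especially the gap between p50 and p99, which is where "consistent, microsecond-accurate performance" matters for real-time inference.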
The Solarflare X4 series may be designed for specialized markets, but networking equipment optimized for speed and predictability can benefit a range of data-centric industries.
The release combines AMD's decades of experience in low-latency networking with growing demand for ultra-efficient AI inference.