Rack density in data centers has increased rapidly in recent years. Operators are building more computing power into every server rack to meet the needs of AI and other high performance computing applications. This means that each rack requires more kilowatts of power and ultimately generates more heat. Cooling infrastructure is struggling to keep up.
“Rack densities have increased from 6 kilowatts per rack eight years ago to the point where racks now ship at 270 kW,” says David Holmes, Global Industry CTO at Dell Technologies. “Next year 480-kW racks will be ready, and we will have megawatt racks in two years.”
The Swiss company Corintis is developing a technology called microfluidics, in which water or another coolant is delivered directly to specific parts of the chip to prevent overheating. In a recent test with Microsoft, on servers running the company's Teams videoconferencing software, microfluidics removed heat three times more effectively than other existing cooling methods. Compared with traditional air cooling, it reduced chip temperatures by more than 80 percent.
Improving chip performance using microfluidics
The lower the chip's temperature, the faster it can execute instructions, increasing its performance. Chips operating at lower temperatures are also more energy efficient and fail less often. In addition, the air used for cooling can be kept at a higher temperature, making the data center more energy efficient by reducing the need for chillers and cutting water consumption.
The amount of water required to cool a chip can be significantly reduced by directing the flow of liquid to the spots on the chip that generate the most heat. Van Erp noted that the current industry standard is approximately 1.5 liters per minute per kilowatt of power. As chips approach 10 kW of power, that will soon mean 15 liters per minute to cool a single chip – a figure likely to draw the ire of communities worried about the consequences of the super-sized “artificial intelligence factories” planned for their territory, which could contain a million or more GPUs.
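The flow figures above follow from simple arithmetic, sketched here for checking. The 1.5 L/min-per-kW figure and the 10 kW chip are from the article; the million-chip site total is an illustrative extrapolation, not a number the article states.

```python
# Back-of-the-envelope coolant flow, per van Erp's industry figure:
# roughly 1.5 liters per minute of coolant for every kilowatt of chip power.
FLOW_PER_KW = 1.5  # L/min per kW (current industry standard, per the article)

def coolant_flow_lpm(chip_power_kw: float) -> float:
    """Coolant flow in liters per minute required for a chip of given power."""
    return chip_power_kw * FLOW_PER_KW

# The article's projected 10 kW chip needs 15 L/min:
per_chip = coolant_flow_lpm(10.0)
print(per_chip)  # 15.0

# Illustrative scale-up to a hypothetical million-GPU "AI factory"
# (assumption for illustration; the article gives no site-level figure):
site_lpm = per_chip * 1_000_000
print(site_lpm / 1000.0)  # 15000.0 cubic meters of coolant circulating per minute
```

Reducing the per-kilowatt flow requirement, as targeted cooling aims to do, shrinks this site-level total proportionally.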
“We need chip-specific optimized liquid cooling to make sure every drop of liquid goes to the right place,” says Remco van Erp, co-founder and CEO of Corintis.
Corintis founders Sam Harrison [left] and Remco van Erp hold a cold plate and a microfluidic core, respectively. Corintis
Simulation and optimization software developed by Corintis is used to design a network of microscopic channels on cold plates. Like the arteries, veins, and capillaries of the body's circulatory system, the ideal cold plate design for each type of chip is a complex network of precisely shaped channels.
Corintis has scaled up additive manufacturing to enable mass production of copper parts with channels as fine as a human hair, about 70 micrometers wide. Its cold plate technology is compatible with today's liquid cooling systems.
The company believes this approach can improve cold plate performance by at least 25 percent. By working directly with chip manufacturers to create channels in the silicon itself, Corintis believes it could ultimately achieve a tenfold increase in cooling efficiency.
Development of liquid cooling for AI chips
Liquid cooling is far from new. IBM's System/360 mainframe, for example, was water-cooled more than half a century ago. Modern liquid cooling has largely been a competition between immersion systems – in which racks, and sometimes entire rows of equipment, are submerged in coolant – and direct-to-chip systems – in which coolant is supplied to a cold plate pressed against the chip.
Immersion cooling is not yet ready for prime time. And while direct-to-chip cooling is widely used for GPUs, it only cools the surface of the chip.
“Liquid cooling in its current form is a generic solution based on simplified designs that are not tailored to the chip, which prevents good heat transfer,” says van Erp. “The optimal design for each chip is a complex network of precisely shaped microchannels tailored to the chip and directing coolant to the most critical areas.”
Corintis is already working with chip makers to improve their designs. Chip makers use the company's thermal emulation platform to program heat dissipation on test silicon chips at millimeter-scale resolution, then measure the resulting on-chip temperatures under a chosen cooling method. In other words, Corintis acts as a bridge between chip design and cooling system design, allowing chip designers to create future chips for artificial intelligence applications with superior thermal performance.
The next step is to move from bridging cooling-channel and chip design to unifying the two processes. “Today's chips and their cooling systems are two separate elements, and the interface between them is one of the main bottlenecks in heat transfer,” says van Erp.
To improve cooling efficiency by a factor of ten, Corintis is betting on a future in which cooling is tightly coupled as an integral part of the chip itself – with microfluidic cooling channels etched directly inside the chip package, rather than in external cold plates.
Corintis has produced more than 10,000 copper cold plates and is expanding its production capacity to reach one million copper cold plates by the end of 2026. The company has also developed a prototype line in Switzerland where cooling channels are created directly inside chips, rather than on a cold plate. These will be small batches to demonstrate basic concepts, which will then be transferred to chip and cold plate manufacturers.
Corintis announced these expansion plans shortly after the Microsoft Teams test results were published. The company is also opening US offices to serve American customers and an engineering office in Munich, Germany. In addition, it announced the completion of a $24 million Series A funding round led by BlueYard Capital and other investors.