Briefing
Validating High-Speed Ethernet for Next-Generation AI Networking
The scale of artificial intelligence (AI) applications is skyrocketing, with the number of model parameters growing at a blistering 1000x pace every 3 years. The growing complexity and size of AI applications are driving a need for ever larger numbers of AI accelerators, which in turn dictate the type and scale of the networking fabric required to interconnect the accelerators with one another and with compute and memory resources.
As AI applications move from small deployments with hundreds of AI accelerators to large deployments that require tens of thousands, the networking architecture grows more complex. Beyond roughly ten thousand accelerators, a data-center-scale architecture with a routable fabric and an AI spine is required.
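To make that scaling pressure concrete, the sketch below roughly sizes a two-tier leaf/spine (Clos) fabric. The parameters (64-port switches, 1:1 oversubscription, one accelerator port per downlink) are illustrative assumptions, not figures from this briefing; the point is simply that clusters in the tens of thousands of accelerators outgrow a flat two-tier design and push toward a routable, data-center-scale fabric with an AI spine tier.

# Illustrative sketch (assumed parameters): rough sizing of a two-tier
# leaf/spine (Clos) fabric for an AI cluster.
import math

def size_leaf_spine(num_accelerators: int, switch_radix: int = 64) -> dict:
    """Estimate leaf/spine switch counts for a non-blocking two-tier Clos.

    Each leaf splits its radix evenly: half its ports face accelerators
    (downlinks), half face the spine (uplinks), giving 1:1 oversubscription.
    """
    downlinks_per_leaf = switch_radix // 2                 # accelerator-facing ports
    uplinks_per_leaf = switch_radix - downlinks_per_leaf   # spine-facing ports

    leaves = math.ceil(num_accelerators / downlinks_per_leaf)
    # Total leaf uplinks must all terminate on spine ports.
    spines = math.ceil(leaves * uplinks_per_leaf / switch_radix)

    # A two-tier fabric tops out when a spine can no longer reach every leaf
    # (at most switch_radix leaves per spine).
    max_accels_two_tier = switch_radix * downlinks_per_leaf
    return {
        "leaves": leaves,
        "spines": spines,
        "max_accelerators_two_tier": max_accels_two_tier,
        "needs_third_tier": num_accelerators > max_accels_two_tier,
    }

for n in (256, 4_096, 32_768):
    print(n, size_leaf_spine(n))

With 64-port switches, a two-tier design caps out around 2,048 accelerators in this sketch, so a 32,768-accelerator cluster needs an additional routed tier; the exact crossover point depends on switch radix and oversubscription choices.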