As Artificial Intelligence (AI) models continue to grow, data centers require ever-larger pools of compute and memory to execute training and inference efficiently.
The UALink Consortium was formed to develop technical specifications that facilitate direct load, store, and atomic operations between AI accelerators (e.g., GPUs). We are developing a new industry standard, working to establish an optimized scale-up ecosystem, and investing in an open solution that enables training and serving advanced models across multiple AI accelerators.
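UALink itself is a link-level specification, not a programming API, but the memory semantics it targets can be illustrated with today's CUDA peer-to-peer access: once peer access is enabled, a kernel on one GPU can issue plain loads, stores, and atomics against another GPU's memory. The sketch below is only an analogy under that assumption; the device indices, buffer names, and use of CUDA are illustrative and are not part of the UALink specification.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel launched on GPU 0; 'peer_buf' and 'peer_counter' live in GPU 1's
// memory. With peer access enabled, ordinary loads, stores, and atomics to
// those addresses travel directly over the accelerator interconnect.
__global__ void touch_peer(int *peer_buf, unsigned int *peer_counter, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int v = peer_buf[i];          // direct load from peer memory
        peer_buf[i] = v + 1;          // direct store to peer memory
        atomicAdd(peer_counter, 1u);  // atomic op on peer memory
    }
}

int main() {
    const int n = 1024;
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, /*device=*/0, /*peerDevice=*/1);
    if (!can_access) {
        printf("no peer access between GPU 0 and GPU 1\n");
        return 1;
    }

    // Allocate the buffers on GPU 1.
    cudaSetDevice(1);
    int *buf;
    unsigned int *counter;
    cudaMalloc(&buf, n * sizeof(int));
    cudaMemset(buf, 0, n * sizeof(int));
    cudaMalloc(&counter, sizeof(unsigned int));
    cudaMemset(counter, 0, sizeof(unsigned int));

    // Run the kernel on GPU 0, mapping GPU 1's memory into its address space.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(/*peerDevice=*/1, 0);
    touch_peer<<<(n + 255) / 256, 256>>>(buf, counter, n);
    cudaDeviceSynchronize();

    unsigned int count = 0;
    cudaMemcpy(&count, counter, sizeof(count), cudaMemcpyDeviceToHost);
    printf("atomics performed on peer memory: %u\n", count);
    return 0;
}
```

The design point this analogy highlights is that the accelerator addresses remote memory with the same instructions it uses for local memory, with no explicit message passing in the data path; a scale-up standard aims to preserve that programming model across many accelerators.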