OSS to Incorporate NVIDIA GPUs with NVLink, Marvell ThunderX2 Arm-based Servers
November 20, 2019
News
One Stop Systems (OSS), a provider of specialized computing solutions for mission-critical edge applications, announced in a press release the SCA8000 compute acceleration expansion platform for use with Marvell ThunderX2 Arm-based servers.
The SCA8000 includes eight NVIDIA V100 Tensor Core GPUs interconnected with NVIDIA NVLink, providing 300GB/s of peer-to-peer GPU bandwidth. The platform allows up to 64GB/s of uplink bandwidth between the GPUs and Arm-based servers using up to four PCI Express (PCIe) Gen3 x16 host interconnects.
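For developers, that peer-to-peer GPU bandwidth is exposed through the standard CUDA runtime rather than a vendor-specific API. The following is a minimal sketch, assuming a host with the CUDA toolkit installed and at least two of the V100 modules visible, that checks and enables peer access between each pair of GPUs:

// p2p_check.cu -- minimal sketch: enumerate GPUs and report whether each
// pair can address one another's memory directly (over NVLink or PCIe).
// Assumes the CUDA toolkit is installed and at least two GPUs are visible.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n < 2) {
        printf("Need at least two visible GPUs (found %d)\n", n);
        return 1;
    }
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d peer access: %s\n", i, j,
                   canAccess ? "yes" : "no");
            if (canAccess) {
                // Enabling peer access lets kernels running on GPU i
                // dereference memory allocated on GPU j directly.
                cudaSetDevice(i);
                cudaDeviceEnablePeerAccess(j, 0);
            }
        }
    }
    return 0;
}

On a fully NVLink-connected SXM2 board such as the SCA8000's, every GPU pair would be expected to report peer access as available, so multi-GPU workloads can exchange data without staging through host memory.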
It is a 3U 8-way SXM2 expansion chassis designed for installation in a standard 19-inch rack. It supports eight passively cooled SXM2 V100 GPU modules and has four additional PCIe x16 slots for options such as 100/200Gb Ethernet, InfiniBand, or other I/O and data acquisition cards. It can connect to an Arm-based host server with up to four PCI-SIG PCIe Cable 3.0-compliant links at distances of up to 100m.
In July, NVIDIA announced it was making its full stack of AI and HPC software available to the Arm ecosystem. The stack includes all NVIDIA CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks, and software development tools. The SCA8000 is the industry's only NVIDIA NVLink-based solution that leverages this NVIDIA CUDA support for the Arm processor architecture, per the release.
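Because the CUDA toolkit described above is also distributed for aarch64, CUDA source written for x86 servers can, in principle, be rebuilt unmodified on a ThunderX2 host. The following is a minimal sketch, assuming the aarch64 build of the CUDA toolkit is installed on the Arm server, that confirms the runtime sees the attached V100s:

// query_arm.cu -- minimal sketch: confirm the CUDA runtime on an Arm host
// sees the attached GPUs and report their compute capability. Assumes the
// aarch64 CUDA toolkit is installed on the ThunderX2 server (an assumption
// for illustration; it is not detailed in the release).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("CUDA devices visible: %d\n", n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("  %d: %s, compute capability %d.%d\n",
               i, p.name, p.major, p.minor);
    }
    return 0;
}

The program would be compiled on the Arm host with nvcc -arch=sm_70 query_arm.cu -o query_arm, where sm_70 corresponds to the V100's compute capability.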
The solution pairs scalable multi-GPU compute performance with power-optimized host processing for 'AI on the Fly' edge applications. AI on the Fly appliances based on OSS technology bring datacenter-class AI capabilities to edge applications in military, aerospace, medical, and autonomous driving markets.
A demonstration of the SCA8000 can be viewed at Supercomputing 2019 (SC19) in Denver.