GPU Computing Platforms

  • NEW: 6x NVIDIA® NVLink™ interconnects per GPU, 25 GB/sec bidirectional
  • NVIDIA® Tesla® graphics processing units (GPUs) offload compute-intensive functions in code from CPUs, enabling orders-of-magnitude faster processing times
  • Available in both SXM2- and PCIe-based GPU solutions, in 19″ EIA and OCP form factors
  • Available in Intel and AMD platform solutions
  • Linux support “out of the box”
  • Consulting, training, and code migration services available
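As a minimal sketch of what "offloading a compute-intensive function" means in practice, the CUDA program below moves an element-wise SAXPY loop from the CPU to the GPU. This is illustrative only; the kernel name, array size, and launch geometry are assumptions, not part of any product listed here.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical compute-intensive function: element-wise SAXPY (y = a*x + y).
// Each GPU thread handles one element, replacing a serial CPU loop.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // 1M elements (arbitrary size)
    size_t bytes = n * sizeof(float);
    float *x, *y;
    cudaMallocManaged(&x, bytes);          // unified memory: visible to CPU and GPU
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Offload: launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);           // 3*1 + 2 = 5
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiled with `nvcc`, this runs unchanged on any Tesla-class GPU in the systems below; the CPU only stages data and launches the kernel, while the arithmetic executes on the GPU.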

Relion 19-inch Servers

1U

Processor | PCIe Slots | GPU(s) Supported
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 1x PCIe Gen3 x16 (LP), 2x PCIe Gen3 x8 (LP), 2x PCIe Gen3 x16 (OCP Mezz) | None
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP-MD2) | Tesla P100-PCIe, Tesla V100-PCIe
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 4x NVIDIA Tesla SXM2 (GPU), 2x PCIe Gen3 x8 (LP) | Tesla P100-16GB-SXM2
Intel Xeon E5-2600 v3 / v4 Series | 3x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP) | Tesla P40, Tesla M40-24GB, Tesla K80
Intel Xeon E5-2600 v3 / v4 Series | 4x PCIe Gen3 (GPU), 2x PCIe Gen3 x8 (LP) | Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80
Intel Xeon E5-2600 v3 / v4 Series | 4x PCIe x16 (GPU), 2x PCIe Gen3 x8 (LP), Flexible PCIe Topology | Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80
Intel Xeon E5-2600 v3 / v4 Series | 1x PCIe Gen3 x16 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz) | Tesla M4, Tesla P4

2U

Processor | PCIe Slots | GPU(s) Supported
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 8x PCIe Gen3 x8 (LP) or 3x PCIe Gen3 x16 (LP), 2x PCIe Gen3 x8 (LP) | None
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 2x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x8 (LP), 2x OCP Mezz | Tesla P100, Tesla V100
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 8x PCIe Gen3 x8 (LP), 1x OCP Mezz | None
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 1x PCIe Gen3 x16 (LP), 1x PCIe Gen3 x8 (LP), 1x OCP Mezz | Tesla M4, Tesla P4
Intel Xeon E5-2600 v3 / v4 Series | 3x PCIe Gen3 x16 (GPU), 2x PCIe x8 (FHHL), 1x PCIe Gen3 x8 (Proprietary Mezz) | Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80
Intel Xeon E5-2600 v3 / v4 Series | 4x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz) | Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80
Intel Xeon E5-2600 v3 / v4 Series | 8x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz) | Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

OpenPOWER POWER8

Processor | PCIe Slots | GPU(s) Supported
OpenPOWER POWER8 | PCIe Gen3 expansion slots for 2x NVIDIA Tesla K80 or M40 GPUs and for high-speed network interfaces | Tesla K80, Tesla M40

4U

Processor | PCIe Slots | GPU(s) Supported
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 8x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP) | Tesla P100, Tesla V100
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 8x NVIDIA SXM2 (GPU), 2x PCIe Gen3 x16 (LP) | Tesla P100-SXM2, Tesla V100-SXM2

Relion OCP Servers

1OU

Processor | PCIe Slots | Memory Capacity
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP) | Up to 2TB (16x DIMMs)
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 4x NVIDIA SXM2 (GPU), 2x PCIe Gen3 x16 (LP) | Up to 2TB (16x DIMMs)
Intel Xeon Skylake-SP / Skylake-SP with Intel Omni-Path | 1x PCIe Gen3 x16 (LP), 1x PCIe Gen3 x8 (LP), 1x OCP Mezz | Up to 2TB (16x DIMMs)

2OU

Processor | PCIe Slots | Memory Capacity

NVIDIA DGX-1 Server

Building a platform for deep learning goes well beyond selecting a server and GPUs. A commitment to implementing AI in your business involves carefully selecting and integrating complex software with hardware. The NVIDIA® DGX-1™ fast-tracks your initiative with a solution that works right out of the box, so you can gain insights in hours instead of weeks or months.
