GPU Computing Platforms

  • NEW: 6x NVIDIA® NVLink™ interconnects per GPU, 25 GB/s bidirectional (see the peer-to-peer sketch after this list)
  • NVIDIA® Tesla® graphics processing units (GPUs) offload compute-intensive functions in code from CPUs to enable orders-of-magnitude faster processing times
  • Available in both SXM2- and PCIe-based GPU solutions, in 19″ EIA and OCP form factors
  • Available in Intel and AMD platform solutions
  • Linux support “out of the box”
  • Consulting, training, and code migration services available
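
To make the NVLink bullet concrete, here is a minimal CUDA sketch of peer-to-peer GPU communication; it is an illustration under assumptions (device indices 0 and 1, a 64 MiB buffer), not vendor sample code. On NVLink-connected GPUs such as SXM2 Tesla P100s, an enabled peer mapping lets device-to-device copies avoid staging through host memory.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple error-checking helper (illustrative only).
#define CHECK(call)                                              \
  do {                                                           \
    cudaError_t err = (call);                                    \
    if (err != cudaSuccess) {                                    \
      fprintf(stderr, "CUDA error %s at %s:%d\n",                \
              cudaGetErrorString(err), __FILE__, __LINE__);      \
      return 1;                                                  \
    }                                                            \
  } while (0)

int main() {
  int deviceCount = 0;
  CHECK(cudaGetDeviceCount(&deviceCount));
  if (deviceCount < 2) {
    printf("Need at least two GPUs for a peer-to-peer transfer.\n");
    return 0;
  }

  // Ask whether GPU 0 can address GPU 1's memory directly.
  int canAccess = 0;
  CHECK(cudaDeviceCanAccessPeer(&canAccess, 0, 1));
  printf("GPU0 -> GPU1 peer access: %s\n", canAccess ? "yes" : "no");

  if (canAccess) {
    const size_t bytes = 64 << 20;  // 64 MiB test buffer (arbitrary)
    float *src, *dst;

    CHECK(cudaSetDevice(0));
    CHECK(cudaDeviceEnablePeerAccess(1, 0));  // map GPU 1 into GPU 0's space
    CHECK(cudaMalloc(&src, bytes));

    CHECK(cudaSetDevice(1));
    CHECK(cudaMalloc(&dst, bytes));

    // Device-to-device copy; with peer access enabled it does not
    // stage through host RAM, so it can ride the NVLink fabric.
    CHECK(cudaMemcpyPeer(dst, 1, src, 0, bytes));
    CHECK(cudaDeviceSynchronize());

    CHECK(cudaFree(dst));
    CHECK(cudaSetDevice(0));
    CHECK(cudaFree(src));
  }
  return 0;
}
```

Compile with nvcc (e.g. `nvcc p2p.cu -o p2p`; the file name is ours). On a PCIe-only system the same code still runs, with the transfer routed over PCIe instead of NVLink.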
Technical Specs

1U

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 3x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP)
GPU(s) Supported: Tesla P40, Tesla M40-24GB, Tesla K80

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x8 (LP)
GPU(s) Supported: Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x8 (LP), Flexible PCIe Topology
GPU(s) Supported: Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 1x PCIe Gen3 x16 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz)
GPU(s) Supported: Tesla M4, Tesla P4

2U

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 3x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x8 (FHHL), 1x PCIe Gen3 x8 (Proprietary Mezz)
GPU(s) Supported: Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 4x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz)
GPU(s) Supported: Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

Processor: Intel Xeon E5-2600 v3 / v4 Series
PCIe Slots: 8x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x8 (LP), 1x PCIe Gen3 x8 (Proprietary Mezz)
GPU(s) Supported: Tesla P100-16GB, Tesla P100-12GB, Tesla P40, Tesla M40-24GB, Tesla K80

OpenPOWER

Processor: OpenPOWER POWER8
PCIe Slots: PCIe Gen3 expansion slots for 2 NVIDIA Tesla K80 or M40 GPUs and for high-speed network interfaces
GPU(s) Supported: Tesla K80, Tesla M40

Roles & Features

1U

Role: GPU computing - Tesla/Xeon Phi support
Special Features: Up to 3 double-width GPUs

Role: GPU computing - Tesla/Xeon Phi support
Special Features: Up to 4 double-width GPUs

Role: OCP Tundra GPU - Tesla/Xeon Phi support
Special Features: Up to 4 double-width GPUs and Flexible PCIe Topology

Role: OCP Tundra GPU - Tesla support
Special Features: Supports the NVIDIA Tesla M4 GPU

2U

Role: GPU computing
Special Features: Up to 3 double-width GPUs

Role: GPU computing
Special Features: Up to 4 general-purpose GPUs

Role: GPU computing - Tesla/Xeon Phi support
Special Features: Up to 8 double-width GPUs

Role: OpenPOWER GPU computing - Tesla SXM2 socket support and NVLink system interconnect
Special Features: 4x SXM2 sockets featuring NVLink for the latest NVIDIA Tesla P100 “Pascal” co-processors

Role: OpenPOWER GPU computing - Tesla support
Special Features: PCIe Gen3 expansion slots for 2 NVIDIA Tesla K80 or M40 GPUs and for high-speed network interfaces

NVIDIA DGX-1 Server

Building a platform for deep learning goes well beyond selecting a server and GPUs. A commitment to implementing AI in your business involves carefully selecting and integrating complex software with hardware. NVIDIA® DGX-1™ fast-tracks your initiative with a solution that works right out of the box, so you can gain insights in hours instead of weeks or months.


WHAT IS GPU-ACCELERATED COMPUTING?

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications.
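
As a concrete illustration of this CPU+GPU pairing (a sketch with arbitrary sizes and values, not taken from any vendor material), the CUDA example below keeps sequential control flow and data setup on the CPU and offloads only the data-parallel loop, a SAXPY, to the GPU:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
  const int n = 1 << 20;               // 1M elements (illustrative size)
  const size_t bytes = n * sizeof(float);

  // CPU side: allocate and initialize the input data.
  float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
  for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

  // GPU side: allocate device memory and copy the inputs over.
  float *dx, *dy;
  cudaMalloc(&dx, bytes);
  cudaMalloc(&dy, bytes);
  cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
  cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

  // Offload the compute-intensive loop: 256 threads per block.
  saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

  // Copy the result back; the CPU carries on with the rest of the program.
  cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
  printf("y[0] = %f (expected 4.0)\n", hy[0]);

  cudaFree(dx); cudaFree(dy);
  free(hx); free(hy);
  return 0;
}
```

The serial parts of the application (setup, I/O, control flow) stay on the CPU, while the uniform, highly parallel arithmetic runs across thousands of GPU threads; this division of labor is what the passage above describes.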

Pioneered in 2007 by NVIDIA®, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small and medium businesses around the world. GPUs are accelerating applications in platforms ranging from cars to mobile phones and tablets to drones and robots.


Accelerated Computing Platforms

  • NEW: Platform options for the latest NVIDIA Tesla P100 “Pascal” accelerators, including NVLink support! (See the device-query sketch after this list.)
  • NVIDIA® Tesla® graphics processing units (GPUs) offload compute-intensive functions in code from CPUs to enable orders-of-magnitude faster processing times
  • Available in custom server configurations
  • Linux support “out of the box”
  • Consulting, training, and code migration services available
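
Before targeting Pascal features, you can verify what a platform actually exposes with a short CUDA device query. This is an illustrative sketch of ours; Pascal-generation parts such as the Tesla P100 report compute capability 6.x.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int count = 0;
  cudaGetDeviceCount(&count);
  for (int d = 0; d < count; ++d) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, d);
    // Print the name, compute capability, and memory of each GPU.
    // A Tesla P100 (“Pascal”) shows up with major version 6.
    printf("GPU %d: %s, compute capability %d.%d, %zu MiB\n",
           d, prop.name, prop.major, prop.minor,
           prop.totalGlobalMem >> 20);
  }
  return 0;
}
```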