
Scale Up with Technology Designed For HPC and AI on OCP Hardware

More room for high-value technology, lower total cost of ownership, more innovation

Don’t Compromise: Meet Technical and Business Priorities

Leading-edge organizations choose Open Compute Project (OCP)-based infrastructure so they can scale out cost-effectively. There is a strong argument for using OCP-based hardware in a data center: it is less expensive to buy and to maintain, has fewer points of failure, is designed for more efficient power management, and reduces the security attack surface.

But, for teams trying to perform complex high-performance computing (HPC) or artificial intelligence (AI), OCP has always been a challenge. Until now, OCP has been largely focused on hyper-scale cloud solution providers. Few vendors have the skills and experience to build server or rack designs that meet the complex, software-driven needs of HPC and AI.


Fortunately, Platinum OCP member Penguin Computing—recognized for our OCP expertise as the HPC lead on the OCP Incubation Committee—has the solution: the Penguin Computing® Tundra® Extreme Scale platform. The Tundra system combines the capital expense (CAPEX) and operating expense (OPEX) savings of OCP-based hardware with today’s technologies for HPC and AI.

Tundra ES for HPC

Thanks to two decades of experience in how software is orchestrated, deployed, managed, and optimized for different compute architectures, Penguin Computing was able to create a complete HPC system that is dense enough for the most challenging projects and flexible enough for virtually any HPC computing architecture—while also taking advantage of OCP’s inherent ease of maintenance and low total cost of ownership (TCO).

Now in its second generation, the Tundra platform:

  • Supports an exceptionally diverse array of technologies, including graphics processing unit (GPU)-accelerated computing on the latest NVIDIA® Tesla® graphics accelerators
  • Includes server formats from 1OU-4OU with a capacity for over 100 nodes per rack
  • Comes with the latest AMD EPYC™ processors or Intel® Xeon® Scalable processors, high-speed software-defined networking (SDN), and localized storage for flexibility and performance
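The density figures above can be sanity-checked with simple arithmetic. In the sketch below, the usable rack height in OU (OCP open rack units) and the nodes-per-sled count are illustrative assumptions, not published Tundra specifications:

```python
def nodes_per_rack(usable_ou: int, sled_height_ou: int, nodes_per_sled: int) -> int:
    """Return how many compute nodes fit in a rack with `usable_ou`
    open rack units, given sleds of `sled_height_ou` OU that each
    carry `nodes_per_sled` nodes."""
    return (usable_ou // sled_height_ou) * nodes_per_sled

# Illustrative only: assuming 42 OU of usable space and hypothetical
# triple-node 1OU sleds, a rack would hold 42 * 3 = 126 nodes --
# consistent with the "over 100 nodes per rack" figure above.
print(nodes_per_rack(42, 1, 3))  # → 126
```

Denser sled formats trade per-node expansion room for node count, which is why the platform spans 1OU to 4OU server formats.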

If you’re interested in a hybrid HPC approach or need to enable remote access, Tundra technology is also available via the cloud through the Penguin Computing® On-Demand™ (POD™) platform.

Learn More

Tundra ES for AI

To meet the increasing demand for AI training and inference, the Penguin Computing AI Practice has created a reference design for the Tundra ES platform that supports the latest developments in AI, takes advantage of OCP’s low TCO, and enables massive scale-up.

This production-quality design draws on real-world experience with some of the largest AI clusters in the world, covering how AI frameworks are orchestrated, deployed, and optimized for different compute architectures.

As a result, it is optimized to support the technologies required for inference workloads, dense enough to fit significantly more high-value technology per rack than traditional solutions, and suitable for a more diverse array of compute architectures than most OCP designs.

The Tundra platform:

  • Supports an exceptionally diverse array of technologies, including the NVIDIA® T4 with Turing Tensor Cores for inference
  • Includes server formats from 1OU-4OU with a capacity for over 100 nodes per rack
  • Comes with the latest AMD EPYC™ processors or Intel® Xeon® Scalable processors, high-speed software-defined networking (SDN), and localized storage for flexibility and performance

Learn More

Related Offerings

Related Materials

Brief: 8th Gen Intel® Core™ Processor with Radeon™
