"Only Penguin was able to deliver a true turnkey solution. All the other companies would handle only the hardware installation; when it came to installing applications and providing a complete solution, they wanted to hand us off."
— Computer Systems Engineer, Center for Applied Biomechanics
Penguin Computing has built and delivered hundreds of integrated, turnkey Linux HPC clusters ranging from departmental clusters to Top 100 supercomputers. We know how to take the pain out of configuring, deploying and managing HPC clusters:
- Penguin will work with you through the entire deployment cycle. We will benchmark your applications, help with the selection of components, integrate the cluster and support your production roll-out.
- We don’t follow a one-vendor-fits-all sales strategy; with our extensive ecosystem of hardware partners, we offer only best-of-breed components.
- We understand that cluster management can be a daunting task, so we developed our own HPC cluster management software designed for ease of use.
- Penguin offers all components and services required for deploying an HPC cluster – this frees you from managing multiple sources and gives you a single point of support.
The following sections describe our cluster components in more detail:
Penguin Computing’s Altus and Relion server families are based on the latest generation of processors from AMD (Altus) and Intel (Relion) and offer a variety of platform choices for high-density, high-performance HPC clusters. Servers in standard 1U and 2U rack-mount form factors provide multiple expansion slots and can accommodate processors running at high clock speeds. Double-density twin form factors accommodate up to four nodes in a 2U rack-mount chassis for data centers that can deliver sufficient power and cooling capacity.
Penguin offers a variety of storage solutions for HPC clusters, ranging from small-scale workgroup clusters to large supercomputing deployments with hundreds of nodes. For smaller-scale clusters, a network-attached storage solution is typically the best fit; we offer storage servers and enclosures that support a variety of form factors and storage capacities. For large-scale deployments that require scalability and high I/O performance in a shared namespace, we offer distributed high-performance storage solutions such as Lustre or Panasas.
Choosing an appropriate interconnect is essential to maximizing HPC system efficiency. The performance of distributed HPC applications often depends on the performance characteristics of the interconnect fabric used for node-to-node communication. For HPC clusters running fine-grained distributed applications, we often choose InfiniBand for its high bandwidth and low latency. Penguin offers InfiniBand fabric solutions from Mellanox and QLogic.
For applications that are less latency-sensitive but require high bandwidth, 10 Gigabit Ethernet (10GbE) provides a tenfold bandwidth boost over the ubiquitous Gigabit Ethernet while preserving interoperability with existing Ethernet networks. Use cases for 10GbE include eliminating congestion on oversubscribed uplinks, attaching storage servers and building messaging networks for high-frequency trading. Penguin Computing offers 10GbE switches from the industry-leading manufacturers Arista, Cisco, Force10, HP and Gnodal, and 10GbE host adapters from Chelsio and Solarflare.
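The trade-off between latency-sensitive and bandwidth-hungry workloads described above can be illustrated with a simple cost model: transfer time ≈ latency + message size / bandwidth. The sketch below uses rough, illustrative latency and bandwidth figures for each fabric class — they are assumptions for the sake of the example, not measurements of any specific product:

```python
# Simple cost model: transfer_time = latency + message_size / bandwidth.
# The per-fabric figures below are rough, illustrative assumptions,
# not vendor-measured numbers.
FABRICS = {
    # name: (one-way latency in seconds, bandwidth in bytes/second)
    "GigE":       (50e-6, 125e6),    # ~50 us, ~1 Gbit/s
    "10GbE":      (10e-6, 1.25e9),   # ~10 us, ~10 Gbit/s
    "InfiniBand": (1.5e-6, 4e9),     # ~1.5 us, ~32 Gbit/s (QDR-class)
}

def transfer_time(fabric, size_bytes):
    """Estimated time to move size_bytes over the given fabric."""
    latency, bandwidth = FABRICS[fabric]
    return latency + size_bytes / bandwidth

for name in FABRICS:
    print(f"{name:10s}  8 B: {transfer_time(name, 8) * 1e6:7.2f} us   "
          f"1 MiB: {transfer_time(name, 2**20) * 1e3:6.2f} ms")
```

The model shows why fine-grained applications exchanging many small messages benefit most from InfiniBand (the 8-byte transfer is dominated almost entirely by latency), while bulk transfers such as storage traffic are governed mainly by bandwidth, where 10GbE already closes most of the gap.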
Scyld ClusterWare is an HPC cluster management solution fully compatible with the Red Hat Enterprise Linux and CentOS distributions. We designed Scyld ClusterWare to make the deployment and management of a Linux cluster as easy as the deployment and management of a single system. Scyld ClusterWare makes it possible to leverage the superior price/performance ratio of Linux on commodity hardware without the pain of individually managing a multitude of systems.
Scyld Insight is a web-based cluster management and monitoring GUI. With Scyld Insight, cluster administrators can monitor system metrics that provide insight into a cluster’s health, activity and utilization in real time. They can also configure Scyld ClusterWare and quickly diagnose issues, without needing a high level of HPC cluster expertise.
High-density, multi-socket HPC systems can consume close to 1kW per rack unit, particularly when equipped with GPUs. With these configurations, power consumption of a fully loaded standard rack is typically in the 30 to 40 kilowatt range. This level of current per circuit and power density per square foot requires specialized power delivery and heat dissipation solutions. In partnership with power distribution specialists APC and Servertech, Penguin Computing can help define the right power and cooling solution for your data center.
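As a back-of-the-envelope check on the figures above, the sketch below estimates rack power and circuit current from the ~1 kW-per-rack-unit density. The 42U rack height, the number of usable rack units and the 208 V three-phase circuit are assumptions chosen for illustration:

```python
import math

# Back-of-the-envelope rack power estimate. Per-U power comes from the
# ~1 kW-per-rack-unit figure above; rack size and circuit voltage are
# illustrative assumptions.
WATTS_PER_U = 1000   # dense multi-socket/GPU node: ~1 kW per rack unit
USABLE_U = 36        # of a 42U rack, leaving room for switches and PDUs

rack_watts = WATTS_PER_U * USABLE_U   # lands in the 30-40 kW range
print(f"Estimated rack load: {rack_watts / 1000:.0f} kW")

# Line current on an assumed 208 V three-phase feed: I = P / (sqrt(3) * V)
VOLTS = 208
amps_per_phase = rack_watts / (math.sqrt(3) * VOLTS)
print(f"~{amps_per_phase:.0f} A per phase at {VOLTS} V three-phase")
```

A fully loaded rack at this density draws on the order of 100 A per phase — well beyond what ordinary office-grade circuits deliver, which is why purpose-built power distribution and matching cooling capacity are essential.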