Whitepapers

Using Cloud Computing to Reduce Simulation Turnaround Times and Increase Simulation Throughput

by Shu Yang (IMMI) and Arend Dittmer (Penguin Computing)

FEA simulation throughput directly impacts productivity in any engineering organization that uses computational simulation in its design workflow. In a typically iterative design process, higher simulation throughput and shorter turnaround times allow more design parameters to be explored, which in turn yields product designs closer to the elusive ‘optimal’ solution. Conversely, constrained computational resources often cap simulation throughput and lengthen turnaround times, resulting in less optimal designs or delayed schedules. Offering scalability and a pay-as-you-go payment model, cloud computing promises a way out of this dilemma. This paper provides an overview of Penguin Computing’s public cloud infrastructure, Penguin on Demand (POD), including a discussion of POD’s security model. It then discusses how IMMI, a provider of advanced safety systems, reduced job turnaround time and increased job throughput for LS-DYNA simulations using a hybrid model of in-house compute resources and POD. Specific examples, such as the design of IMMI’s FlexSeat, a three-point belted seat for school buses, and frontal crash simulations of fire trucks, are provided along with the corresponding benchmark data.
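The hybrid model described above can be pictured as a simple scheduling policy: run jobs on the in-house cluster while capacity lasts, and burst overflow work to the cloud. The Python sketch below is a minimal illustration of that idea under assumed names (Job, Resource, dispatch); it is not POD's actual submission interface.

```python
# Illustrative sketch of a hybrid in-house/cloud dispatch policy.
# All names here (Job, Resource, dispatch) are hypothetical; this is
# not POD's actual submission interface.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int            # cores requested by the simulation run

@dataclass
class Resource:
    name: str
    free_cores: int

def dispatch(job: Job, local: Resource, cloud: Resource) -> str:
    """Prefer the in-house cluster; burst to the cloud when it is full."""
    if local.free_cores >= job.cores:
        local.free_cores -= job.cores
        return f"{job.name} -> {local.name}"
    # Cloud capacity is treated as effectively elastic (pay-as-you-go).
    return f"{job.name} -> {cloud.name}"

if __name__ == "__main__":
    local = Resource("in-house-cluster", free_cores=64)
    cloud = Resource("POD", free_cores=10**9)
    for job in (Job("frontal-crash", cores=48), Job("flexseat-sled", cores=32)):
        print(dispatch(job, local, cloud))
```

With these assumed numbers, the first job fits on the in-house cluster and the second bursts to the cloud, which is the turnaround-time benefit the paper quantifies with benchmark data.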

Download


Optimizing Linux Clusters for ANSYS 11

by Joshua Bernstein and Arend Dittmer

In many organizations design engineers perform FEA (Finite Element Analysis) based simulations on personal desktop systems. Even though increasing hardware performance has enabled the solution of complex problems on desktop systems, this approach has limitations. Interactive design work on the desktop is interrupted by compute-intensive simulation runs, negatively affecting productivity. Moreover, this approach requires high-powered desktop systems that are not shared with other users and are therefore fully utilized only for relatively short periods of time. An approach where simple simulations are run on desktop systems and more complex problems are solved on shared 'back-end' compute systems is more efficient. Due to their excellent price/performance ratio, Linux-based clusters of commodity systems have become the dominant platform for these 'back-end' computations. While such clusters are a cost-effective way to address the ever-increasing demand for compute cycles, the concept of achieving high performance through interconnected systems introduces performance and manageability challenges. This paper discusses how the choice of a cluster architecture and the selection of hardware components affect cluster manageability and ANSYS application performance.
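The utilization argument can be made concrete with a back-of-the-envelope comparison. The figures below (ten engineers, two solver-hours each per day) are assumptions chosen for illustration, not numbers from the paper.

```python
# Hypothetical utilization comparison: dedicated workstations vs. a
# shared back-end system of equivalent per-node compute power.
ENGINEERS = 10
SOLVER_HOURS_PER_DAY = 2     # assumed simulation load per engineer
HOURS_PER_DAY = 24           # hardware is available around the clock

# Each dedicated workstation is busy only during its owner's runs.
desktop_utilization = SOLVER_HOURS_PER_DAY / HOURS_PER_DAY
print(f"dedicated workstation: {desktop_utilization:.0%} utilized")  # ~8%

# A single shared node absorbing everyone's runs back to back.
shared_utilization = ENGINEERS * SOLVER_HOURS_PER_DAY / HOURS_PER_DAY
print(f"shared back-end node:  {shared_utilization:.0%} utilized")   # ~83%
```

Under these assumptions a dedicated workstation sits idle more than 90% of the time, while a shared back-end system serving the same aggregate load stays busy most of the day.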

Download


Best Practices in Design and Selection of Linux Clusters for Life Sciences Computing

by Stu Jackson

The design and selection of a Linux cluster for life sciences use is a complex activity that many organizations find daunting. However, a systematic approach to evaluating the various offerings currently on the market can help new life sciences cluster users, as well as those on their second or third cluster, avoid selecting a solution ill-suited to the unique needs of life sciences. This paper outlines a best-practices approach to the selection process, focusing on deployment technology, manageability, and infrastructure, as well as other issues specific to life sciences.

Download


MCAE Applications and Scyld ClusterWare: Maximum Throughput, Minimal Overhead

by Arend Dittmer

Across the diverse areas of the manufacturing industry, information technology (IT) managers struggle to balance their organization's needs for superior designs, improved product quality, and shorter time to market, all within a limited budget. Commodity-based Linux clusters are an efficient means for IT departments in manufacturing organizations to meet the ever-increasing demand for compute cycles. However, the operational, performance, and usability challenges associated with traditional Linux clusters can diminish their attractive price/performance value proposition. An innovative new Linux cluster architecture can help manufacturing organizations maximize the return on investment (ROI) of their Linux clusters by overcoming the problems of traditional approaches.

Download


Scyld ClusterWare™: An Innovative Architecture For Maximizing Return On Investment In Linux Clustering

by Donald Becker and Bob Monkman

Enterprises require commercial-grade high performance computing (HPC) that scales on demand in order to adapt to ever-changing workload requirements and provide optimal system utilization. These needs have in turn driven many useful innovations in cluster and grid computing.

However, these improvements were all based on the fundamental assumption that a cluster or grid configuration must be provisioned as a static, disk-based, full operating system installation on every single server. Rather than removing the underlying problem, this approach merely masks the complexity by adding a second layer of code, and it actually magnifies the operating costs of managing and maintaining large pools of servers.

Many organizations assumed these constraints were inherent in HPC and chose either to live with less-than-optimal return on investment (ROI) or to avoid HPC altogether. In fact, rethinking these fundamental concepts can yield surprising results, eliminating the very complexities that many software 'solutions' strive merely to camouflage. Scyld ClusterWare, for example, turns this flawed assumption on its head and offers an elegantly simple and powerful new paradigm of virtualized cluster computing.

Download


BioComputing with Scyld: Cheaper, Faster, Better
How Scyld Beowulf™ Cluster Computing Breaks Life Sciences' Computing Barriers

by Yannick Pouliot, Ph.D., M.B.A.

All across the life sciences, researchers are hitting a computational wall. Powerful algorithms and software applications for high-speed DNA sequencing and genotyping, ultra-high-throughput compound screening, and MALDI-TOF peptide identification enable researchers to address complex and difficult problems as never before by generating comprehensive experimental data sets. However, the amount of computing power required is often impractically large and therefore expensive.

In this white paper, biocomputational scientist Yannick Pouliot, Ph.D., M.B.A., describes how Linux clusters driven by Scyld ClusterWare clustering software dramatically decrease the run time of complex calculations and provide a compelling alternative to supercomputers and large SMP environments.
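The speedup claim can be made concrete with a standard Amdahl's-law estimate. The sketch below is illustrative only: the 95% parallel fraction is an assumed figure for a nearly embarrassingly parallel workload such as a partitioned sequence search, not a number from the paper.

```python
# Amdahl's-law speedup estimate for a cluster (illustrative only).
# The parallel fraction p is a hypothetical figure; many life-science
# workloads, such as partitioned sequence searches, are close to
# embarrassingly parallel, which is why clusters pay off.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n nodes when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for nodes in (4, 16, 64):
    print(f"{nodes:3d} nodes -> {amdahl_speedup(0.95, nodes):5.1f}x speedup")
```

Even with a modest serial fraction, a commodity cluster delivers an order-of-magnitude run-time reduction, which is the effect the paper describes for Scyld-driven clusters.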

Download