Penguin in the News

Penguin Computing to make Open Compute servers

March 21st, 2013

“Linux server and cluster maker Penguin Computing is a member of the Open Compute Project started by Facebook to create open source data center gear, and now it is an official ‘solution provider.’”

Penguin Pushes Envelope on Compute Density

March 21st, 2013

“In the midst of the GPU Technology Conference this week, Penguin Computing served up a new high-power system, heavy on the GPU/coprocessor side, to meet the needs of HPC customers with heavy processing needs.”

ARM Muscles In on Intel's Dominance in Datacenters

January 28th, 2013

“Among the customers that we serve, both HPC as well as large web farms, one of the common themes that comes up is that in the world of semiconductors today, it's all about performance per watt,” says Charles Wuischpard, CEO of Penguin Computing.

AMD Rolls Out Open Compute Servers for Wall Street

January 16th, 2013

“We have eagerly awaited the AMD Open 3.0 platform as it brings the benefits and spirit of the Open Compute Project to a much wider set of customers,” said Charlie Wuischpard, CEO of Penguin Computing. “As we deliver a new line of Penguin servers based on AMD Open 3.0 and AMD Opteron 6300 processors, our high performance computing, cloud, and enterprise customers can now deploy application specific systems using the same core building blocks that are cost, performance, and energy optimized and perhaps most important, consistent. We think this initiative eliminates unnecessary complexity and provides new levels of supportability and reliability to the modern data center.”

AMD Honors Top Channel Partners Helping Drive Success in North American Commercial Market

August 1st, 2012

AMD (NYSE: AMD) announced the recipients of its North American Commercial Channel Partner Awards at the annual AMD North American Executive Commercial Channel Summit in San Francisco on July 25, 2012. During the two-day event, AMD’s valued channel partners came together to share insights on topics that will help define the future of the commercial channel.

HPC Clouds – An Opportunity Not Just for the “Missing Middle”

Digital Manufacturing Report

July 25, 2012

The term “missing middle” was coined by the Council on Competitiveness and refers to small and medium-sized manufacturers that are missing out on the benefits of advanced modeling and simulation technologies. Capital and operational expenses associated with high performance computing (HPC) deployments required to run simulation codes are a barrier to entry for these organizations.

Penguin Wins Award, Keeps Wearing Tux

HPC Wire

April 27, 2012

Penguin Computing is flapping its wings, as it has received Intel's Data Center Innovation Award at the Intel Solutions Summit.

A press release describes the award as going to a company that exhibits "… successful deployment and integration of a data center or server solution resulting in superior return on investment for the client." This year, the Penguin on Demand (POD) cloud service gained that recognition from Intel.

The 7 Best Servers for Linux

Server Watch

January 6th, 2012

Another under-the-radar Linux server supplier is Penguin Computing, although it's been in business for more than ten years. The company focuses more on high performance computing (HPC), cloud computing and high-efficiency computing solutions than on standard data center workload fodder. Its line of high-efficiency systems for server farms compares to other server systems on this list. Unfortunately, there's no pricing information on the website. System pricing is by quotation only.

Systems ship with a standard three-year parts warranty. On-site support is an extra charge. You can customize systems with the online configurator page but, again, your system's price is a mystery until you submit the configuration for a quote.

Video: Penguin Computing Talks APU Supercomputing at SC11


insideHPC

November 27th, 2011

In this video, Penguin Computing CTO Phil Pokorny discusses the company’s latest innovation: an APU-based supercomputer deployed at Sandia National Labs. Recorded at SC11 in Seattle.

Whamcloud, Penguin Computing Sign Lustre Support Agreement

HPCwire

November 16th, 2011

DANVILLE, Calif., and FREMONT, Calif., Nov. 15 — Whamcloud, a venture-backed company formed from a worldwide network of high-performance computing (HPC) storage industry veterans, announced today a worldwide Lustre support agreement with Penguin Computing, experts in integrated high performance computing (HPC) solutions. The agreement will allow a wide range of cooperation on Penguin’s HPC cluster solutions and Penguin's on-demand HPC cloud service, known as Penguin Computing On-Demand (POD). Support services will begin immediately.

New Supercomputers Outfitted with Latest AMD Processors

HPCwire

November 2nd, 2011

SUNNYVALE, Calif., Nov. 2 -- AMD (NYSE: AMD) today announced several new installations of advanced research and academic supercomputers will run on a wide range of AMD technology including the upcoming 16-core processor codenamed “Interlagos,” the AMD Fusion Accelerated Processing Unit (APU) and the AMD Opteron™ 6100 Series processor. Included among the latest deployments are Cray Inc. (Nasdaq: CRAY) supercomputers at the University of Edinburgh (HECToR), Oak Ridge National Laboratory (ORNL), University of Stuttgart (HLRS) and Swiss National Supercomputing Centre (CSCS).

Big Things Come in Small Packages

AMD Blog

November 1st, 2011

As the evangelist for AMD Opteron™ processors, it isn’t often that I get to talk about some of the things taking shape on the consumer side of the house with our Accelerated Processing Units (APUs). But every now and then something comes up that catches my attention.

For example, I recently blogged about running a home server on an APU. But that was a real low-power server with very light workloads; I chose the APU for the power efficiency, and the GPU side of my APU rarely ever comes to life unless I tunnel in via remote console for management.

Vendor Showdown Puts HPC Clouds in Spotlight

HPC in the Cloud

October 3rd, 2011

Most conferences provide an opportunity for event sponsors to get their messages across to attendees in one way or another, at the very least by providing a platform to talk amidst the glow of a PowerPoint presentation. Oftentimes, these overviews address audiences at large—and avoid the “big questions” about potential problems, drawbacks or other points of weakness.

Biotechs Hop on Cloud Nine with Better Resources Gained from New Technology

www.genengnews.com

May 23rd, 2011

Once described as less a technological advance than a new business model, cloud computing offers small biotechs access to big technology and heavy computing power relatively cheaply. Ranging from complete IT infrastructures in the sky to basic data storage, features of cloud computing include resource outsourcing, utility computing, large collections of inexpensive machines, automated resource management, virtualization, and parallel computing.

Opening Sequences for HPC on Demand

www.hpcinthecloud.com

May 11th, 2011

Next generation DNA sequencing has brought a wealth of opportunities in research, pharmaceutical and clinical contexts, but for those who are in the high performance computing space, this particular market is bursting with a different array of opportunities. From specialty clusters dedicated exclusively to crunching the overwhelming amounts of data coming off sequencers (not to mention the storage might to keep it all in check), the biosciences industry is a prime target for vendors of all stripes.

BioIT World 2011 Boston, MA - Day 2

About SOLiD

April 13th, 2011


The morning started with a Plenary Keynote by Bryn Roberts from Roche. His talk was titled "Interacting with Complex Information Landscapes: Integration and Next Generation User Interfaces." He talked about the challenges of Pharma, and focused on integration tools. He asked the bioinformatics audience to be innovative and create solutions beyond what scientists want today. He believes that team decision making will accelerate discovery, and feels technology can assist this essential step for scientific breakthroughs.

Genomes, Clouds, and No Headaches

www.genomeweb.com

April 13th, 2011

Probably the best sound bite from day two of the Bio-It World Expo in Boston was provided by Nicholas Socci, assistant director of the Bioinformatic Core at Memorial Sloan Kettering Cancer Center: “Either the computers are ready for me to use, in the way that I want to use them, or they’re not ready, and those are the only real pros and cons.”

Penguin Computing overclocks Opterons for Wall Street

The Register

April 4th, 2011

Linux server specialist Penguin Computing has jumped into the overclocked server fray with a new Altus server aimed at clock-hungry high frequency stock trading applications.

Novell and Penguin Computing - Partnering to deliver services and solutions to a diverse Linux customer base

Novell PODCast

February 9th, 2011

Dan Dufault, Novell’s Global Director of Partner Marketing, recently sat down with Penguin Computing’s Chief Hardware Architect, Philip Pokorny, and the company’s Director of Product Management for High Performance Computing, Arend Dittmer, to discuss their experience providing solutions to the Linux market as well as Penguin’s partnership with Novell. Specializing 100% in Linux since 1998, Penguin’s areas of expertise include high performance computing, cluster management, as well as on-demand HPC resources.

Podcast: Penguin On Demand is One Cloud That’s All About HPC

Inside HPC

December 7th, 2010

In this podcast, Penguin Computing product manager Arend Dittmer shares his insights into what the company has learned from 18 months of Penguin on Demand. The POD cloud is all about HPC, and they built it that way from the ground up.

Penguin launches on demand HPC utility

insideHPC.com

August 13th, 2009

This week Penguin Computing announced the launch of a new service called “Penguin On Demand” — POD for short. The service is targeted specifically at the needs of scientific computing users, and Penguin is positioning it against the most successful of the on-demand computing resources available today:

Penguin Adds HPC On-Demand Service

HPCwire.com

August 12th, 2009

Linux cluster maker Penguin Computing hopped on the HPC-in-a-cloud bandwagon this week with the announcement of its HPC on-demand service. Called Penguin On Demand (POD), the service consists of an HPC compute infrastructure whose capacity can be rented on a pay-as-you-go basis or through a monthly subscription.

Penguin Offers Cloud Computing for HPC

Eweek.com

August 11th, 2009

Linux cluster vendor Penguin Computing has created a cloud computing environment aimed at the HPC space. Penguin’s POD service is built on the vendor's Intel-powered Linux clusters, high-speed interconnect technologies like InfiniBand, NetApp SANs, Nvidia graphics chips and Penguin’s Scyld ClusterWare management software, all important technologies for highly parallel, memory-intensive HPC applications. Penguin also is not using virtualization technologies on its server clusters, which officials said will improve server and I/O performance.

Penguin Puts High-Performance Computing in the Cloud

www.PCworld.com

August 11th, 2009

IDG News Service — Penguin Computing, which builds high-performance Linux clusters for tasks like weather modelling and product design, is taking its business into the cloud.

Penguin on Demand

Cloud Computing Journal

August 11th, 2009

Penguin Computing is going into the cloud business.

Not your ordinary cloud, mind you, an HPC cloud, called Penguin on Demand (POD), an extension of its usual fare, and a first.

Penguin puts Linux supercomputer in sky

The Register

August 11th, 2009

Hitching a ride on that ubiquitous cloud metaphor, Penguin Computing has unveiled a Linux supercomputer in the sky.

Today, the San Francisco-based outfit announced the debut of what it calls Penguin on Demand - POD, for short - a service that offers remote access to high-performance computing (HPC) Linux clusters. The idea is to provide researchers, engineers, and simulation scientists with the sort of number-crunching power they can't get from something along the lines of Amazon's Elastic Compute Cloud (EC2).

Don Becker On The State Of HPC

Linux Magazine

August 4th, 2009

Linux Magazine HPC Editor Douglas Eadline had a chance recently to discuss the current state of HPC clusters with Beowulf pioneer Don Becker, Founder and Chief Technical Officer, Scyld Software (now part of Penguin Computing). For those who may have come to the HPC party late, Don was a co-founder of the original Beowulf project, which is the cornerstone for commodity-based high-performance cluster computing. Don’s work in parallel and distributed computing began in 1983 at MIT’s Real Time Systems group. He is known throughout the international community of operating system developers for his contributions to networking software and as the driving force behind beowulf.org.

Adaptec Introduces Series 5Z RAID Controllers With First Maintenance Free, Flash-Based Cached Data Protection for On-Demand Cloud Computing Data Centers

www.merinews.com

June 24th, 2009

Innovative, High-Performance Adaptec Series 5Z Unified Serial(R) RAID Controllers Reduce Data Center Operating Costs, Enhance Data Protection, and Minimize Environmental Hazards.

Scyld announces a new, extensible cluster management console

InsideHPC.com

June 3rd, 2009

To date the Scyld offering at Penguin has focused primarily on cluster operating system and provisioning management through the Scyld ClusterWare solution, but today they announced a new product for your cluster: the Integrated Management Framework (IMF).

Nvidia, Supermicro Tout 'Highest-Performance 1U Server'

PCMAG.com

June 1st, 2009

Nvidia and SuperMicro will team up on a 1U server that combines two CPUs and two GPUs, all to be used for computationally intensive algorithms.

Building Market Awareness on a Budget

Utah Business

June 1st, 2009

Marketing programs can be one of the first casualties in a down economy. Such budgetary decisions are usually made with short-term objectives in mind, but can often lead to long-term negative results if not properly managed. Even with limited budgets, there are a number of smart, creative, and low cost methods for continuing to build market awareness, while weathering the economic storm:

Processing Prowess in a Small Package

IT Business Edge

May 31st, 2009

The advantages of adding the parallel capabilities of graphics processors to dual- and quad-core CPUs in advanced server designs are on display this week, but you'll have to go to Taiwan to see it.

NVIDIA Shifts GPU Clusters Into Second Gear

HPCwire.com

May 4th, 2009

GPU-accelerated clusters are moving quickly from the "kick the tires" stage into production systems, and NVIDIA has positioned itself as the principal driver for this emerging high performance computing segment.

Linux Cluster Vendor Penguin Making Its Move

eweek.com

April 27th, 2009

Penguin Computing, which makes Linux-based virtualized computing clusters, is looking to build on its solid financials to expand its business. Penguin is building an on-demand system for HPC customers, and is looking to grow through acquisition. Penguin officials say the recent deals in the infrastructure space, including Oracle buying Sun and Rackable acquiring SGI, give the company opportunities to gain greater traction in the competitive HPC market.

HPC Vendors Jump On Nehalem

HPCwire.com

April 2nd, 2009

Intel's Xeon 5500 series processor, the follow-on to Harpertown and the chip formerly known as Nehalem-EP, was launched this week, and computer vendors the world over collectively exhaled their announcements onto the IT press.