
Leveraging Persistent Memory for Real-Time Analytics Workloads

A rapidly developing class of real-time analytics and machine learning workloads requires immense data sets. Conventional data management and storage methods struggle to keep up with the speed and accessibility these workloads demand. Unfortunately for most data scientists, this problem is only going to get worse.

According to IDC, worldwide data is growing at a 26.0% CAGR, and 143 zettabytes of data will be created in 2024. What is less widely appreciated is that real-time data is growing much faster: it made up less than 5% of all data in 2015 and is projected to comprise almost 30% of all data by 2024. IDC also projects that by 2021, 60-70% of the Global 2000 will have at least one mission-critical real-time workload.

To reach adequate performance levels, users must find ways to remove the bottlenecks that slow down their access to data. At Penguin Computing™, the best approach we have seen to this problem is to keep data entirely in memory with a technology like MemVerge, which provides scale-out DRAM and persistent memory so that your data stays in constant contact with your compute resources.

To learn more about this emerging concept, watch this Big Memory case study by Kevin Tubbs, PhD, Senior Vice President of Penguin Computing's Strategic Solutions Group. Kevin describes how the Facebook Deep Learning Recommendation Model (DLRM) needs terabytes of memory for its embedding tables, and how Optane PMEM and MemVerge Memory Machine software will enable customers to scale memory for DLRM and across their data center, embedded, and wireless (IoT) businesses.
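
To put that memory requirement in perspective, here is a rough back-of-the-envelope sketch of how DLRM-style embedding tables add up. The table shapes and feature counts below are illustrative assumptions, not the actual DLRM production configuration.

```python
# Rough estimate of DLRM-style embedding table memory.
# The table sizes below are illustrative assumptions, not a real
# production configuration.

BYTES_PER_FLOAT32 = 4

def embedding_table_bytes(num_rows: int, embedding_dim: int) -> int:
    """Memory footprint of one dense float32 embedding table in bytes."""
    return num_rows * embedding_dim * BYTES_PER_FLOAT32

# Hypothetical sparse features: (rows per table, embedding dimension).
# Recommenders commonly have dozens of categorical features, so the
# group below is repeated several times.
tables = [
    (500_000_000, 128),  # e.g. a user-ID-scale feature
    (200_000_000, 128),  # e.g. an item-ID-scale feature
    (50_000_000, 64),    # smaller categorical features
] * 6

total_bytes = sum(embedding_table_bytes(rows, dim) for rows, dim in tables)
print(f"Total embedding memory: {total_bytes / 1e12:.2f} TB")
# With hundreds of millions of rows spread across many tables, the
# footprint reaches multiple terabytes -- well beyond typical per-node
# DRAM, which is why tiering into persistent memory is attractive.
```

Running this sketch yields on the order of 2 TB for the embedding tables alone, before accounting for activations, optimizer state, or the rest of the model.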