We talk to Panasas about the need for storage that can cope with delivering high volumes of small files for artificial intelligence, with the throughput and latency needed to service costly GPUs.

By Antony Adshead, Storage Editor

Published: 01 Sep 2022

We talk to Jeff Whitaker, vice-president for product marketing at scale-out NAS maker Panasas, about why storage is the key bottleneck in artificial intelligence (AI) processing.

In this podcast, we look at the key requirements of AI processing and how paying attention to the storage it requires can bring benefits.

Whitaker talks about the need to get lots of data into AI compute resources quickly, and how some are tempted to throw compute at the problem. Instead, he argues, attention should be paid to storage with the right throughput and latency performance profiles to deal with the lots of small files found in AI workloads.

Antony Adshead: What are the challenges organisations face when it comes to storage for high-performance applications?

Jeff Whitaker: When it comes to high-performance applications…the application is trying to get to results fast. It’s trying to get to a decision, trying to get information back for the environment that is using the application.

There’s often a heavy reliance on the compute side of this, and sometimes an over-reliance. A lot of times that can be resolved by [asking], what [does] a typical application environment look like? It’s compute, it’s network and it’s storage.
