

Flash, savior of the Metaverse

Philippe Nicolas, Founder and Analyst, Coldago Research

How is the rise of AI and machine learning reshaping storage architectures?

We need to consider two aspects: Storage for AI and AI for or in Storage.

Storage for AI puts pressure on the storage layer, and here we essentially see the confirmation of parallel access and of disaggregated architectures, where the access layer and the storage layer are decoupled. This decoupling helps maximize performance at scale in IOPS, bandwidth and latency, three key metrics for demanding I/O environments, as sketched below. Clearly, flash media and NVMe connectivity, local or remote, are two fundamental elements.
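To make those three metrics concrete, here is a minimal, hypothetical sketch of a parallel random-read measurement in Python. The file path and parameters are placeholders, and a real benchmark would use a dedicated tool such as fio; this only illustrates how parallel access drives IOPS, bandwidth and latency together.

```python
# Minimal sketch: parallel 4 KiB random reads against a local file,
# reporting IOPS, bandwidth and average latency. Illustrative only.
import os, time, random
from concurrent.futures import ThreadPoolExecutor

PATH = "/tmp/testfile"     # placeholder target; must exist and exceed BLOCK
BLOCK = 4096               # 4 KiB random reads
N_OPS = 10_000
WORKERS = 16               # parallel access streams

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)

def one_read(_):
    # Block-aligned random offset; os.pread releases the GIL during I/O,
    # so the worker threads genuinely overlap their reads.
    offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    return time.perf_counter() - t0

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = list(pool.map(one_read, range(N_OPS)))
elapsed = time.perf_counter() - start
os.close(fd)

print(f"IOPS:      {N_OPS / elapsed:,.0f}")
print(f"Bandwidth: {N_OPS * BLOCK / elapsed / 1e6:.1f} MB/s")
print(f"Latency:   {sum(latencies) / N_OPS * 1e6:.0f} µs avg")
```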

AI for or in Storage is different. Users expect new services from vendors in existing or new solutions thanks to AI extensions or integration.

At Coldago, we include these two dimensions in our recent studies and reports.

What will be the biggest shifts in enterprise data storage technology over the next two to five years?

The full-flash data center is a reality, pushing HDD and tape to secondary storage for protection. Production undoubtedly relies on flash. NVMe is the new reference in connectivity for media, and its network companion, NVMe-oF, is paramount.

All vendors are working on their disaggregated architecture offerings. They all understand that this “any-to-any” model is the new direction.

The notion of a platform is also important, consolidating usages and data into global instances able to present the same content through various access methods, whether file protocols or the S3 API.
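A minimal sketch of that multi-access idea, assuming a hypothetical platform that exposes one namespace over both NFS and S3 (the endpoint, bucket and mount point below are invented for illustration):

```python
# Sketch of the "platform" idea: one global namespace, two access methods.
import boto3

s3 = boto3.client("s3", endpoint_url="https://storage.example.com")

# Same object, read through the S3 API...
obj = s3.get_object(Bucket="analytics", Key="datasets/events.parquet")
via_s3 = obj["Body"].read()

# ...and through a file protocol (NFS mount of the same namespace).
with open("/mnt/analytics/datasets/events.parquet", "rb") as f:
    via_nfs = f.read()

assert via_s3 == via_nfs   # one dataset, multiple access methods
```

The design point is that both paths hit the same stored data, so there is no second copy to synchronize.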

The U3 – Universal, Unified and Ubiquitous – Storage model, which I introduced almost 15 years ago, is even more relevant today, perfectly illustrated by cloud flavors, edge computing and core data centers.

How do you foresee the demand for all-flash and full-stack solutions developing?

All-flash, full-stack solutions and deployments are already a reality, with cost probably the only limiting factor, as the capacity crown now belongs to SSD, with 128TB drives announced by a few players. Clearly, the advantage that HDD, and even tape, had in the past has disappeared in favor of SSD. And look at density: flash media really leads the pack here. All these media keep growing, but at different paces, and flash and SSD prices will continue to drop.

So, is there a place for other storage media technologies? The answer depends on the tier. For secondary storage for protection – backup or archive – HDD and tape are definitely a very good choice, but for primary storage, the production tier, we have definitely entered the flash era.

At MWC 2025, Huawei demonstrated the OceanStor Pacific 9928 with 36 NVMe SSDs, planned to receive 128TB capacity drives, delivering 4PB raw in 2U at just 0.25 watts per TB: pretty impressive. Cost is still in favor of HDD and tape at the media level, but it is more relevant to compare systems or arrays together with their coupled services. So the role of the solution is the key criterion.
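As a back-of-the-envelope check of those figures (assuming decimal terabytes; the vendor's exact accounting may differ):

```python
# Rough arithmetic on the quoted density and efficiency figures.
raw_capacity_tb = 4_000                       # 4 PB raw in a 2U enclosure
watts_per_tb = 0.25
print(raw_capacity_tb * watts_per_tb, "W")    # -> 1000.0 W, ~500 W per U
```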

What are the relative merits of business-controlled storage platform tools as against a Cloud-based storage model – Storage as a Service? What factors should enterprises consider when assessing which is best for their requirements?

Not all applications and users fit in the cloud, but everyone loves the cloud operating model: how it is purchased, charged, deployed, upgraded and supported. So Storage-as-a-Service appears to be a good model wherever it is deployed, even on-premises. Its beauty is the harmonization of the storage service and infrastructure, whatever units, devices or services are deployed.

Of course, maintenance, support, upgrades and technology refreshes or evolutions are key items to pay attention to. And again, TCO is paramount.

The key player here is the vendor's partner, who knows both the vendor's solutions and the users' environments. Users must select the right partner to help them choose the right solutions for their intelligent transformation.

With ransomware threats growing, how can businesses best protect their storage infrastructure?

We already have many tools, techniques and processes to detect and prevent ransomware, and users have to ask vendors the right questions about their dedicated extensions for such threats. Key to gaining trust is knowing whether a vendor has been successful against past threats; demos and conversations with other customers are also key, as data sheets are not enough. The entire chain is important: encryption, snapshots, CDP with versioning, replication, air gaps, and users' roles and access rights. Data redundancy is critical here, with the capability to stop attack propagation coupled with functions to detect and potentially prevent it, but the fundamental function resides in restoring or re-exposing the original, intact data.
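One link in that chain, immutable versioned backups, can be sketched with the S3 Object Lock API; the endpoint and bucket names below are hypothetical, and most modern storage platforms offer an equivalent WORM capability:

```python
# Sketch: write-once, versioned backup copies that an attacker with
# stolen credentials cannot overwrite or delete during retention.
import boto3

s3 = boto3.client("s3", endpoint_url="https://backup.example.com")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(
    Bucket="backups-immutable",
    ObjectLockEnabledForBucket=True,
)

# Default retention: every backup version is WORM-locked for 30 days.
s3.put_object_lock_configuration(
    Bucket="backups-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Backups written here keep older versions restorable even if
# ransomware later encrypts and re-uploads the same keys.
s3.put_object(Bucket="backups-immutable", Key="db/dump-2025-01-01.gz",
              Body=b"...backup payload...")
```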

How can storage vendors balance sustainability and energy efficiency with high-performance demands?

It’s a hot topic amidst the new AI wave we live in. Three dimensions are key: erasure coding, data reduction and power optimization.

At scale, protecting data with a high level of resiliency is a must, as multiple full copies only deliver redundancy at a clearly prohibitive cost. The best approach appears to be erasure coding, which exists in various implementations.
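A toy sketch of the erasure-coding principle, using a single XOR parity; production systems use Reed-Solomon-style schemes with multiple parity shards:

```python
# k data shards plus one XOR parity: a lost shard is rebuilt from the
# survivors instead of keeping full extra copies. Illustration only;
# this tolerates exactly one lost shard.
from functools import reduce

def encode(data: bytes, k: int):
    """Split data into k shards and append one XOR parity shard."""
    shard_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(shard_len * k, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
    return shards + [parity]

def recover(shards, lost: int):
    """Rebuild the single lost shard by XOR-ing all survivors."""
    survivors = [s for i, s in enumerate(shards) if i != lost and s is not None]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

shards = encode(b"protecting data with resiliency", k=4)
original = shards[2]
shards[2] = None                   # simulate losing one shard
assert recover(shards, lost=2) == original
# 4+1 shards store ~1.25x the data; three full copies would store 3x.
```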

Data reduction, with compression and deduplication to shrink the occupied data footprint, is also a key contributor.
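A compact, illustrative sketch of both techniques, fixed-size chunk deduplication plus compression of the unique chunks; real systems use variable-size chunking and tuned codecs:

```python
# Dedup: store each unique chunk once, keyed by its content hash,
# then compress the unique chunks before storing them.
import hashlib, zlib

CHUNK = 4096
store = {}                       # content hash -> compressed chunk

def write(data: bytes):
    """Return the recipe (list of hashes) needed to read data back."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                  # dedup: keep one copy
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return recipe

def read(recipe):
    return b"".join(zlib.decompress(store[d]) for d in recipe)

data = b"A" * CHUNK * 8 + b"B" * CHUNK * 4       # highly redundant input
recipe = write(data)
assert read(recipe) == data
stored = sum(len(c) for c in store.values())
print(f"logical {len(data)} B -> stored {stored} B")  # only 2 unique chunks
```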

Optimization of energy, meaning electricity consumption, is also fundamental: all users expect to do more with less, to support more IT services with fewer devices and less energy… and, if possible, with a reduced footprint.

If these three functions are combined, the final effect is clearly accelerated and multiplied.
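Illustrative arithmetic, with assumed but typical ratios, shows why combining the levers multiplies the effect:

```python
# Assumed ratios for illustration: 3x replication vs ~1.33x erasure
# coding (e.g. a 12+4 scheme), plus 2:1 data reduction, for 1 PB logical.
logical_pb = 1.0
replication_raw = logical_pb * 3            # three full copies
ec_raw = logical_pb * 4 / 3                 # erasure coding alone
ec_reduced_raw = (logical_pb / 2) * 4 / 3   # 2:1 reduction, then EC

print(f"3x replication:     {replication_raw:.2f} PB raw")
print(f"erasure coding:     {ec_raw:.2f} PB raw")
print(f"EC + 2:1 reduction: {ec_reduced_raw:.2f} PB raw")
# ~4.5x less raw capacity, hence fewer devices and less energy.
```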