The current generation of storage arrays is not designed to make use of NVMe flash storage. Arrays need a radical architectural overhaul to exploit the NVMe protocol, which can boost flash storage performance by a factor of tens or hundreds.
NVMe is a protocol based on Peripheral Component Interconnect Express (PCIe) designed to transport data to and from flash drives. It eliminates the need for the storage stalwart SCSI protocol and increases the number of queues, and the possible queue depths, by orders of magnitude. In doing so, NVMe allows flash drives to operate at their full potential.
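The queue arithmetic behind that claim can be sketched in a short calculation. The NVMe figures are the protocol maximums from the NVMe specification; the SCSI queue depth used here is a typical illustrative value, not a hard limit:

```python
# Rough comparison of outstanding-command capacity: legacy SCSI vs NVMe.
# SCSI values are typical/illustrative; NVMe values are spec maximums.
SCSI_QUEUES = 1          # legacy SCSI: a single command queue per device
SCSI_QUEUE_DEPTH = 256   # a typical maximum queue depth

NVME_QUEUES = 65_535       # NVMe allows up to 64K I/O queues...
NVME_QUEUE_DEPTH = 65_535  # ...each up to 64K commands deep

scsi_outstanding = SCSI_QUEUES * SCSI_QUEUE_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH

print(f"SCSI: {scsi_outstanding:,} outstanding commands")
print(f"NVMe: {nvme_outstanding:,} outstanding commands")
print(f"Ratio: roughly {nvme_outstanding // scsi_outstanding:,}x")
```

Even with a generous queue depth assumed for SCSI, the gap runs to millions of times more commands in flight, which is what lets NVMe drives exploit the internal parallelism of flash.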
But currently that potential is only realisable by putting NVMe drives straight into server PCIe slots. A fully featured storage array built on NVMe is out of reach at present.
That’s because when many NVMe drives are aggregated as shared storage, controller hardware is required to manage protocol handling, physical addressing and provisioning, as well as storage services such as data reduction, replication and encryption.
“The real problem that arises with flash is that the balance of performance has changed massively over the past 10 years in terms of compute,” said O’Neill.
“If you put NVMe in, the hosts still talk using SCSI. If there’s no NVMf [NVMe-over-fabrics] between host and storage, the protocol on the front end becomes a bottleneck.”
NVMe-over-fabrics is a transport method that encapsulates NVMe in networking protocols (RDMA, Fibre Channel) that can carry it over longer distances, in other words between hosts and storage arrays.
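On a Linux host, an NVMe-over-fabrics connection over RDMA is typically set up with the open-source nvme-cli tool. The address, port and subsystem NQN below are placeholders, not values from any specific array:

```shell
# Discover NVMe subsystems exported by a target (placeholder address/port)
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to one of the discovered subsystems over RDMA
# (-n takes the subsystem NQN reported by the discover step)
nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2017-01.com.example:subsys1

# The remote namespace then appears as a local NVMe block device
nvme list
```

The key point is that, once connected, the host addresses the remote flash with native NVMe commands rather than translating them into SCSI at the front end.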
But to offer shared storage as we know it, via a storage array with storage services, the controller must be able to bring to bear the processing power needed to handle that functionality.
This is where current products lack the architecture required. “Many things need to be solved to allow the full suite of storage services to run at full speed,” said O’Neill. “Namely optimising controller software.
“Most arrays are designed for active/passive operations and can’t scale beyond a dual-controller pair. Some are active/active and can scale beyond that, but when you add a second pair it creates two separate pairs, and what you need is multiple clustered controllers in a scale-out architecture.”
Kaminario’s interest here is that its controllers are, it claims, built to scale out, although they do not yet have full NVMe or NVMf functionality.
O’Neill would not be drawn on when his company would launch such functionality. “We are working towards this and are looking forward to announcing something,” he said. “We’re not talking long-term timescales, but I can’t say when that will be.”