Why is Array Integration with VMware So Critical?
Do you enjoy playing games like “telephone” and “charades”? It’s fun to guess what other people are thinking when sitting around the campfire or kitchen table. But guessing the motivation and intent of others has no place in enterprise IT. This is why “integration” will become the key feature of IT products for the next decade: Virtualization and cloud are adding layer upon layer to IT infrastructure, and the time has come for communication mechanisms that allow clear messages to cut through the haze. This is the intent of VMware’s vStorage API for Array Integration (VAAI), and why I consider it to be one of the most important developments in enterprise storage in the last decade.
VAAI was introduced in vSphere 4.1 and quickly became one of the most discussed features in VMware blogs and user groups. This was not because of some cutting-edge feature in VAAI itself. In fact, the three basic functions (known as “primitives” in VMware parlance) are technical “nuts and bolts” affairs. But techies immediately recognized the value of integrating the VMware hypervisor with storage arrays, since storage performance is so critical to server virtualization environments.
It’s hard to say which of the three VAAI primitives is most important, but the block zeroing command is perhaps the most widely implemented. This is a communication mechanism that allows the hypervisor to tell a storage array to write zeros across a whole range of blocks (when initializing a new VMDK, for example) instead of pushing every one of those zeros over the storage network. A thin provisioning capable array need not even allocate the zeroed capacity, and can use it for some other purpose. VAAI block zeroing uses either custom interfaces or the standard T10 WRITE SAME command that many arrays support.
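As a concrete sketch, the standard command behind this primitive is WRITE SAME(16). A minimal Python rendering of its command descriptor block (field offsets per the T10 SBC specification; not validated against a real array) looks like this:

```python
import struct

def write_same_16_cdb(lba: int, num_blocks: int, unmap: bool = False) -> bytes:
    """Build a WRITE SAME(16) CDB (opcode 0x93).

    Paired with a single block of zeros as the data-out buffer, this is
    the standard form of VAAI block zeroing: the array fills the whole
    range with zeros itself instead of the host streaming them over the
    storage network."""
    flags = 0x08 if unmap else 0x00  # UNMAP bit: also deallocate the range
    # opcode, flags, 64-bit LBA, 32-bit block count, group, control
    return struct.pack(">BBQIBB", 0x93, flags, lba, num_blocks, 0, 0)

# Zero 8,192 blocks starting at LBA 2,048 with one 16-byte command
cdb = write_same_16_cdb(lba=2048, num_blocks=8192)
```

Sixteen bytes of command replace megabytes of zeros on the wire, which is the whole point of the primitive.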
VAAI full copy can be a real timesaver, allowing the hypervisor to command the storage array to copy blocks itself (when cloning a VMDK or an entire LUN, for example) rather than reading and writing the entire contents over the storage network. Only custom array interfaces are supported in vSphere 4.1, but version 5 supports the standard T10 EXTENDED COPY (XCOPY) command as well.
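To see why this matters, here is a deliberately simplified accounting model in Python (no real SCSI here, just arithmetic) contrasting the data a host moves in a software copy with an offloaded full copy; the 512-byte command overhead is an illustrative round number, not a spec value:

```python
# Toy model: bytes crossing the host's storage network for each approach.

def host_based_copy_bytes(vmdk_bytes: int) -> int:
    # The hypervisor READs every block, then WRITEs it back out: 2x traffic.
    return 2 * vmdk_bytes

def vaai_full_copy_bytes(command_overhead: int = 512) -> int:
    # One offloaded copy command; the data itself moves inside the array.
    return command_overhead

forty_gib = 40 * 2**30
print(host_based_copy_bytes(forty_gib) // 2**30, "GiB over the network")
print(vaai_full_copy_bytes(), "bytes over the network")
```

Cloning a 40 GB virtual machine thus drops from roughly 80 GB of fabric traffic to essentially none.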
The final VAAI primitive is harder for many people to comprehend, since it does not actively cause the array to perform a function. Hardware-assisted locking, also known as “atomic test and set” (ATS), allows multiple ESXi hosts to share a single VMFS volume on a SCSI LUN without blocking each other’s I/O: rather than reserving the entire LUN just to update a piece of on-disk metadata, a host atomically tests and sets only the small lock it needs. This can really smooth performance when many virtual machines share a datastore. The effects are so noticeable that VMware decided to extend the use of this locking mechanism throughout vSphere 5.
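The semantics are easy to model. Below is a toy in-memory Python sketch of the atomic compare-and-write step an array performs for ATS; the class and the “LUN” are illustrative inventions, not any real API:

```python
import threading

class ToyAtsLun:
    """Toy model of hardware-assisted locking (illustrative only; real
    arrays implement this as the SCSI COMPARE AND WRITE command). In
    one atomic step the array checks that a lock sector still holds the
    expected bytes and, only if it does, overwrites them, so a host
    never needs to reserve the entire LUN to take a lock."""

    def __init__(self, sectors: int):
        self._sectors = [b"free"] * sectors
        self._atomic = threading.Lock()  # stands in for array-side atomicity

    def compare_and_write(self, sector: int, expected: bytes, new: bytes) -> bool:
        with self._atomic:
            if self._sectors[sector] != expected:
                return False  # miscompare: another host changed the lock first
            self._sectors[sector] = new
            return True

lun = ToyAtsLun(sectors=8)
assert lun.compare_and_write(0, b"free", b"hostA")      # hostA takes the lock
assert not lun.compare_and_write(0, b"free", b"hostB")  # hostB loses the race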
Expanding VAAI Support
vSphere 5 enhances VAAI in two important ways: additional primitives are added for block storage, and NFS devices are supported for the first time.
On the block storage side (Fibre Channel and iSCSI devices), VAAI gains true thin provisioning support: vSphere 5 can reclaim space in VMFS after a Storage vMotion or VMDK deletion using the standard SCSI UNMAP command. VMware also “stuns” affected virtual machines, rather than letting them crash, if a thin provisioning storage array runs out of space.
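The UNMAP command itself is small: a 10-byte CDB plus a parameter list naming the extents to release. A Python sketch of the layout (field offsets per the T10 SBC specification; untested against real hardware) follows:

```python
import struct

def unmap_command(extents):
    """Build the SCSI UNMAP CDB (opcode 0x42) and its parameter list.
    `extents` is a list of (lba, num_blocks) pairs to deallocate."""
    # 16-byte block descriptors: 64-bit LBA, 32-bit count, 4 reserved bytes
    descriptors = b"".join(
        struct.pack(">QI4x", lba, blocks) for lba, blocks in extents)
    # 8-byte header: data length (bytes following the field), descriptor length
    header = struct.pack(">HH4x", len(descriptors) + 6, len(descriptors))
    param_list = header + descriptors
    # 10-byte CDB: opcode, anchor flag, reserved, group, list length, control
    cdb = struct.pack(">BB4xBHB", 0x42, 0, 0, len(param_list), 0)
    return cdb, param_list

# Release two dead extents left behind by a deleted VMDK
cdb, param_list = unmap_command([(0, 2048), (1_000_000, 512)])
```

Because many extents fit in one parameter list, a single command can hand a large amount of dead space back to the array.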
But the big news for VAAI in vSphere 5 is the addition of NFS support. This reflects the growing importance of NFS as a storage protocol to support server virtualization, with some analysts suggesting it has been implemented even more widely than iSCSI.
VAAI in NFS includes a totally different set of primitives, reflecting the strengths and weaknesses of the protocol. NFS was always excellent at handling thin provisioning, but administrators sometimes would rather reserve capacity. This is the intent of the “Reserve Space” primitive, which instructs a thin array to be “thick.” VAAI for NFS also includes an extended statistics API, allowing vSphere to query arrays about capacity and “thin-ness.”
Two more VAAI-NFS primitives exist, and both are focused on data protection. Full File Clone functions much like full copy on block storage arrays, allowing the hypervisor to instruct the array to clone a virtual disk file. Finally, Native Snapshot Support will enable VMware View environments to leverage array-based NFS snapshots, though support for this primitive is not yet widespread.
VAAI for NFS in vSphere 5 differs in another major way from the block support. Rather than bundling plugins, VMware leaves it to NFS device makers to distribute their own enablers. It remains to be seen what this means for time-to-market and supportability of VAAI-NFS.
Better Communication = Better Performance
The upshot of all this integration is better storage performance, and this will appear in many forms. Streamlining thin provisioning operations reduces I/O and enables better capacity utilization. Copy offloading functions similarly allow arrays to do what they do best – move and protect data. And all of the VAAI functions are transparent once enabled: They just work. This is perhaps the best feature of all!