As part of this year’s server upgrades, we put together a new vSAN cluster at work. The machines are Lenovo SR650 servers with dual 14-core Xeon Gold 6132 CPUs and 768 GB of RAM. Each server is equipped with two disk groups, each consisting of one 800 GB write-intensive SSD as cache and three 3.84 TB SSDs as capacity drives. The servers are connected to the network using two of their four 10GbE interfaces, and to our existing storage solution using dual FC interfaces. The version of VMware vSphere we’re currently running is 6.5 U2.
As part of setting up the solution, we ran benchmarks using VMware’s HCIBench appliance, available as a VMware Fling. HCIBench was configured to clear the read/write cache before testing, but to reuse worker VMs where possible. We used the “Easy Run” setting, which lets the benchmarking program create a workload tailored to the individual vSAN environment. The transaction test ran with 20 VMs of 8 data disks each, and the IOPS numbers represent a 100% random, 70% read load at a 4 KB block size.
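As a back-of-the-envelope illustration (mine, not HCIBench output), the Python sketch below translates that workload description into raw numbers. The IOPS figure used is a placeholder, not one of our results:

```python
# Back-of-the-envelope numbers for the HCIBench "Easy Run" workload:
# 20 VMs x 8 data disks, 100% random, 70% read, 4 KiB blocks.
# The IOPS figure below is a placeholder, not an actual result.
VMS = 20
DISKS_PER_VM = 8
BLOCK_SIZE = 4 * 1024   # bytes
READ_RATIO = 0.70

total_disks = VMS * DISKS_PER_VM

def implied_throughput_mb_s(iops: int) -> float:
    """Throughput (MB/s) implied by an IOPS figure at the given block size."""
    return iops * BLOCK_SIZE / 1e6

example_iops = 100_000  # placeholder value
print(f"worker disks: {total_disks}")
print(f"read IOPS:  {example_iops * READ_RATIO:,.0f}")
print(f"write IOPS: {example_iops * (1 - READ_RATIO):,.0f}")
print(f"implied throughput: {implied_throughput_mb_s(example_iops):.0f} MB/s")
```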
The first run was pretty much the out-of-the-box configuration: the network between the hosts had not been tweaked at all, and the workers used the stock vSAN storage policy, meaning basic data mirroring without striping.
For the second run, we moved vSAN traffic to its own dedicated NIC and enabled jumbo frames between the hosts.
In the third run, we examined what striping virtual disks across capacity drives does to performance by creating a storage policy with a stripe width of 2 and assigning it to all worker VMs.
Finally, in the fourth run, we enabled deduplication and compression on the vSAN cluster and re-ran the same benchmark to see how performance and latency were affected.
(For clarity: We did perform several more benchmark tests to confirm that the values really were representative.)
Throughput
The raw throughput numbers tell us whether we’re getting data through a connection as fast as possible. As seen in runs 2 and 3 in the graph below, we’re pretty much bouncing against the physical limits of our 12 Gb/s SAS controllers and the 10GbE inter-host network. This value isn’t particularly relevant in real life, except that unexpectedly low numbers tell us we have a problem – see the result from run number 1 for a perfect example of that.
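For reference, here is a quick sketch of the theoretical ceilings involved. The SAS figures assume single-lane and four-lane configurations and ignore protocol overhead, so treat them as rough upper bounds rather than our controller’s exact layout:

```python
# Rough theoretical ceilings for the links involved in the vSAN data path.
# A 12 Gb/s SAS controller typically drives several lanes, and both links
# lose some capacity to protocol overhead, so these are upper bounds only.
def gbps_to_mb_s(gbps: float) -> float:
    """Convert line rate in gigabits per second to megabytes per second."""
    return gbps * 1000 / 8

links = {
    "10GbE inter-host network": 10.0,
    "12 Gb/s SAS, single lane": 12.0,
    "12 Gb/s SAS, 4 lanes": 48.0,
}

for name, gbps in links.items():
    print(f"{name}: ~{gbps_to_mb_s(gbps):,.0f} MB/s line rate")
```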

Transaction performance
Transaction performance in benchmark form is another one of those numbers that gives you an idea of whether something is seriously wrong, but otherwise remains a rather hypothetical exercise. Once again, the two middle runs approach what the hardware is capable of.

Latency
Finally, a number that has a serious bearing on how our storage will feel: how long does it take from issuing a request to the storage system until the system confirms that the task is done? The blue line represents an average for the test period – but remember that this is under extreme load that the vSAN is unlikely to see in actual use. The 95th-percentile bar tells us that 95% of storage operations complete in less time than this.
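To make the two metrics concrete, this is roughly how the average and the 95th percentile are computed from a series of per-operation latencies. The sample values below are invented, not our measurements:

```python
import numpy as np

# Invented per-operation latencies in milliseconds. HCIBench reports these
# statistics for you; this just shows what the two numbers mean.
latencies_ms = np.array([1.1, 0.9, 1.3, 1.0, 4.8, 1.2, 0.8, 1.1, 6.5, 1.0])

avg = latencies_ms.mean()              # the "blue line" style average
p95 = np.percentile(latencies_ms, 95)  # 95% of operations finish faster

print(f"average latency: {avg:.2f} ms")
print(f"95th percentile: {p95:.2f} ms")
```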

Thoughts on the results
The first run really sticks out, as it should: it’s a demonstration of what not to do in production. Storage really should have its own dedicated network. Interestingly, though, in my admittedly limited experience, moving to jumbo frames (MTU 9000) didn’t by itself make a huge difference in performance, but it should put a bit less strain on the hardware that assembles and processes network packets.
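A quick back-of-the-envelope calculation shows why: moving the same amount of data at MTU 9000 takes roughly a sixth as many frames, while the bandwidth saved on headers is only a few percent. The header sizes below assume plain TCP/IP over Ethernet:

```python
import math

# How many Ethernet frames does it take to move 1 GiB of payload,
# and how much of the wire traffic is header overhead?
# Per-frame overhead: 14 B Ethernet header + 4 B FCS + 20 B IP + 20 B TCP
# (preamble and inter-frame gap ignored for simplicity).
HEADERS = 14 + 4 + 20 + 20
PAYLOAD = 1024**3  # 1 GiB

for mtu in (1500, 9000):
    payload_per_frame = mtu - 40  # MTU minus IP + TCP headers
    frames = math.ceil(PAYLOAD / payload_per_frame)
    overhead = frames * HEADERS
    print(f"MTU {mtu}: ~{frames:,} frames, "
          f"~{overhead / PAYLOAD:.2%} header overhead")
```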
Curiously enough, I saw no meaningful difference between plain mirroring and striping + mirroring of virtual machine disks once the cluster had settled. The numbers are very close, percentage-wise. This echoes VMware’s own words:
In most real world use cases, we do not see significant performance increases from changing the striping policy. It is available and you should weigh the added complexity against the need before changing it from the default.
https://blogs.vmware.com/virtualblocks/2016/09/19/vsan-stripes/
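For intuition, here is a deliberately simplified sketch of how stripe width multiplies the number of components a mirrored vSAN object is split across. Witness components and vSAN’s actual placement logic are ignored:

```python
# Simplified view of a vSAN RAID-1 object layout: each of the (FTT + 1)
# mirror replicas is split into `stripe_width` stripes, with each stripe
# ideally landing on a different capacity drive. Witnesses are ignored.
def data_components(ftt: int, stripe_width: int) -> int:
    """Number of data components for a RAID-1 object (witnesses excluded)."""
    return (ftt + 1) * stripe_width

for sw in (1, 2):
    print(f"FTT=1, stripe width {sw}: {data_components(1, sw)} data components")
```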
Finally we come to the run I haven’t really commented on yet: how much does performance suffer from the deduplication and compression option available in VMware vSAN? The simplified answer: about 20%, measured both in throughput and in transactional performance, which doesn’t sound bad at all. But the latency numbers tell a slightly different tale: average latency jumps by a quarter, and 95th-percentile latency by more than half. I can see how the benefits of space saving could make up for the drop in performance in some use cases, but I would be wary of putting a heavily used production database on top of a storage layer that displays these sorts of intermittent latency peaks.
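To put those percentages side by side, the sketch below applies them to a purely hypothetical baseline. The baseline numbers are made up; only the relative changes come from our runs:

```python
# Illustrative only: apply the measured relative changes from the
# dedup/compression run to a hypothetical baseline. The baseline values
# are invented; only the percentage deltas reflect our results.
baseline = {"throughput_mb_s": 1000.0, "iops": 100_000.0,
            "avg_lat_ms": 2.0, "p95_lat_ms": 4.0}

deltas = {
    "throughput_mb_s": 0.80,  # ~20% drop
    "iops": 0.80,             # ~20% drop
    "avg_lat_ms": 1.25,       # average latency up by a quarter
    "p95_lat_ms": 1.50,       # tail latency up by more than half
}

for key, factor in deltas.items():
    print(f"{key}: {baseline[key]:,.1f} -> {baseline[key] * factor:,.1f}")
```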
In summary, vSAN on affordable hardware is slightly slower than a dedicated storage system like our IBM FlashSystem V9000, but that says more about the wicked speed of the latter than anything negative about the former. For most real-world workloads in our environment the difference should be negligible, and it is well offset by the benefits of a fully software-defined storage layer working hand in hand with the virtualization platform.