Musings on VDI – DB Booster

My preference for software-defined (storage) solutions is no big secret any more, yet it keeps getting a new spin. Recently we observed some strange behavior on our storage leaking over into the corporate ERP systems, caused primarily by our VDI environment and the write patterns showing up there.

Starting to think through the solution space, the usual suspects of course didn't hesitate to offer six-digit-priced all-flash storage systems to address the issue. Not exactly the kind of budget I want to request for an unforeseen development in systems usage (on top of all the foreseen changes and projects).

But has anybody reflected on the latest developments in CPUs, PCIe buses and SSD/NVMe? I started to wander further down that aisle, mulling over a couple of things:

First of all, think of PCI Express bus development: PCIe 4.0, specified in 2017, already provides 31+ GB/s of data bandwidth across 16 lanes. PCIe 5.0, released in 2019, doubled that to 63+ GB/s, and PCIe 6.0, targeted for 2021 and doubling again to 120+ GB/s, is already being standardized. Compared to that, 12 Gbit/s SAS (and 6 Gbit/s SATA) – even with SAS doubling to 24G within the next two years – looks like the next bottleneck already materializing; the sketch below puts rough numbers on it.
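A minimal back-of-the-envelope sketch of those figures. The 128b/130b line coding is the published encoding for PCIe 3.0–5.0; the PCIe 6.0 efficiency is a rough assumption, since that generation moves to PAM4 signaling with FLIT encoding:

```python
# Per-direction PCIe x16 bandwidth by generation, next to a SAS lane.
GENS = {  # name: (GT/s per lane, encoding efficiency)
    "PCIe 3.0": (8.0, 128 / 130),
    "PCIe 4.0": (16.0, 128 / 130),
    "PCIe 5.0": (32.0, 128 / 130),
    "PCIe 6.0": (64.0, 0.95),  # PAM4 + FLIT: efficiency is an assumption
}

for name, (gt_s, eff) in GENS.items():
    gb_lane = gt_s * eff / 8  # GB/s per lane, one direction
    print(f"{name}: {gb_lane:5.2f} GB/s per lane, x16 = {gb_lane * 16:6.1f} GB/s")

# A 12 Gbit/s SAS lane with 8b/10b encoding, for contrast (~1.2 GB/s);
# 24G SAS roughly doubles this.
print(f"12G SAS : {12 * (8 / 10) / 8:5.2f} GB/s per lane")
```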

NVMe right now is still rather pricey, but only because we customers are willing to pay a premium – there is no material or manufacturing reason for the price difference. So assuming these prices will drop rather soon, why should I bother with HBAs at all?

All of a sudden the latest developments in CPUs make a lot more sense. Comparing, e.g., a state-of-the-art Xeon Gold 6212U, whose 48 PCIe 3.0 lanes provide about 47.25 GB/s of data bandwidth, to an AMD EPYC 7502P with 128 PCIe 4.0 lanes closing in on 252 GB/s: the latter is already significantly faster than the roughly 205 GB/s memory bandwidth stated in the datasheet.
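The worked numbers behind that comparison, using the same lane rates as above; the memory figure assumes the EPYC's eight channels of DDR4-3200:

```python
# Aggregate PCIe bandwidth each CPU can expose vs. EPYC memory bandwidth.
pcie3_lane = 8 * 128 / 130 / 8    # ~0.985 GB/s per PCIe 3.0 lane
pcie4_lane = 16 * 128 / 130 / 8   # ~1.969 GB/s per PCIe 4.0 lane

xeon_6212u = 48 * pcie3_lane      # 48 lanes  -> ~47.3 GB/s
epyc_7502p = 128 * pcie4_lane     # 128 lanes -> ~252.1 GB/s
ddr4_8ch = 8 * 3.2 * 8            # 8 ch x 3200 MT/s x 8 bytes = 204.8 GB/s

print(f"Xeon Gold 6212U : {xeon_6212u:6.1f} GB/s PCIe")
print(f"EPYC 7502P      : {epyc_7502p:6.1f} GB/s PCIe")
print(f"EPYC DDR4-3200  : {ddr4_8ch:6.1f} GB/s memory")
```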

Considering the speed improvements of PCIe, presumably showing up in industry-standard servers within the next three years, translating onto a storage bus such as SAS or SATA just to access SSDs seems more and more ridiculous. NVMe, particularly in the M.2 form factor, supports PCIe 4.0 right away, and carrier boards with four module slots are widely available.

Checking the operating-system side, NVMe has been widely supported for quite a while: Linux since 2011, illumos since 2014, VMware since version 6.0, and Microsoft backported support as far back as Windows Server 2008 R2. So given our cleaned-up, release-managed environment I see no traps there either – a quick per-host sanity check like the sketch below is all that remains.
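A minimal sanity-check sketch, Linux only, to confirm NVMe namespaces are visible before committing a host to the build. Note the glob also matches partitions such as nvme0n1p1, which is fine for a first look:

```python
# List the running kernel and any NVMe namespaces the host exposes.
import glob
import platform

print("Kernel:", platform.release())
devices = sorted(glob.glob("/dev/nvme*n*"))  # namespaces (and partitions)
if devices:
    for dev in devices:
        print("Found:", dev)
else:
    print("No NVMe namespaces found")
```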

With these considerations in mind, I started to configure some servers with 384 GB of memory and both SAS and NVMe SSDs for about 10 TB of mirrored internal storage. No HA requirements, just a plain speed monster. And guess what: with 12 × 1.6 TB flash devices the configurations came in below 25 KEUR in the first pass, including two dual-port 10G SFP+ Ethernet cards for 40 Gbit/s of uplink. Nexenta is licensed per TB and not exactly a cost factor for such a small installation; illumos and Linux are supported off the shelf.
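The rough capacity and cost arithmetic behind that configuration, assuming plain mirrored pairs; drive count, size and budget are the figures from above:

```python
# Usable capacity and cost per TB for 12 x 1.6 TB drives at 25 KEUR.
drives, drive_tb, budget_keur = 12, 1.6, 25

raw_tb = drives * drive_tb   # 19.2 TB raw
usable_tb = raw_tb / 2       # mirroring halves capacity -> 9.6 TB
eur_per_tb = budget_keur * 1000 / usable_tb

print(f"raw: {raw_tb:.1f} TB, mirrored usable: {usable_tb:.1f} TB")
print(f"~{eur_per_tb:.0f} EUR per usable TB at {budget_keur} KEUR")
```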

The biggest surprise: meanwhile, thanks to the reduced interfacing requirements, the NVMe offers are not really more expensive than the SAS-SSD-based systems.

We have not yet decided on the project and the fine-tuned final quotes are not in yet, but assuming no nasty surprises show up and we get through the appropriate HCLs without any drawbacks, the current budgetary indications say we get a 12+ TB VDI booster for less than [Edit 27.01.2020] 17 KEUR [/Edit].

That said, since we don't necessarily need this for VDI alone, we started to evaluate relocating database transaction drives onto this system. For non-critical systems (reminder: we skipped high availability) this promises to increase database performance significantly as well. And if this works out, it opens an entirely new way to think about hyper-converged solutions altogether.

During the calculations I stumbled over NVMe over Fabrics. Given the state of 16 and 32 Gbit Fibre Channel networks, the encoding losses on the wire, the bus speeds, and the limitations – i.e., device convergence in the target systems – I really see no advantage in routing NVMe back over FC, except for instance in HA cluster solutions. The quick comparison below illustrates the gap.
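A deliberately coarse illustration of why the FC hop erases the NVMe advantage, using the commonly quoted effective throughputs of 16GFC and 32GFC against a single PCIe 4.0 x4 NVMe device:

```python
# Effective FC link throughput vs. one local Gen4 x4 NVMe drive.
fc_gbs = {"16GFC": 1.6, "32GFC": 3.2}      # effective GB/s per direction
nvme_gen4_x4 = 4 * (16 * 128 / 130 / 8)    # ~7.9 GB/s for one x4 device

for name, gbs in fc_gbs.items():
    ratio = nvme_gen4_x4 / gbs
    print(f"{name}: {gbs:.1f} GB/s -> {ratio:.1f}x below one Gen4 x4 NVMe drive")
```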

As soon as we bring this brainchild to life, I will keep you posted. F.

