6.03.2020

Storage and VMware vSAN design tips

These days storage, even local storage, is more complex to understand with all the different options, ranging from Storage Class Memory to spinning disks. So this begs the question, "how do we choose what to attach to our servers?" Companies like Dell, with its VxRail product, provide a jointly engineered solution, so an architecture can be reliably created no matter what your requirements are. Whether your use case is VDI, common server workloads, or databases with heavy I/O, a successful solution can be built. Ready nodes, or simply picking parts off the VMware HCL, are also viable approaches; however, the success of those solutions rests on the engineering prowess of the architect.

Storage is one of those critical pieces of infrastructure. It is the last link in the data path that lets us listen to downloaded music, view favorite family and holiday photos, and run that app we depend on daily. If a CPU or a memory stick dies, or even if a network cable breaks, typically no data is actually lost. However, if a drive dies, all of our memories and at least that day's productivity are gone.

Desktops were typically backed up to some external tape or disk. Today, backups are usually sent to some type of remote or cloud resource. Servers can use larger variants of these resources; however, because more risk and expense come with failed server hardware, a little extra caution and effort should be placed on the storage architecture, including the quality and built-in redundancy of the design.

The other consideration is performance. Because of the number of drive choices, we have many different types of drives to consider when evaluating performance and reliability. Our desktops and laptops typically use SSDs or NVMe drives, and servers are now typically designed with these as well. Below is a memory and drive performance table that displays latency alongside a 'human relatable' translation. ( #geeks #> ls -lh ) Most of this information was retrieved from Frank Denneman - AMD EPYC Naples vs Rome and vSphere CPU Scheduler Updates. I like how he correlated everything from 1 CPU cycle all the way to an SSD I/O. I added a typical 15K disk drive for additional impact on the comparison.

Memory and Drive Latency
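
To make that comparison concrete, here is a minimal Python sketch of the 'human relatable' translation: scale one CPU cycle up to one second and see how long each slower tier would feel. The latency figures below are ballpark numbers I am assuming for illustration, not the exact values from Frank Denneman's table.

    # Rough sketch of the "human relatable" translation: scale everything so that
    # one CPU cycle (~0.4 ns on a ~2.5 GHz core) becomes one second, then see how
    # long the slower tiers feel. Latencies are assumed ballpark figures.

    CPU_CYCLE_NS = 0.4  # assumed ~2.5 GHz core

    latencies_ns = {
        "1 CPU cycle":        0.4,
        "L1 cache access":    1,
        "Main memory access": 100,
        "NVMe SSD I/O":       25_000,      # ~25 us
        "SATA/SAS SSD I/O":   100_000,     # ~100 us
        "15K RPM disk I/O":   5_000_000,   # ~5 ms seek + rotation
    }

    def human_relatable(ns: float) -> str:
        """Scale a latency so that one CPU cycle equals one second."""
        seconds = ns / CPU_CYCLE_NS
        if seconds < 60:
            return f"{seconds:,.0f} seconds"
        if seconds < 3600:
            return f"{seconds / 60:,.0f} minutes"
        if seconds < 86400:
            return f"{seconds / 3600:,.0f} hours"
        return f"{seconds / 86400:,.0f} days"

    for device, ns in latencies_ns.items():
        print(f"{device:<20} {ns:>12,.1f} ns  ->  {human_relatable(ns)}")

With those assumed numbers, a memory access feels like a few minutes, an SSD I/O like a few days, and a 15K disk I/O like several months, which is exactly the impact the table is meant to convey.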

Next I would like to delve into VMware vSAN. Because many of our datacenters are now turning to hyper-converged architectures that run vSAN, I thought I'd hit on some of the salient points.

Disk groups, and how many to use per host, should be a key consideration when architecting for vSAN. Another is all-flash versus hybrid. As flash-based storage becomes less expensive, hybrid arrays do not make as much sense to implement. vSAN limits the feature set of hybrid compared to all-flash: hybrid arrays are not capable of erasure coding (RAID 5/6), compression, or deduplication. Hybrid designs will consume all the cache you provide and use 70% for reads and 30% for write caching. The recommended cache tier size is 10% of the capacity tier. However, a relationship exists between the cache tier capacity and the host memory consumed: increasing the cache tier will increase the amount of host memory consumed.
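
As a quick illustration of those hybrid sizing rules, here is a minimal Python sketch that applies the 10% cache-to-capacity guideline and the fixed 70/30 read/write split; the raw capacity figure is just an assumed example, not a recommendation.

    # Minimal sketch of hybrid vSAN cache sizing, assuming an example host with
    # 20 TB of raw capacity tier. Applies the 10% cache-sizing guideline and the
    # fixed 70% read / 30% write split that hybrid disk groups use.

    raw_capacity_tb = 20.0                   # assumed example capacity tier per host

    cache_tier_tb = raw_capacity_tb * 0.10   # ~10% of capacity tier
    read_cache_tb = cache_tier_tb * 0.70     # hybrid: 70% read cache
    write_buffer_tb = cache_tier_tb * 0.30   # hybrid: 30% write buffer

    print(f"Capacity tier : {raw_capacity_tb:.1f} TB")
    print(f"Cache tier    : {cache_tier_tb:.2f} TB total")
    print(f"  Read cache  : {read_cache_tb:.2f} TB (70%)")
    print(f"  Write buffer: {write_buffer_tb:.2f} TB (30%)")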

All-flash typically makes more sense considering cost, heat, performance, and reliability. All-flash is a little different in the case of features and cache. Specifically, 100% of the cache is dedicated to write caching; however, it is limited to 600GB per disk group. Larger cache drives are still supported and will enhance endurance due to wear leveling. Keep in mind the goal is to flush cache to the capacity tier and thus protect the data. Read caching is not necessary: flash drives do not have mechanical limits, so I/O can occur much more rapidly. For performance, and to limit the amount of memory consumed away from VMs, I prefer Optane (375GB) drives matched with either SAS or SATA SSD capacity drives. VMware recommends architecting cache tiers with faster drives than the capacity tier. For example, when leveraging all-NVMe drives in the capacity tier, Optanes are recommended for the cache tier.
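
The write buffer cap changes the math for all-flash. Here is a hedged sketch, using assumed example cache-device sizes, of how much of a cache drive vSAN can actually use as write buffer per disk group, with the rest serving as endurance headroom.

    # Sketch of the all-flash write buffer per disk group: vSAN uses the cache
    # device entirely for writes but only up to 600 GB per disk group; anything
    # above that mainly adds endurance headroom. Drive sizes are assumed examples.

    WRITE_BUFFER_CAP_GB = 600

    def usable_write_buffer(cache_device_gb: float) -> float:
        """Return the portion of the cache device vSAN uses as write buffer."""
        return min(cache_device_gb, WRITE_BUFFER_CAP_GB)

    for device_gb in (375, 800, 1600):       # e.g. Optane 375GB, larger NAND SSDs
        used = usable_write_buffer(device_gb)
        spare = device_gb - used
        print(f"{device_gb:>5} GB cache drive -> {used:.0f} GB write buffer, "
              f"{spare:.0f} GB endurance headroom")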

Another consideration is that Dell VxRail systems require dual processors when using NVMe drives. Check the vendor specifications for NVMe and other drive technologies with vSAN, as different drive technologies may carry other host requirements. I also prefer to use at least two disk groups per host, especially in production, because if a cache drive fails the entire disk group fails. Using two disk groups per host increases the availability of the architecture.
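
To see why I lean toward two disk groups, here is a minimal sketch, with assumed drive counts and sizes, of how much host capacity a single cache-device failure takes offline with one disk group versus two.

    # Sketch of the blast radius of a cache-device failure. A cache drive failure
    # takes down its whole disk group, so splitting the same capacity drives into
    # two disk groups halves the capacity that goes offline. Drive counts and
    # sizes are assumed example values.

    capacity_drives_per_host = 8
    capacity_drive_tb = 1.92                 # e.g. 1.92 TB SAS/SATA SSDs
    host_raw_tb = capacity_drives_per_host * capacity_drive_tb

    for disk_groups in (1, 2):
        tb_per_group = host_raw_tb / disk_groups
        pct = 100 * tb_per_group / host_raw_tb
        print(f"{disk_groups} disk group(s): one cache failure takes "
              f"{tb_per_group:.2f} TB ({pct:.0f}% of the host) offline")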

Ultimately, isn't that what we are after? Availability, reliability, and performance.


