6.03.2020

Storage and VMware vSAN design tips

These days storage, even local storage, is more complex to understand because of all the different options, everything from storage-class memory to spinning disks. So this begs the question: how do we choose what to attach to our servers? Companies like Dell with its VxRail product provide a jointly engineered solution, so no matter what your requirements are, an architecture can reliably be created. Whether your use case is VDI, common server workloads, or databases with heavy I/O, a successful solution can be built. Ready nodes, or simply picking parts off the VMware HCL, are also good options; however, the success of those solutions depends on the engineering prowess of the architect.

Storage is one of those critical pieces of infrastructure. It is the last link in the data path that lets us listen to downloaded music, view favorite family and holiday photos, and run that app we use daily. If a CPU or a memory stick dies, or even if a network cable breaks, typically no data is actually lost. However, if a drive dies, all of our memories, and at least a day of productivity, are gone.

Desktops were typically backed up to some external tape or disk; today, backups are usually sent to some type of remote or cloud resource. Servers can use larger variants of these resources, but because more risk and expense come with failed server hardware, a little extra caution and effort should go into the storage architecture, including the quality and built-in redundancy of the design.

The other consideration is performance. Because of the number of drive choices, we have many different types of drives to weigh for both performance and reliability. Our desktops and laptops typically use SSDs or NVMe drives, and servers are now typically designed with these as well. To put performance in perspective, below is a memory and drive latency table with the 'human relatable' translation. ( #geeks #> ls -lh ) Most of this information was retrieved from Frank Denneman - AMD EPYC Naples vs Rome and vSphere CPU Scheduler Updates. I like how he correlated everything from 1 CPU cycle all the way to an SSD I/O. I added a typical 15K disk drive for additional impact on the comparison.

Memory and Drive Latency

Next I would like to delve into VMware vSAN. Because many of our datacenters are now turning to hyper-converged architectures that run vSAN, I thought I'd hit on some of the salient points.

Disk groups are a key consideration when architecting for vSAN, along with how many to use per host. Another is all-flash versus hybrid. As the cost of flash-based storage keeps dropping, hybrid arrays make less and less sense to implement. vSAN also limits the feature set of hybrid compared to all-flash: hybrid configurations are not capable of erasure coding (RAID-5/6), compression, or deduplication. Hybrid designs will consume all the cache you provide, using 70% for read caching and 30% for write caching. The recommended cache-tier size is 10% of the capacity tier; a quick back-of-the-napkin calculation is sketched below. However, there is a direct relationship between cache-tier capacity and the host memory consumed: increasing the cache tier increases the memory vSAN consumes on the host.
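For illustration only, here is a quick shell calculation of that 10% guideline. The drive count and size are made-up numbers, not a recommendation.

# Hypothetical disk group: 4 x 1.92TB capacity drives
CAPACITY_DRIVES=4
DRIVE_TB=1.92
# Raw capacity of the disk group
RAW_TB=$(echo "$CAPACITY_DRIVES * $DRIVE_TB" | bc)
# ~10% of the capacity tier is the suggested cache tier size
CACHE_TB=$(echo "scale=2; $RAW_TB * 0.10" | bc)
echo "Raw capacity: ${RAW_TB}TB -> suggested cache tier: ~${CACHE_TB}TB"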

All-flash typically makes more sense considering cost, heat, performance, and reliability. All-flash is a little different with respect to features and cache. Specifically, 100% of the cache is dedicated to write buffering, but it is limited to 600GB. Larger cache drives are still supported and will improve reliability thanks to wear leveling. Keep in mind the goal is to flush the cache down to the capacity tier, where the data is protected. Read caching is not necessary: flash drives have no mechanical limits, so I/O can occur more rapidly. For performance, and to limit the amount of memory taken away from VMs, I prefer Optane (375GB) cache drives matched with either SAS or SATA SSD capacity drives. VMware recommends architecting the cache tier with faster drives than the capacity tier; for example, if you use NVMe drives in the capacity tier, Optane is recommended in the cache tier.

Another consideration: when using NVMe drives, Dell VxRail systems require dual processors. Check the vendor specifications, as using different drive technologies with vSAN may impose other host-level requirements. I also prefer at least 2 disk groups per host, especially in production, because if a cache drive fails the entire disk group fails; using 2 disk groups per host increases the availability of the architecture. A quick way to verify the layout from the host is shown below.
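To confirm how the disks are actually laid out on a host, esxcli can show which devices vSAN has claimed and which tier they belong to. A minimal sketch, assuming SSH access to an ESXi host; the output fields can vary slightly between vSAN versions.

# List every disk vSAN has claimed on this host, including its disk group UUID
esxcli vsan storage list
# Quick summary: how many devices sit in the capacity tier vs. the cache tier
esxcli vsan storage list | grep "Is Capacity Tier" | sort | uniq -c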

Ultimately, isn't that what we are after? Availability, reliability, and performance.



2.18.2020

My HomeLab

Current Lab configuration

vSphere 6.7 P1
vSAN All-flash, FTT=1, RAID-5 (3+1) erasure coding
VRA 7.6
vROPs 7.5
vRLI 4.5
NSX 6.4.6
VLC 3.9.1

Total of 6 VMware Hosts

Supermicro X9DR3-F (Ebay for $200 each)
Supermicro 2U Chassis, 8 hot-swap 3.5"
128GB RAM each ($160)
Dual Intel(R) Xeon(R) CPU E5-2650L 8C @ 1.80GHz ($140)
Dell H310 LSI 2008 HBA (flashed to IT mode and Q-depth 600) ($40)
Emulex OneConnect OCe11102 Dual port 10Gb NIC ($40)
WD Raptor 300GB - Boot Drive
Misc Cables ($40)

Supermicro X9DRI-F+ (Ebay for $160 each)
Supermicro 2U Chassis, 8 drive hot-swap 3.5"
128GB RAM each ($160)
Dual Intel(R) Xeon(R) CPU E5-2650L v2 10C @ 1.70GHz ($140)
Dell H310 LSI 2008 HBA (flashed to IT mode and Q-depth 600) ($40)
Emulex OneConnect OCe11102 Dual port 10Gb NIC ($40)
WD Raptor 300GB - Boot Drive
Misc Cables ($40)

3 hosts based off each design.

3x E5-2650L based hosts = $1,650
3x E5-2650L v2 based hosts = $1,530

vSAN Storage
Cache Tier
Intel SSDSC2BX40 400GB (5)
Samsung NVMe 960 (1)

Capacity Tier
Samsung SSD 860 EVO 1TB (16)
Intel SSDSC2BX40 400GB (3)
Crucial CT240M50 240GB (6)
Crucial CT480M50 480GB (1)
M4-CT256M4SSD2 250GB (1)
OCZ-Agility3 250GB (2)

Storage SAN / NAS

FreeNAS - 69.6TB
X8DTH-6F - ($400)
Supermicro 4U Chassis, 36 drive hot-swap 3.5"
Dual Intel Xeon L5630L 4C 2.13GHz ($50)
48GB RAM ($75)
Boot Drive
10K SAS 500GB
Disk Group 2 - RAIDZ 6 - 18.1TB
10x various 2TB 7K RPM disks
Disk Group 1 - RAIDZ 6 -  19TB
7x various 3TB 7K RPM disks
ARC Cache 40GB
Disk Group 3 - RAIDZ 6 -  32.5TB
9x various 4TB 7K RPM disks
ARC Cache 60GB

Networking
2x IBM G8124-E - 24 port 10Gb SFP+ ($850)
4x SFP+ 1Gb GBICs ($80)
Cisco SG300-28 ($528)
Cisco SG200-26P ($250)

8.08.2019

VMware vSphere learning paths


These days there is plenty of training for all things vSphere. The issue has become finding a good path, either to a specific certification or simply to becoming more proficient with day-2 administrative activities. A colleague came to me the other day with this dilemma, so I decided to put together a quick list of free and paid training resources.

VMware Hands on Labs can be a useful free tool in learning about many different VMware products in a safe isolated environment. The following are a couple useful labs for learning more about vSphere.

HOL-1910-01-SDC - Virtualization 101: Introduction to vSphere
HOL-1911-91-SDC - vSphere 6.7 Lightning Lab
HOL-1911-01-SDC - What's New in VMware vSphere 6.7
HOL-1911-02-SDC - VMware vSphere with Operations Management - Getting Started
HOL-1911-03-SDC - VMware vSphere with Operations Management - Advanced Topics
HOL-1911-04-SDC - VMware vSphere Security - Getting Started
HOL-1911-05-SDC - VMware vSphere Automation - PowerCLI

VMware Learning Zone provides some free and paid on demand classes.

All VMware vSphere classes


VMware ICM
One of the following classes is required for a VCP certification

VMware ICM Fast Track
Additional material is taught in this class compared to the ICM class. The classes also typically run from 8AM to 6PM for the week.

VMware vSphere Operations
These are not required but can help with the exam, although the focus is on day-2 administrative tasks on the vSphere platform.

Certification Learning Paths.
This will provide you the path you need to follow to obtain a certification.

Other resources include blogs and community forums. Purchasing VMUG Advantage provides lab licenses for most VMware products. Building a ‘HomeLab’ can be a good way to practice without touching the corporate environment. Also, your company may not own licenses for all products and all of their features, whereas VMUG Advantage provides full-featured licensing.

VMUG Advantage (there are codes all over the interwebs for 10% off)
https://www.vmug.com/vmug2019/membership/vmug-advantage-membership

List of blogs and other resources

Hope these help you in your VMware journey!

7.24.2019

Dynamic DNS

Dynamic DNS, and even standard DNS services that offer many configurable options, can be expensive. The free ones, or the ones that come with domain name registration, are typically limited and most do not support dynamic IPs. I have been using one for a number of years from a co-location and service provider called Hurricane Electric http://he.net/.

I learned about this provider while living in the Bay Area outside San Francisco. They would host a Linux user group and as a matter of fact still do 20 years later! EBLUG http://www.eblug.org/

One of the many great services HE provides is a free DNS service with the ability to configure dynamic DNS entries, in case you have a dynamic IP on your internet connection or need an easy way to fail over some internet-facing service with something a little less expensive than a GSLB. The free version is limited to 50 zones. C'mon!! For real??? Everyone owns more than 50 domain names... NOT! It is super cool of them to not only offer this service but to let users host 50 zones!

Getting started is easy. Once you have a domain registered, either new or existing, simply point the domain's name servers at your registrar to HE's servers. Let's use the domain vmuglabs.net. I use GoDaddy for my domains, so from DNS management browse over to Nameservers and change the GoDaddy name servers to HE's (you can verify the delegation afterwards, as shown below). They are:

ns1.he.net
ns2.he.net
ns3.he.net
ns4.he.net
ns5.he.net
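
Once the registrar change propagates, a quick dig query (sketched below against the example domain) confirms HE is now answering authoritatively.

# Verify the delegation now points at Hurricane Electric
dig NS vmuglabs.net +short
# Expect ns1.he.net. through ns5.he.net. in the answer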



Now if you don't have an account at HE go over to https://dns.he.net and register for an account.



Once logged in you can add a new zone or domain from the menu on the left



Once created you can edit the zone by selecting the edit icon just to the left of the domain name. Within the zone you will find 6 records in total: 1 SOA and 5 NS records. Next, create an A record and look at how the ddns option works.



Once created you will need a way to authenticate to dynamically change the IP for the A record. HE uses a DDNS key, not your login account. To generate one select the change symbol.



Generate a key and copy it.


Once you have the key, it's time to build the bash script that performs the ddns change. The full script can be found on GitHub; a minimal sketch of the idea is below.
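
This sketch is illustrative rather than the exact script from GitHub. The hostname, key, and IP-discovery service are placeholders; check HE's documentation for the current update URL.

#!/bin/bash
# Minimal ddns update sketch for dns.he.net (placeholders, not the GitHub script)
HOSTNAME="host.vmuglabs.net"      # the A record created above
DDNS_KEY="your-ddns-key"          # the key generated above
IPFILE="/tmp/last_ip"

# Discover the current public IP (any "what is my IP" service works)
CURRENT_IP=$(curl -s https://ifconfig.me)

# Only send an update when the address has actually changed
if [ -n "$CURRENT_IP" ] && [ "$CURRENT_IP" != "$(cat "$IPFILE" 2>/dev/null)" ]; then
    curl -s "https://dyn.dns.he.net/nic/update" \
        -d "hostname=${HOSTNAME}" \
        -d "password=${DDNS_KEY}" \
        -d "myip=${CURRENT_IP}"
    echo "$CURRENT_IP" > "$IPFILE"
fi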

Run the script on a schedule; if your IP changes, the DNS record will be updated to match. To test, manually change the IP within the HE DNS console and watch it change back the next time the script runs.

3.12.2019

Fix MSDTC for VRA Install Wizard Validation

Did you use a template to create the IaaS servers for VRA? This is a quick post on how to resolve the errors from the VRA validator step. Perhaps like you I had some trouble locating a concise KB article or post on an easy way to resolve these issues.

Reset the CID/SID of the Server

Log into the IaaS and DB servers as Administrator.

Open REGEDIT to view the current CID/SID values. They are located at:
HKEY_CLASSES_ROOT\CID\(CID)\Description\(Default)

Open a PowerShell prompt as administrator and run the following commands:

Uninstall MSDTC
msdtc -uninstall

Reboot
shutdown -r -t 0

Re-install MSDTC (login with same permissions as above)
msdtc -install

Note: the msdtc command does not return any output when these commands run.

Open the Firewall

Enable the firewall rules for WMI and DTC on both computers by using the Netsh utility:

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
netsh advfirewall firewall set rule group="Distributed Transaction Coordinator" new enable=yes


Testing

Basic checking can be done by opening the Component Services MMC. You should see something similar.
Component Services MMC for MS DTC


Run Test-Dtc to check the state of MSDTC. Below are some example tests covering local-only connectivity and local plus remote connectivity; the remote examples use the same command, and the verbose output reveals whether inbound or outbound transactions are blocked.

Test MSDTC on the local computer
Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -Verbose

Test MSDTC on the local computer and a remote computer
Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -RemoteComputerName "remote-server" -ResourceManagerPort 17100 -Verbose

Test MSDTC on a local computer that blocks inbound transactions
Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -RemoteComputerName "remote-server" -ResourceManagerPort 17100 -Verbose

Test MSDTC on a local computer that blocks outbound transactions
Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -RemoteComputerName "remote-server" -ResourceManagerPort 17100 -Verbose


This is the result when the first test partially fails. The three local-and-remote tests will also show the CIDs of the communicating systems. Referring back to the registry path above, the UIS and XA values are contained in the CID subkeys. From this output you can determine whether the CIDs are unique, which is another way to validate the registry values.

PS C:\Windows\system32> Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -Verbose
VERBOSE: ": Firewall rule for "RPC Endpoint Mapper" is enabled."
VERBOSE: ": Firewall rule for "DTC incoming connections" is enabled."
VERBOSE: ": Firewall rule for "DTC outgoing connections" is enabled."
VERBOSE: IN-SQL02: AuthenticationLevel: Mutual
VERBOSE: IN-SQL02: InboundTransactionsEnabled: False
WARNING: "IN-SQL02: Inbound transactions are not allowed and this computer cannot participate in network transactions."
VERBOSE: IN-SQL02: OutboundTransactionsEnabled: False
WARNING: "IN-SQL02: Outbound transactions are not allowed and this computer cannot participate in network transactions."
VERBOSE: IN-SQL02: RemoteClientAccessEnabled: False
VERBOSE: IN-SQL02: RemoteAdministrationAccessEnabled: False
VERBOSE: IN-SQL02: XATransactionsEnabled: False
VERBOSE: IN-SQL02: LUTransactionsEnabled: True


This is the result when things look good for the installer to proceed.

PS C:\Windows\system32> Test-Dtc -LocalComputerName "$env:COMPUTERNAME" -Verbose
VERBOSE: ": Firewall rule for "RPC Endpoint Mapper" is enabled."
VERBOSE: ": Firewall rule for "DTC incoming connections" is enabled."
VERBOSE: ": Firewall rule for "DTC outgoing connections" is enabled."
VERBOSE: IN-SQL02: AuthenticationLevel: Mutual
VERBOSE: IN-SQL02: InboundTransactionsEnabled: True
VERBOSE: IN-SQL02: OutboundTransactionsEnabled: True
VERBOSE: IN-SQL02: RemoteClientAccessEnabled: True
VERBOSE: IN-SQL02: RemoteAdministrationAccessEnabled: True
VERBOSE: IN-SQL02: XATransactionsEnabled: False
VERBOSE: IN-SQL02: LUTransactionsEnabled: True


Summary

This is only one example of how to resolve these errors. If you deployed from a template with a customization spec that selects "Generate New Security ID (SID)", your experience might be different.

8.23.2018

IBM 10G Switch - The Home Lab Gem

I came across the IBM G8124 while providing some pre-sales architecture for some of my clients. As HomeLab'ers it's difficult to afford a modern datacenter switch. Almost all 10G switches are over $1000 unless you look at the used market, and there most of the switches are old and power hungry. These are easy to find on eBay, and prices have been dropping as they age. Because most of us are using SDN (software-defined networking), they work very well in low-cost lab situations where 10G offers some really nice benefits. Paired with some Emulex OCE11102 dual-port 10G NICs, it's possible to get a full 10G network for less than $500.




The G8124 is considered a top-of-rack switch with incredibly low port-to-port latency, roughly 600 nanoseconds. It also supports Virtual Fabrics and L3 routing with OSPF. This switch offers some really nice features when fitted into a HomeLab where you want to learn VMware vSAN, NSX, vRA, and other goodies. One thing to note if this solution is of interest: to connect it to the rest of your network you will need either a 10G interface in your existing switch or a 1G SFP interface, because the 2 x 1G interfaces are strictly for out-of-band management. Many IBM systems feature 2 dedicated management interfaces that must live on a different network than any assigned SVI, and each management interface must also reside on a different network from the other. It is possible to use only a single management interface, or to manage the switch through one of the SVIs. While I wouldn't recommend that in a production environment, for a lab, have at it, knock yourself out.

The configuration language is a little different from Cisco's, but not difficult to get past if you are familiar with the concepts. Documentation and firmware are still available from IBM. Below are links for both, along with model information.

Firmware and Docs
https://www.ibm.com/support/home/search-results/5422459/IBM_RackSwitch_G8124,_8124E_-_7309,_0446,_1455_7309?docOnly=true&sortby=-dcdate_sortrange&ct=rc

Model Info
https://lenovopress.com/tips0787

Example code (shortened to remove redundancy)

!
version "7.11.9"
switch-type "IBM Networking Operating System RackSwitch G8124-E"
iscli-new
!
system timezone 145
! America/US/Eastern
system daylight
!
ssh enable
!
snmp-server location "CloudRoom"
snmp-server read-community "HNET"
!
no system bootp
no system dhcp mgta
no system dhcp mgtb
no system default-ip
hostname "10gNET"
no hostname prompt
system idle 60
!
!
no access telnet enable
!
!
interface port 1
        switchport mode trunk
        switchport trunk allowed vlan 1,3,5,10,70-71,80-85,98-102,201-209,250,252,301-339
        bpdu-guard
        flowcontrol send on
        flowcontrol receive on
        exit
!
interface port 11
        switchport access vlan 98
        bpdu-guard
        exit
!
interface port 12
        switchport mode trunk
        switchport trunk allowed vlan 1,3,5,10,70-71,80-85,98-102,201-209,250,252
        exit
!
interface port 15
        switchport access vlan 202
        flowcontrol send on
        flowcontrol receive on
        exit
!
interface port 16
        switchport access vlan 201
        flowcontrol send on
        flowcontrol receive on
        exit
!
interface port MGTA
        shutdown
        exit
!
interface port MGTB
        shutdown
        exit
!
vlan 10
        name "LAB1"
!
vlan 70
        name "VLAN 70"
!
vlan 201
        name "iSCSI-201"
!
vlan 202
        name "iSCSI-202"
!
vlan 205
        name "ESXi-vMotion"
!
vlan 206
        name "ESXi-FT"
!
vlan 250
        name "Home-NET"
!
vlan 252
        name "GuestNET"
!
portchannel 13 lacp key 100
portchannel 14 lacp key 101
!
!
!
spanning-tree mst configuration
        name "local"
        exit
!
spanning-tree mode disable
!
no spanning-tree pvst-compatibility
spanning-tree stp 1 vlan 1
spanning-tree stp 1 vlan 3
!
!
logging host 1 address 192.168.98.48 DATA
!
interface port 13
        lacp mode active
        lacp key 101
        no lacp suspend-individual
!
interface port 14
        lacp mode active
        lacp key 101
        no lacp suspend-individual
!
interface port 23
        lacp mode active
        lacp key 100
        no lacp suspend-individual
!
interface port 24
        lacp mode active
        lacp key 100
        no lacp suspend-individual
!
interface ip 1
        ip address 100.64.254.254 255.255.255.0
        enable
        exit
!
interface ip 3
        vlan 3
        exit
!
interface ip 70
        ip address 192.168.70.254
        vlan 70
        enable
        exit
!
!
ip bootp-relay server 1 address 192.168.98.21
ip bootp-relay server 2 address 192.168.98.22
ip bootp-relay information enable
ip bootp-relay enable
!
!
ip igmp snoop vlan 1
ip igmp enable
ip igmp snoop enable
!
ip igmp snoop igmpv3 enable
!
ip route 0.0.0.0 0.0.0.0 100.64.254.1
ip route 192.168.251.0 255.255.255.0 192.168.250.245
ip route 192.168.8.0 255.255.248.0 192.168.250.250
!
router ospf
        enable
!
        area 0 enable
!
interface ip 1
        ip ospf enable
!
ntp enable
ntp primary-server 192.168.98.21 DATA
ntp secondary-server 192.168.98.22 DATA
!

end

6.19.2015

vSphere (and others) LAB storage

Some of you may know I have been building and using a vSphere lab for a number of years now, as most VMware professionals do. Recently the SAN platform I've been using for a couple of years, Nexenta, removed/disabled VAAI support from their software because of some issues, so I decided to try the other popular option, FreeNAS, since it has been rapidly maturing.

For the most part my 3 Nexenta SANs have been running fine, except that when an HDD died the SAN would lock up and require some coaxing, and perhaps a power cycle, to come back alive. With the recent changes to the platform, removing VAAI, I decided it was time to give FreeNAS another try.

For those of you involved in some way with VMware vSphere, you know that VAAI was a very important advancement in storage function and management. It provides primitives that offload work to the storage controller, which only sends progress updates back to the hosts, cutting down on latency and storage fabric utilization. Nexenta used to provide 3 of the commonly used primitives and 1 of the uncommonly used ones. https://v-reality.info/2011/08/nexentastor-3-1-adds-second-generation-vaai/
They removed VAAI in the recent 4.0.3FP2 patches due to "kernel panic issues". What they failed to realize is that this is a SIGNIFICANT change to a storage infrastructure. It's easy to introduce VAAI into a traditional non-VAAI design, but once a storage architecture is designed around VAAI it's nearly impossible to go back. FreeNAS 9.3 supports 5 primitives, so you get a bonus one. http://www.ixsystems.com/whats-new/freenas-93-features-support-for-vmware-vaai/
One primitive in particular, ATS, allows us to make LUNs much larger, since locking happens at the VMDK file level rather than against the entire LUN. Previously, putting more than 10 or 15 VMs on a LUN was risky because a host would lock the whole LUN for a single file operation and the rest of the VMs would be impacted. FreeNAS also includes Warn & Stun, which gives the host more intelligence about thin-provisioned VMs, reducing crashes.
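
To see where you stand before and after a change like this, you can ask a host which VAAI primitives it detects on a device. A quick sketch, run from an ESXi shell; the naa identifier below is a placeholder.

# Show VAAI primitive support (ATS, Clone, Zero, Delete) for every device
esxcli storage core device vaai status get
# Or narrow it to a single device
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx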

FreeNAS has also been making many other improvements to the platform. One major change was moving the iSCSI target software from user space to kernel space. After some 'seat of the pants' tests compared to earlier releases, this seemed to provide a nice 30% improvement in performance.

Installing FreeNAS 9.3 is as simple as it's always been: a couple of presses of <ENTER> and it's installing. One nice feature is the ability to install to USB, which Nexenta cannot do; just make sure you create swap on a disk once it's installed. Being BSD based, compared to openSolaris, you have a much wider array of hardware choices, so going from Nexenta to FreeNAS you should have no issues. The community forums and docs provide good direction on hardware and firmware versions; for example, with the standard LSI HBAs you know to use the P16 firmware. The other cool feature is that FreeNAS does not limit you to 18TB of raw storage.


I've now been running FreeNAS as the main lab storage SAN for a couple of days and I'm rather impressed with its performance and stability. Nexenta, I couldn't always say that...