My take on Windows Storage Spaces Direct 2019

So how exactly did my take on Windows Server 2019 with Storage Spaces Direct go?
Rough. Kind of a love-hate relationship.

Let’s be clear: this is a nice system for large, professional deployments. It can do very good work when paired with a Hyper-V cluster, or maybe an SMB share, but that’s about it.

So I managed to get a Storage Spaces Direct cluster running on my thin clients, paired with USB devices. First of all, of course, you need Active Directory. It’s Microsoft technology, no way around it I guess. So I ran Hyper-V on the thin clients to separate my domain controllers from my S2D nodes.
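For reference, the cluster itself can be stood up with the FailoverClusters PowerShell module. A minimal sketch, assuming three domain-joined nodes named node1–node3 (hypothetical names), run from one of the nodes:

```powershell
# Install the clustering role on each node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate first; expect warnings on unsupported gear like USB disks
Test-Cluster -Node node1, node2, node3 -Include "Storage Spaces Direct", Inventory, Network

# Create the cluster without shared storage, then turn on S2D
New-Cluster -Name S2DCluster -Node node1, node2, node3 -NoStorage
Enable-ClusterS2D
```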
Now, to manage S2D, you’d better be using Windows Admin Center. And don’t go quick and dirty by installing Windows Admin Center on your Hyper-V host (it’s a lab after all) — it won’t work! It returns some shady error about WinRM not being configured. It started working after I installed it in a fresh VM (shame on me for trying to take shortcuts and running multiple services on one server…).
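If you run into similar WinRM errors, a quick sanity check from the management machine can save some of that lost time. A sketch, assuming a node named node1 (hypothetical):

```powershell
# Verify the WinRM service on the remote node actually answers
Test-WSMan -ComputerName node1

# If it does not, enable PowerShell remoting on the node itself
Enable-PSRemoting -Force
```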
So after I got the abomination called Windows Admin Center running (which is actually quite a nice application, but I’m still a bit pissed about the time lost to the connection error), I could finally manage the S2D cluster.

Bummer one: I have 3 S2D nodes and had a whole plan worked out using mirroring to protect the data; turns out I could only use three-way mirroring. Down goes the usable storage capacity.
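Volume creation is where the resiliency choice shows up. A minimal sketch (the volume name and size are hypothetical):

```powershell
# On a three-node S2D cluster this yields a three-way mirror
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 100GB -ResiliencySettingName Mirror
```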

Bummer two: S2D keeps a portion of storage “reserved” for repairs when disks break. This is nice to have, but it was not in my planned calculations. I could not disable it, so once again, down goes the usable storage capacity…
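You can at least see where the capacity goes by inspecting the pool. A sketch:

```powershell
# Compare the raw pool size against what has actually been allocated
Get-StoragePool -FriendlyName "S2D*" |
    Select-Object FriendlyName, Size, AllocatedSize
```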

Then, I did not plan to use it for backing Hyper-V; I was actually thinking about exporting iSCSI volumes from it (Windows has a built-in iSCSI target service after all). Turns out you can’t combine the two… (bummer three). I quickly figured I could deploy a VM in a highly available Hyper-V cluster, on my S2D storage, and share the iSCSI disks from there. It would probably work, but it would take this way too far… (I honestly don’t know how the thin clients would cope with nested virtualization.)
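Inside that intermediate VM, the iSCSI part would look roughly like this. A sketch with hypothetical target names, initiator IQN, and paths:

```powershell
# Install the built-in iSCSI target service in the guest
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Create a target and back it with a VHDX stored on the S2D volume
New-IscsiServerTarget -TargetName "LabTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:client1"
New-IscsiVirtualDisk -Path "C:\iSCSI\lun0.vhdx" -SizeBytes 50GB
Add-IscsiVirtualDiskTargetMapping -TargetName "LabTarget" -Path "C:\iSCSI\lun0.vhdx"
```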

With that on top of the fact that I could only use about 33% of the raw capacity (I also couldn’t select parity), I kind of ditched the idea. So now my Windows lab is idle again, waiting for a new experiment…

This is starting to look a bit like a rant… so how did Storage Spaces Direct feel?

Well, to be honest, at first I was impressed it even worked with the old devices I tossed at it! Performance was bad, but in line with what you could expect from old USB drives clustered together by a couple of thin clients. Kudos for that! I know the version of ScaleIO I tested a couple of years ago wouldn’t have managed it, that’s for sure.

As said at the start of this post, Storage Spaces Direct is a good, sturdy solution to host your Hyper-V VMs on, or maybe an SMB share throughout your domain. But do keep it in the Microsoft ecosystem. Prepare to get creative if you want to do other things with it. But it most certainly is a good tool to add to your toolbox.
Downside? The amount of raw storage it seems to need. But in return you get a serious highly available system.
(Just don’t deploy it at home on shabby hardware.)

As always, YMMV!

Windows 2019 Storage Spaces Direct on non certified hardware

So, quick update…

When you try to deploy Storage Spaces Direct on your Windows 2019 VMs or non-certified servers, you are presented with a nice red error in PowerShell:

Enable-ClusterS2D : Microsoft recommends deploying SDDC on WSSD [https://www.microsoft.com/en-us/cloud-platform/software-defined-datacenter] certified hardware offerings for production environments. The WSSD offerings will be pre-validated on Windows Server 2019 in the coming months. In the meantime, we are making the SDDC bits available early to Windows Server 2019 Insiders to allow for testing and evaluation in preparation for WSSD certified hardware becoming available. Customers interested in upgrading existing WSSD environments to Windows Server 2019 should contact Microsoft for recommendations on how to proceed. Please call Microsoft support [https://support.microsoft.com/en-us/help/4051701/global-customer-service-phone-numbers].
    + CategoryInfo          : InvalidOperation: (MSCluster_StorageSpacesDirect:root/MSCLUSTER/...ageSpacesDirect) [Enable-ClusterStorageSpacesDirect], CimException
    + FullyQualifiedErrorId : HRESULT 0x80070032,Enable-ClusterStorageSpacesDirect

Enable-ClusterS2D : Failed to run CIM method EnableStorageSpacesDirect on the root/MSCLUSTER/MSCluster_StorageSpacesDirect CIM object. The CIM method returned the following error code: 50
    + CategoryInfo          : InvalidResult: (MSCluster_StorageSpacesDirect:String) [Enable-ClusterStorageSpacesDirect], CimJobException
    + FullyQualifiedErrorId : CimJob_EnableStorageSpacesDirect_50,Enable-ClusterStorageSpacesDirect

Thank you, Microsoft; I just want to make use of a feature that is part of the software I’m using…

So here is the registry key to get past this message and enable S2D on your non-certified server (only needed on the node where you are performing the installation):

New-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters" -Name S2D -Value 1 -PropertyType DWORD -Force
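With the registry override in place, enabling S2D should proceed. A sketch of the follow-up (the -Confirm switch just suppresses the interactive prompt):

```powershell
# Retry enabling S2D now that the override key exists
Enable-ClusterS2D -Confirm:$false

# Verify the result
Get-ClusterS2D
```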


Have Fun!
As always, no warranties provided, use with caution. (This is a lab, not production.)

VMware workstation + Hyper-V

Just a small announcement:

VMware Workstation 15.5 is now compatible with Hyper-V on Windows 10 build 19041.264 (and probably up)!

Which means, to get it working, you’ll have to update your Windows to the latest patch level, which may not have been rolled out in your region yet. If so, the update assistant can be found here:

https://www.microsoft.com/en-us/software-download/windows10

The VMware Workstation Pro release can be downloaded from VMware and accepts your current Workstation Pro license. It can be found here:

https://www.vmware.com/products/workstation-pro.html

Which means you can run Hyper-V, VMware Workstation and WSL 2 on your Windows 10 desktop! (But you don’t want to see the pile of network adapters it creates; I’m currently at 18 items in the network connections screen.)

And concerning WSL: you can convert your WSL 1 distro to WSL 2 and take all your work with you. The only thing I’ve discovered to be broken was my X-forwarding setup, but that’s to be expected.

https://docs.microsoft.com/en-us/windows/wsl/install-win10#set-your-distribution-version-to-wsl-1-or-wsl-2
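The conversion itself is a one-liner. A sketch, assuming your distro is named Ubuntu (substitute whatever `wsl --list` shows for yours):

```powershell
# List installed distros and the WSL version each one runs
wsl --list --verbose

# Convert the distro (here assumed to be named Ubuntu) to WSL 2
wsl --set-version Ubuntu 2
```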

Windows 2019 VM activation

So what have I been up to lately?

I’ve been taking a look at Windows Server, once again. Normally I do all my experimenting on Linux. But now I got the ability to use some Windows Server 2019 Datacenter licenses, so why not give it a try and deploy a small Windows cluster?
Of course, the hardware of choice is beefed-up HP T620 thin clients.

After deploying Hyper-V, I discovered Windows has this interesting thing called Automatic VM Activation. I always wondered how retail keys behaved when you enter them in your VMs; turns out you don’t have to enter them at all. I never gave it a thought while deploying volume license keys over VMware infrastructures, but it turns out that on Hyper-V your host can license the guest!

More about this can be found on the following page:

https://docs.microsoft.com/en-us/windows-server/get-started-19/vm-activation-19

But it basically boils down to this command prompt command, run inside the guest (with the host running Windows Server 2019 Datacenter):

slmgr /ipk YOUR-WINDOWS-VERSION-KEY-HERE

Activation status can be checked with:

slmgr /dlv
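If you prefer PowerShell over slmgr’s dialog boxes, the same status can be pulled from WMI. A sketch:

```powershell
# LicenseStatus 1 means the product is licensed
Get-CimInstance -ClassName SoftwareLicensingProduct |
    Where-Object PartialProductKey |
    Select-Object Name, LicenseStatus
```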

FYI, more info on slmgr can be found here:

https://docs.microsoft.com/en-us/windows-server/get-started/activation-slmgr-vbs-options

HP Thin Client T620

I’ve been using Raspberry Pis for various experiments at home. A few years ago I bought an Intel NUC, and I’ve been running Windows 10 Pro with Hyper-V on it.
To be honest, I wasn’t very happy with the performance of my Raspberry Pis anymore, and I was quite happy with what I could do with the NUC (all while my wife uses it to watch TV). But I also found the NUC a bit expensive for fire-and-forget experiments.

I then stumbled upon the HP T620 thin client. It bundled everything I wanted at a cheap (second-hand on eBay, of course) price. It has an AMD x86_64 processor onboard, 4 GB of RAM and 16 GB of storage.

At that price level I didn’t expect too much from it performance-wise, but I was really impressed. I’m currently running an OPNsense router on it, complete with VPN tunnels and intrusion detection, and it does so without any problems. The only downside for this scenario is the single Ethernet port. (I planned to replace the wireless interface, which is not supported by OPNsense, with an Ethernet interface, but haven’t gotten to it.) However, this can easily be solved by using multiple VLANs on that one port.

So if you are looking for cheap second-hand systems for your homelab that don’t consume too much power, look this one up on eBay!

PS: My Raspberry Pis, which have been relieved of duty, have already been claimed again to function as VPN endpoints/routers. Maybe more on that later…