Homelab – Bill of Materials

I will split the Bill of Materials between what is used for my house and what is used for my lab. The two are related, but I tried to keep them as separate as possible. I also reused as much of the hardware I already own, especially where its resale value is really low.

My home was already wired with CAT6a from a central patch cabinet to every room. There are also cable internet connections in every room that can be patched through at the central cabinet, though only to one room at a time.

Home Hardware

On the home side I standardized on Ubiquiti hardware.

  • Router/Firewall – Ubiquiti UniFi Dream Router (EA)
  • Core Switch – Ubiquiti UniFi Switch 8P PoE 60W
  • Lounge – Ubiquiti UniFi Switch Mini 5
  • Office – Ubiquiti UniFi Switch 8P
  • WiFi Extension – Ubiquiti UniFi AP NanoHD
  • Raspberry Pi Cluster – Ubiquiti UniFi Switch Mini 5
  • Primary DNS – Raspberry Pi 3B
  • Secondary DNS – Raspberry Pi 4 2GB
  • Storage – Synology DS915+
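With two Pis acting as primary and secondary DNS, it is handy to verify that both resolvers are actually answering. Below is a minimal liveness-check sketch in Python that sends a raw DNS A-record query to each server; the IP addresses are placeholders for illustration, not the addresses from this build.

```python
import socket
import struct

def dns_query(server_ip, hostname, timeout=1.0):
    """Send a minimal DNS A-record query to a specific server.
    Returns True if the server answers at all (a liveness check)."""
    # Header: transaction ID, flags (RD set), 1 question, 0 other records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server_ip, 53))
        data, _ = sock.recvfrom(512)
        # A valid reply echoes our transaction ID and has the QR bit set
        return data[:2] == b"\x12\x34" and (data[2] & 0x80) != 0
    except OSError:  # timeout, no route, ICMP refused, etc.
        return False
    finally:
        sock.close()

# Placeholder addresses for the two DNS Pis
for name, ip in [("primary (Pi 3B)", "192.168.1.2"),
                 ("secondary (Pi 4)", "192.168.1.3")]:
    state = "up" if dns_query(ip, "example.com") else "DOWN"
    print(f"{name} at {ip}: {state}")
```

Run from cron or a monitoring host, this catches the case where one Pi silently dies and every client fails over to the survivor without anyone noticing.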

Lab Hardware

For my lab the hardware is a bit more mixed, and my intention is to keep it mixed with the future 10GbE upgrade. I intentionally kept my out-of-band networking off of Ubiquiti hardware, due to their frequent update cycles and, in my case, lower stability compared to other brands. To buy my homelab all new today would cost around $8,000.

  • Router/Firewall – Ubiquiti UniFi Dream Machine
  • Primary Switch – Ubiquiti UniFi Switch 8P
  • Secondary Switch – Ubiquiti UniFi Switch 8P
  • Out of Band Switch – Cisco SG300-10
  • Storage – Synology DS415+ with
    • 4 x Western Digital Red 4TB HDD
  • Management Cluster:
    • 1 x SuperMicro E300-8D with
      • 2 x 32GB Corsair Vengeance LPX
      • 1 x Western Digital SN550 1TB NVMe SSD
  • Workload Cluster:
    • 3 x SuperMicro E301-9D-8CN4 each with
      • 4 x 32GB Corsair Vengeance LPX
      • 1 x Western Digital Red SN700 500GB NVMe SSD
      • 2 x Samsung 870 QVO 2TB SATA III SSD


I currently do not have any power redundancy or backup power. In general the power here is extremely stable, except when I short it out with the toaster or switch off the wrong smart plug. Part of the redesign was due to the constant need to rebuild my vCenter and appliances after storage corruption.

The SuperMicro E301-9D is not on the VMware HCL. I chose it anyway for the higher core count of its EPYC CPUs over the E300-8D's Intel CPUs. The components that usually cause issues, like network and storage, are server class rather than consumer grade, so my hope is for a longer support life on them.

Another thing to keep in mind is that the SuperMicro servers do come with expansion options, but the mounting brackets are not included in the initial purchase.

In the next part I will go through my physical and logical designs.


Homelab – Requirements

After multiple bad homelab builds, this time I sat down to look at what I really need, what fits my budget, and, most importantly, what my family will accept as a Minimum Viable Product.

As my homelab will form part of my home network, and there will inevitably be some crossover between the two, I need to ensure that anything used by my family is both secure and reliable, while my homelab can be a bit more in flux (aka broken).

My primary use case for my lab is to learn new technologies and to test solutions before implementing them at work. A lot of my design considerations revolved around creating a lab that resembles an enterprise network as closely as I could without a six-figure budget. The technologies I will focus on for my design include:

  • ESXi as a hypervisor, using vSAN and iSCSI as storage solutions.
  • Distributed Switches to start with, to be enhanced using NSX-T.
  • Microsoft Active Directory for DNS/users/certificates, to be extended to Azure AD later on.
  • Ubuntu Linux for all services where possible.
  • Docker for applications instead of dedicated VMs.
  • Kubernetes for applications, based on Rancher or Tanzu clusters, instead of Docker.
  • Monitoring through vRealize Operations and Grafana/Prometheus.
  • Logging into vRealize Log Insight and Splunk.
  • Configuration management through SaltStack and Ansible, with Terraform for any other use cases.
  • User frontend through vRealize Automation.
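On the Grafana/Prometheus side, exposing a custom metric only needs a plain HTTP endpoint speaking the Prometheus text exposition format. As a rough illustration (the metric name and port 9100, the conventional node-exporter port, are my own placeholders, not part of this build), a tiny stdlib-only exporter in Python could look like:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves /metrics in the Prometheus text exposition format."""

    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        load1, load5, load15 = os.getloadavg()  # host load averages
        body = (
            "# HELP homelab_load1 1-minute load average\n"
            "# TYPE homelab_load1 gauge\n"
            f"homelab_load1 {load1}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port=9100):
    # Blocks forever; run on the node you want to monitor
    HTTPServer(("0.0.0.0", port), MetricsHandler).serve_forever()

# serve()  # Prometheus then scrapes http://<host>:9100/metrics
```

Prometheus would pick this up with a single static target in its `scrape_configs`, and Grafana can then graph `homelab_load1` directly.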

As for budget, I set myself a growth budget of $2,000, plus whatever I could get from selling my old lab hardware and clearing out any old gear I had lying around.

My list of requirements was:

  • Separation between Services used for Home/Family and Lab.
  • Ability to Expose services to the Internet.
  • Network Segregation.
  • Redundancy for Storage and Networking in Lab.
  • Redundancy for Home Services where Possible.
  • Mix of Redundant iSCSI/NFS as a possible solution for Kubernetes Storage.
  • vSAN for vSphere Environment.
  • Microsoft AD Environment.
  • Internal and External Certificate Support.
  • Self-hosted where possible.


And as with all designs, there must always be some constraints as well.

  • Low power consumption and low noise.
  • Limited backup availability.
  • As close to the VMware HCL as I can get on hardware.
  • Changes to my lab should not impact the family.
  • New, under-warranty hardware where possible.

I also decided that some compromises would need to be made. Unfortunately I do not have an unlimited budget, and the hardware should have a usable lifespan of about two years. I tend to sell my old hardware while it is still somewhat relevant. I have found that this gives me the lowest cost of ownership and the ability to keep a fairly modern lab.

Part of any lab build is the cost of software. Most of the software out there can be used on a trial basis, but that requires a constant rebuild of solutions. For my software requirements I went for the below.

  • Microsoft – I got a Microsoft Visual Studio Dev Essentials subscription. This works out to about $45 per month. My reasoning is that I get all of this back in Azure credits, which I can use to expand my Azure cloud experience. This then covers all my Microsoft licensing, from SQL Server to Windows Server to desktops.
  • VMware – VMUG Advantage is the solution here. With nearly every on-prem product covered at $200 per year, this was a no-brainer for me.
  • Linux – I decided to go for Ubuntu as far as possible.

With work from home now the norm in our house, internet/WiFi stability is critical, not to mention the user anger I face when YouTube even thinks about buffering.

In the next part I will go over the hardware choices and pricing for my homelab.


Homelab – My Homelab History

My homelab history started about seven years ago. My first homelab had few design considerations: maybe a week or so of reading blogs and watching videos before I decided to go for Intel NUC systems. The initial build consisted of 3 Core i3 NUCs with 32GB RAM each and no internal storage. Storage was provided by a Synology 4-bay NAS that doubled as a media and file server for our home. Networking was provided by a single Cisco 10-port GbE switch.

The damage was not too bad, and I learned a lot about iSCSI, customizing ESXi images for use with the NUCs, and how slow 4 NAS drives are in RAID 5. As a bad fix I decided to go for vSAN as a new storage solution, so I bought some NVMe SSDs and 500GB WD Black HDDs and stood up my vSAN. This taught me even more about how networking can mess your lab up badly.

Enter revision 3. This time I added some USB NICs and a second switch. That is how I learned the troubles of inter-switch links and bottlenecks, and the single point of failure in using my L3 Cisco switch as a router. I essentially created a bigger problem than I originally had with iSCSI.

Revision 4 saw me move away from NUC-based systems to a whitebox solution. The idea was to get away from the network issues and run 3-node nested clusters on NVMe. I bought 2 hosts, both Intel based, with 128GB and 64GB RAM respectively, and moved my lab services to nested vSAN clusters. This worked OK, but the additional CPU overhead of 3 ESXi hosts on a single physical host made all work in my lab slow. I was also dipping my toes into vRealize Automation and NSX, which meant I needed more power!

Mistake 5 saw me throw out the Intel systems for Ryzen 7 and Threadripper hosts. Now I had more CPU cores than I could ever use, 256GB RAM, and plenty of nice fast NVMe storage. Yet it was crap again. The stability of Ryzen under ESXi was horrible. It was so bad that I used one of my unsold old NUCs as a "management cluster" to host my domain controller and vCenter on local disks. My NAS made a comeback due to storage corruption issues on the nested vSAN nodes, caused by nearly daily random reboots.

Solution 6. For the latest iteration of my homelab I went through a more extensive design process, and I would like to take you through it in the next few posts. I should have done this years ago.
