Homelab – Services and Software

In the final installment of this series I would like to go through some of the software used in my homelab and how it is used.

In the following few posts I will go through the setup and building of all the solutions mentioned here.

I started with ESXi 7U2 deployed on all 4 of my hosts. The host used as my management cluster will host the below.

  • Windows 10 VM: Used as a jump box to manage my lab from. All the software and tools are installed here. Everything can be done from my laptop as well, but I prefer this route to keep my lab portable and standard even when my own tools change.
  • Windows Server 2022: This will be my primary Active Directory server. DNS will also run on this DC.
  • Windows Server 2022: This is used as my Microsoft Certificate Authority.
  • vCenter Server Appliance: vCenter to manage my lab from.

All 3 servers are hosted on the internal SSD and set to auto-start on host startup.

I then set up my Synology NAS. I used 4 x 4TB WD Red drives configured in RAID 10. My NAS hosts the below services.

  • NTP for my lab
  • iSCSI storage for VMs
  • NFS used for Kubernetes and also ISO images. I also have an NFS share which hosts the VMware Content Library.
  • Active Backup for Business, used for backups of all critical VMs.
  • A web server used to distribute my proxy .wpad file.
  • An internet proxy server
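
The .wpad file mentioned above is just a proxy auto-config (PAC) script: JavaScript that the NAS web server serves as a static file. As a minimal sketch, here is a Python function that renders one; the proxy address and the lab domain below are illustrative placeholders, not the real values.

```python
# Sketch: generate a minimal wpad.dat / PAC file.
# The PAC itself is JavaScript; the NAS web server only has to serve it
# as a static file (conventionally at http://wpad/wpad.dat).
# Proxy address and lab domain below are illustrative placeholders.

def build_pac(proxy_host: str, proxy_port: int, lab_domain: str) -> str:
    """Return a PAC script: lab-internal names go direct, the rest via the proxy."""
    return f"""function FindProxyForURL(url, host) {{
    // Lab-internal hosts and bare hostnames bypass the proxy
    if (dnsDomainIs(host, ".{lab_domain}") || isPlainHostName(host))
        return "DIRECT";
    return "PROXY {proxy_host}:{proxy_port}; DIRECT";
}}
"""

if __name__ == "__main__":
    print(build_pac("10.70.1.5", 3128, "lab.example.com"))
```

Browsers can discover the file either via DHCP option 252 or by resolving the hostname wpad and fetching /wpad.dat, which is why a plain web server is enough to distribute it.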

Next up was the 3-node vSAN cluster, also using ESXi 7U2. These hosts were added to my vCenter using DNS names.

I moved all networking to two separate vSphere Distributed Switches: one used for normal traffic and the second for vSAN.

Now it was time to get some redundancy up for my domain controller by adding a second Windows Server 2022 machine.

At this point we have all the core components in place to start building our business. I will also, at this point, build myself a few templates for Ubuntu and Server 2022 just to make my life easier later on; in a later post I will show how this can be automated.

To make the lab more functional I need some additional tools, the first of which is vRealize Suite Lifecycle Manager to assist with the deployment of all the other tools. From there I went ahead and deployed:

  • vRealize Operations Manager – monitoring and troubleshooting
  • vRealize Log Insight – central log management
  • vRealize Automation – automating everything
  • vRealize SaltStack Config – configuration management for servers

Once vRealize Automation was set up and useful, I deployed 6 Ubuntu VMs to be used as Docker hosts for the below:

  • Services host:
    • Portainer for a Docker management GUI
    • Uptime Kuma for internet and services monitoring
  • NGINX Proxy Manager for reverse proxy and SSL certificates
  • Harbor container registry

The other 3 Docker VMs are used to play around with different applications.

And finally it was time to get some Kubernetes clusters up. I used Rancher's K3s to build a 3-node cluster for "testing" and also spun up a Tanzu Community Edition cluster, also for "testing".

That is my homelab. It should allow for any testing I need to do for work or home use. I use various tools at different times as I learn new things or hear about a project I find interesting. If this were a business, it would be ready for departments to deploy their applications. There are still security tools missing, and a few tools used in enterprises which do not make sense for me to run in my homelab due to their massive resource requirements or dependencies.


Homelab – Network Design

For my homelab network conceptual design I split my home into two zones, each behind its own firewall. On the home side I segregated devices into 3 categories.

  • Dodgy – Devices I do not trust at all: any IoT device which I do not have control over. They are only allowed to talk to the internet and to use my PiHole for DNS.
  • Dodgy but have to live with it – Devices that need either multicast traffic to other networks or an open VLAN to function. These include services like Plex or even Philips Hue. This network is controlled, with minimum access allowed to my secure network.
  • Secure – This is where my laptops, phones and NAS reside.

Lab Network

For my lab network I would try to replicate a corporate network as closely as possible. I would segregate and group traffic based on function or broadcast domains. I would also create two internal DMZ VLANs that hairpin through the firewall: one used for external services and one for internal services (testing before moving them external).

I allocated a range of subnets to my homelab. I would then sub-assign these in groups of ten Class C (/24) subnets to each larger group, e.g. 10.70.10.0/24 to 10.70.19.0/24 allocated to the Management VLANs. Within each of the App/DB/Web groups I would then allocate /26 networks to each of Dev, Test and Prod. I would also reserve some smaller scopes for dedicated VLANs for things like TKG or vRealize.
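
This carving can be sketched with Python's ipaddress module. The 10.70.0.0/16 supernet, the group ordering and the choice to leave the first ten /24s reserved are assumptions for illustration:

```python
# Sketch of the subnet carving: ten /24s per functional group, with an
# App /24 further split into /26s for Dev/Test/Prod.
# The 10.70.0.0/16 supernet and the group ordering are illustrative assumptions.
import ipaddress

supernet = ipaddress.ip_network("10.70.0.0/16")
all_c = list(supernet.subnets(new_prefix=24))          # 256 "Class C" subnets

# Hand out ten /24s per larger group; start=1 leaves the first ten reserved,
# so Management lands on 10.70.10.0/24 .. 10.70.19.0/24.
groups = ["Management", "App", "DB", "Web"]
allocation = {name: all_c[i * 10:(i + 1) * 10]
              for i, name in enumerate(groups, start=1)}

# Split the first App /24 into four /26s: Dev, Test, Prod and one spare
dev, test, prod, spare = allocation["App"][0].subnets(new_prefix=26)

print("Management:", allocation["Management"][0], "-", allocation["Management"][-1])
print("App Dev:", dev, "Test:", test, "Prod:", prod)
```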

In a proper network there would be firewall rules in place to separate all of the above, but I am a bit too lazy to do that yet. IP addressing is handled by the UniFi Dream Machine, and the scopes are defined to distribute the IPs of my Microsoft DNS servers as well as NTP, which runs on my Synology NAS.


As far as the two DMZs go, they are properly firewalled off from the rest of the environment, with outbound rules only created on a per-IP and per-port basis as needed. There is a port-forward rule on the Dream Machine which forwards port 443 traffic to an instance of NGINX running as a reverse proxy in the DMZ, meaning all traffic into my lab always originates from that point. I am also running some other load balancer/WAF/reverse proxy solutions in the DMZ for testing.

The internal DMZ has the same firewall rules, but the inbound traffic comes from my home/lab and uses internal DNS entries and certificates. This allows me to test firewall rules and solutions internally before exposing them to the internet.

In my lab I do not use any DNS filtering like PiHole or AdGuard, but I do still block outbound DNS to ensure all services use my internal DNS. All servers do have internet access, just for ease of use. I do have a proxy server running on my Synology NAS, but it is currently not used anymore. Some DHCP scopes are still set up to supply the necessary proxy details, but I have moved away from the proxy solution.


iSCSI and NFS storage currently run over the management network, as I only have two very underutilized VMs using iSCSI. NFS is mostly used to store ISO images and as shared storage for my Kubernetes clusters. My vSAN network runs on a single non-routed subnet local to my Cisco SG300. There are two uplinks per host but only a single switch, which luckily has never restarted on its own. This is to be changed when doing my 10Gb SFP+ upgrade.

Backups also run over the management network to my Synology NAS.

I have a VLAN created for remote-access VPN with the intention of expanding my homelab into either Azure or AWS, but currently the cost of running the VPN devices in either cloud is a bit too high.

Up next will be a summary of what will be running in my lab.


Homelab – Physical Design

The design of my new home network was guided by the requirement for internet stability and segregation between homelab and home network as far as possible.

Home Networking Design

I decided to have my internet come into the firewall for the homelab first. This ensures that when I do decide to expose services to the internet, the attack surface for my home is reduced thanks to the second firewall between my lab and home network. Part of the remaining risk would be man-in-the-middle attacks originating in my homelab. To mitigate this I run all my home network traffic through a VPN service, with the connection established on my internal firewall. An added benefit is that all lab services are on the internet side of my internal firewall, so I do not need to VPN into my lab.

The second firewall is a Dream Router sitting downstream from the Dream Machine in my lab.

My home network has WiFi from the Dream Router covering half of the house, with the other half covered by a UniFi Nano HD. Every room has a network point cabled from the central switch, which resides in the same closet as the Dream Router; from there I use the small UniFi mini switches to my devices. I prefer cabled over WiFi where possible.


My NAS is used as a backup location for our PCs as well as a media server using Plex. I also use my NAS as an intermediary between the cloud storage we use in Dropbox/Google Drive/OneDrive and long-term backup in AWS Glacier. We have our most important data in each respective cloud as well as on my NAS locally. A subset of really important data (family photos etc.) is backed up to AWS Glacier.


DNS for my home is provided by 2 Raspberry Pis running PiHole. Both run in Docker, with forwarding set up for my lab domain to enable me to resolve hostnames in my lab from my home network. The DNS servers are supplied by DHCP from the UniFi Dream Machine, and all traffic on port 53 is blocked in my network except to my own DNS servers.
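
Because PiHole is built on dnsmasq, the lab-domain forwarding comes down to one dnsmasq server= directive per upstream lab DNS server. A short sketch that renders such entries; the domain and DC addresses here are illustrative assumptions:

```python
# Render dnsmasq-style conditional-forwarding entries for PiHole.
# Queries for the lab domain go to the lab's Microsoft DNS servers;
# everything else follows PiHole's normal upstreams.
# Domain and server IPs below are illustrative assumptions.

LAB_DOMAIN = "lab.example.com"
LAB_DNS_SERVERS = ["10.70.10.10", "10.70.10.11"]  # e.g. the two Windows DCs

def forwarding_entries(domain: str, servers: list) -> list:
    """One dnsmasq 'server=/<domain>/<ip>' line per lab DNS server."""
    return [f"server=/{domain}/{ip}" for ip in servers]

for line in forwarding_entries(LAB_DOMAIN, LAB_DNS_SERVERS):
    print(line)
```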

I run a small 4-node K3s cluster on a set of Raspberry Pis on my internal network. I'm not sure what to do with them yet, as I tend to break them every time I touch them.

HomeLab Networking Design

From my Dream Machine in the lab I use one physical port for my downstream Dream Router and my home. Two ports from the Dream Machine are used as uplinks to my two 8-port UniFi switches, and the last port is the uplink to my Cisco SG300, used for out-of-band networking.

There is an ISL between the two UniFi switches, with an RSTP path cost lower than the uplinks to the Dream Machine, so inter-switch traffic prefers the direct link. This was done because the switch in the Dream Machine does not seem to support jumbo frames.

Each host, and my NAS, is patched to each UniFi switch. Additionally, the three workload nodes have two NICs each patched to the Cisco out-of-band switch, which is used for vSAN traffic.

All cabling is done using flat Cat6 cables and, at this time, is still a real mess.

Future Plans for my Homelab Networking Design

I have a 10GbE upgrade planned as budget opens up and time is allocated. I will be replacing the Cisco SG300 with a set of 4-port MikroTik 10Gb SFP+ switches, which will have their own ISL and uplinks to the two UniFi switches. My vSAN and vMotion traffic will run over the 10Gb network, and I will keep my VMs on the 1Gb.


Ensuring that my home network has no dependencies on my homelab upped the WAF (Wife Acceptance Factor) a lot. I currently only need to schedule a maintenance window if I need to update the firmware or software on the UniFi kit. The Dream Machine does have a small network drop when modifying or creating VLANs, but on a wired connection this is only a single ping; for WiFi it causes a disconnect. Because the downstream Dream Router is wired, there is nearly no impact. I bravely tested this during one of my wife's video calls, and the fact that I am writing this now is proof that it works.

Next up, we will look at how it all pieces together.


Homelab – Bill of Material

I will split the bill of materials between what is used for my house and what is used for my lab. The two are related, but I tried to keep them separate as far as possible. I also reused as much of the hardware I already own as possible, particularly where the resale value is really low.

My home was already wired with Cat6a to every room from the central patch cabinet, and there are also cable internet connections in every room which can be patched from the central patch cabinet, but only to one room at a time.

Home Hardware

On the home side I standardized on Ubiquiti hardware.

  • Router/Firewall – Ubiquiti UniFi Dream Router (EA)
  • Core Switch – Ubiquiti UniFi Switch 8P PoE 60W
  • Lounge – Ubiquiti UniFi Switch Mini 5
  • Office – Ubiquiti UniFi Switch 8P
  • WiFi Extension – Ubiquiti UniFi AP Nano HD
  • Raspberry Pi Cluster – Ubiquiti UniFi Switch Mini 5
  • Primary DNS – Raspberry Pi 3B
  • Secondary DNS – Raspberry Pi 4 2GB
  • Storage – Synology DS915+

Lab Hardware

For my lab the hardware is a bit more mixed, and my intention is to keep it mixed up a bit with the future 10GbE upgrade. I intentionally kept my out-of-band networking off Ubiquiti hardware due to their frequent update cycles and, in my case, lower stability compared to other brands. To buy my homelab all new today would cost around $8000.

  • Router/Firewall – Ubiquiti UniFi Dream Machine
  • Primary Switch – Ubiquiti UniFi Switch 8P
  • Secondary Switch – Ubiquiti UniFi Switch 8P
  • Out of Band Switch – Cisco SG300-10
  • Storage – Synology DS415+ with
    • 4 x Western Digital Red 4TB HDD
  • Management Cluster:
    • 1 x SuperMicro E300-8D with
      • 2 x 32GB Corsair Vengeance LPX
      • 1 x Western Digital SN550 1TB NVMe SSD
  • Workload Cluster:
    • 3 x SuperMicro E301-9D-8CN4 each with
      • 4 x 32GB Corsair Vengeance LPX
      • 1 x Western Digital Red SN700 500GB NVMe SSD
      • 2 x Samsung 870 QVO 2TB SATA III SSD


I currently do not have any power redundancy or backup power. In general the power here is extremely stable, except when I short it out with the toaster or switch off the wrong smart plug. Part of the redesign was due to the constant need to rebuild my vCenter or appliances after storage corruption.

The SuperMicro E301-9D is not on the VMware HCL. I chose it due to the higher core count available on its EPYC CPUs over the E300-8D's Intel CPUs. The components that usually cause issues, like network and storage, are server class rather than consumer, so my hope is for a longer support life on them.

Another thing to think about is that the SuperMicro servers do come with expansion options, but the mounting brackets are not included in the initial purchase.

In the next part I will go through my physical and logical designs.


Homelab – Requirements

After multiple bad homelab builds, I sat down this time to look at what I really need, what fits my budget and, most importantly, what my family will accept as a minimum viable product.

As my homelab will form part of my home network, and there will inevitably be some crossover between the two, I need to ensure that anything used by my family is both secure and reliable, while my homelab can be a bit more in flux (aka broken). My primary use case for my lab is to learn new technologies and to test solutions before implementing them at work. A lot of my design considerations revolved around creating a lab that resembles an enterprise network as closely as I could without a six-figure budget. The technologies I will focus on for my design include:

  • ESXi as a hypervisor, using vSAN and iSCSI as storage solutions.
  • Distributed switches to start with, to be enhanced with NSX-T.
  • Microsoft Active Directory for DNS/users/certificates, to be extended to Azure AD later on.
  • Ubuntu Linux for all services where possible.
  • Docker for applications instead of dedicated VMs.
  • Kubernetes for applications, based on Rancher or Tanzu clusters, instead of Docker.
  • Monitoring through vRealize Operations and Grafana/Prometheus.
  • Logging into vRealize Log Insight and Splunk.
  • Configuration management through SaltStack and Ansible, with Terraform for any other use cases.
  • User frontend through vRealize Automation.

As for budget, I set myself a growth budget of $2000 plus whatever I could get for my old lab hardware and from clearing out any old gear I had lying around.

My list of requirements was:

  • Separation between Services used for Home/Family and Lab.
  • Ability to Expose services to the Internet.
  • Network Segregation.
  • Redundancy for Storage and Networking in Lab.
  • Redundancy for Home Services where Possible.
  • Mix of Redundant iSCSI/NFS as a possible solution for Kubernetes Storage.
  • vSAN for vSphere Environment.
  • Microsoft AD Environment.
  • Internal and External Certificate Support.
  • Self-hosted where possible.


And as with all designs, there must always be some constraints as well.

  • Low power consumption and low noise.
  • Limited backup availability.
  • As close to the VMware HCL as I can get on hardware.
  • Changes to my lab should not impact the family.
  • New, under-warranty hardware where possible.

I also decided that some compromises would need to be made. Unfortunately I do not have an unlimited budget, and the hardware should have a usable lifespan of about 2 years; I tend to sell my old hardware while it is still a bit relevant. I have found that this gives me the lowest cost of ownership and also the ability to have a fairly modern lab.

Part of any lab build is the cost of software. Most of the software out there can be used on a trial basis, but that requires a constant rebuild of solutions. For my software requirements I went for the below.

  • Microsoft – I got a Microsoft Visual Studio Dev Essentials subscription. This works out to about $45 per month. My reasoning is that I get all of this back in Azure credits, which I can use to expand my Azure cloud experience. This then covers all my Microsoft licensing, from SQL to Server to desktops.
  • VMware – VMUG Advantage is the solution here. With nearly every on-prem product covered at $200 per year, this was a no-brainer for me.
  • Linux – I decided to go for Ubuntu as far as possible.

With work from home now the norm in our house, internet/WiFi stability is critical, not to mention the user anger I have to face when YouTube even thinks about buffering.

In the next part I will go over the hardware choices and pricing for my homelab.


Homelab – My Homelab History

My homelab history started about 7 years ago. My first homelab had few design considerations: maybe a week or so of reading blogs and watching some videos before I decided to go for Intel NUC systems. The initial build consisted of 3 Core i3 NUCs with 32GB RAM each and no internal storage. Storage was provided by a Synology 4-bay NAS that doubled as a media and file server for our home. Networking was provided by a single Cisco 10-port GbE switch.

The tab was not too bad, and I learned a lot about iSCSI, customizing ESXi images for use with the NUCs, and how slow 4 NAS drives are in RAID 5. As a fix I decided to go for vSAN as a new storage solution, so I bought some NVMe SSDs and 500GB WD Black HDDs and stood up my vSAN. This taught me even more about how networking can mess your lab up badly.

Enter revision 3, this time adding some USB NICs and a second switch. And that is how I learned about the troubles of inter-switch links and bottlenecks, and the single point of failure in using my L3 Cisco switch as a router. I essentially just created a bigger problem than I originally had with iSCSI.

Revision 4 saw me move away from NUC-based systems to a whitebox solution. The idea was to get away from the network issues and run 3-node nested clusters on NVMe. I bought 2 hosts, both Intel based, with 128GB and 64GB RAM respectively, and moved my lab services to nested vSAN clusters. This worked OK, but the additional CPU overhead of 3 ESXi hosts on a single physical machine made everything in my lab slow. I was also dipping my toes into vRealize Automation and NSX, which meant I needed more power!

Mistake 5 saw me throw out the Intel systems for Ryzen 7 and Threadripper hosts. Now I had more CPU cores than I could ever use, 256GB RAM and plenty of nice, fast NVMe storage. Yet it was crap again: the stability of Ryzen under ESXi was horrible. It was so bad that I used one of my unsold old NUCs as a "management cluster" to host my domain controller and vCenter on local disks. My NAS made a comeback due to storage corruption issues on the nested vSAN nodes, caused by nearly daily random reboots.

Solution 6: for the latest iteration of my homelab I went through a more extensive design process. I would like to take you through this process in the next few posts. I should have done this years ago.
