VCF Home Lab Part 1: Hardware & Planning

I’ve been thinking for quite a while about how to go about building a VMware Cloud Foundation (VCF) home lab. My current home lab workstation isn’t up to the task, as it doesn’t meet the minimum specs to run VCF:

 

  • CPU: 16 physical cores minimum, 24+ cores strongly recommended for usability
  • 2–3 ESXi hosts (physical or nested)
  • Adequate CPU/RAM (e.g., 128 GB+ per node recommended, depending on nested sizing)
  • Shared storage (NFS) if you have a limited number of hosts
  • Network connectivity (10 GbE recommended, though this requirement can be bypassed)

 

I looked at the cheapest option, which was to buy a second-hand HPE Z6 G4 workstation with as much hardware as I could afford and then run a nested environment as I have before (using VMware’s Holodeck). The other option was to go down the Minisforum MS-A2 route, as per William Lam’s blog. I decided on the MS-A2s: even though they were more expensive, they take up less room (keeps the wife happy 😉) and push me knowledge-wise. I had also seen them at VMware Explore London and talked with Eric Nielsen in depth to understand how they were set up. As there was quite a lot of knowledge out there on these devices, I decided to bite the bullet and go down this route.

 

The full hardware list purchased was:

| Quantity | Item | Function |
|---|---|---|
| 2 | Minisforum MS-A2 (7945HX) Barebones | VCF Host |
| 2 | Crucial 96GB Kit (2x48GB) DDR5 SODIMM | ESXi Memory |
| 2 | Lexar NM610 Pro 500GB SSD | ESXi Install + ESX-OSData + VMFS Volume |
| 2 | Crucial P310 1TB SSD | NVMe Tiering |
| 2 | Crucial P310 2TB SSD | vSAN ESA |

 


 

The most expensive item on the list was the RAM, which has recently increased in price massively (and seems to keep on growing). The price of 128 GB of RAM for each host was just too much, so I opted for 96 GB per host. As we now have NVMe tiering, which uses an NVMe disk as a substitute for RAM (hence the 1TB NVMe disks), this will hopefully alleviate the shortfall. So this means we have the following for our home lab:

 

  • 64 vCPUs total, 32 vCPUs per host (a minimum of 24 vCPUs are needed for VCF Automation alone!)
  • 96 GB of physical RAM per host (1 TB with NVMe Tiering)
  • 500 GB NVMe: ESXi installation; 1 TB NVMe: NVMe Tiering; 2 TB NVMe: vSAN ESA
  • 2 x 10 GbE and 1 x 2.5 GbE per host (each host has 2 x 2.5 GbE, but one is a Realtek network controller so won’t work with ESX)

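For reference, NVMe Memory Tiering is enabled per host from the ESXi command line. The commands below are a sketch of that process: the disk path is a placeholder (not my actual 1TB P310’s device ID), and the 400% tier size is just an illustrative value, not a recommendation.

```shell
# Enable the Memory Tiering kernel setting (takes effect after a reboot)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Mark the 1TB NVMe as a tier device -- the path below is a placeholder;
# list the real device IDs with: esxcli storage core device list
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____PLACEHOLDER_DEVICE_ID

# Advertise the NVMe tier as a percentage of physical DRAM
# (400% of 96 GB would present roughly 480 GB of total memory)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
```

The tier device is consumed whole, which is why the 1TB disk is dedicated to tiering and kept separate from the ESXi boot disk and the vSAN disk.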
 

My idea is to lay this out as shown in the following diagram:

 

[Diagram: planned home lab network layout]

 

I hadn’t planned my previous home lab build, so this time I’m looking to plan and document it first. To start, I planned the VLANs for my networks:

| VLAN Number | Traffic Type |
|---|---|
| 100 | Management\Hosts |
| 110 | vMotion |
| 120 | vSAN |
| 130 | ESX\NSX Edge TEP |
| 140 | Tier 0 Uplink (Optional) |

 
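Before the VCF Installer takes over networking, the hosts only need the management VLAN, but the other VLANs can be pre-staged as tagged port groups on the default standard switch. A sketch of the pattern (the port group names and vSwitch0 are my own placeholders, run on each ESXi host):

```shell
# Create a port group on the default standard switch, then tag it with the vMotion VLAN
esxcli network vswitch standard portgroup add -p "vMotion" -v vSwitch0
esxcli network vswitch standard portgroup set -p "vMotion" --vlan-id 110

# Same pattern for vSAN traffic
esxcli network vswitch standard portgroup add -p "vSAN" -v vSwitch0
esxcli network vswitch standard portgroup set -p "vSAN" --vlan-id 120
```

The corresponding trunk ports on the physical switch need to carry all five VLANs for this to work.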

And I also planned the initial VMs and their roles:

| Hostname | FQDN | Function |
|---|---|---|
| DC01 | dc01.gnet.local | DC, DHCP, DNS, NTP |
| ESX01 | esx01.gnet.local | Physical ESX Server |
| ESX02 | esx02.gnet.local | Physical ESX Server |
| SDDC01 | sddc01.gnet.local | VCF Installer\SDDC Manager |
| VC01 | vc01.gnet.local | vCenter (Management Domain) |
| VCF01 | vcf.gnet.local | VCF Operations |
| NSX01 | nsx01.gnet.local | NSX Manager VIP for Management Domain |
| NSXM01 | nsxm01.gnet.local | NSX Manager for Management Domain |
| EDGE01 | edge01.gnet.local | NSX Edge 1 for Management Domain |
| EDGE02 | edge02.gnet.local | NSX Edge 2 for Management Domain |
| OPSFM01 | opsfm01.gnet.local | VCF Operations Fleet Manager |
| AUT01 | aut01.gnet.local | VCF Automation |
| VCFOD | vcfod01.gnet.local | VCF Offline Depot (Ubuntu) |
| VC02 | vc02.gnet.local | vCenter Server (Workload Domain) |
| NSX02 | nsx02.gnet.local | NSX Manager (Workload Domain) |

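Because the VCF Installer validates DNS for every component before deploying, it’s worth checking that each planned name resolves against DC01 first. A minimal pre-flight sketch, runnable from any machine whose resolver points at the lab DNS:

```shell
#!/bin/sh
# Check that every planned VCF hostname resolves in the gnet.local zone
hosts="dc01 esx01 esx02 sddc01 vc01 vcf nsx01 nsxm01 edge01 edge02 opsfm01 aut01 vcfod01 vc02 nsx02"
for h in $hosts; do
  if nslookup "$h.gnet.local" >/dev/null 2>&1; then
    echo "OK   $h.gnet.local"
  else
    echo "FAIL $h.gnet.local"
  fi
done
```

Reverse (PTR) records matter to the installer too, so the same loop with the planned IP addresses is worth running once those are assigned.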
As mentioned previously, the planning element is something I’ve not done before, and as I understand it, the installation of VCF requires a lot of information. I’m hoping this will make life easier, especially if a rebuild is needed at any point.

I already have a small mini desktop machine running Proxmox that hosts a Domain Controller (providing AD, DHCP, DNS & NTP) and an Ubuntu server that will host the VCF Offline Bundle repository, so the bundles don’t have to be downloaded again should I need to rebuild for any reason. The Ubuntu VM has a 1TB SSD attached, which I’m hoping will be enough. Now that everything is ready, I can start configuring the Offline Bundle Depot download and the ESX build on the Minisforum. Stay tuned!