
Part 1: Introduction to my Homelab Proxmox Micro Cluster

This entry is part 1 of 3 in the series A Beginner’s Proxmox Cluster: From Single Node to HA

For the past five years, I had been running a single-node Proxmox machine at home on a Lenovo M720q micro form factor PC. Honestly, it was one of the best decisions I made for my home network setup. I started out hosting pfSense on it, then eventually moved to OPNsense as my router and firewall of choice.

The whole thing was inspired by a handful of early micro-server pioneers in the homelabbing community, and really took shape after discovering ServeTheHome’s legendary TinyMiniMicro Homelab series. If you haven’t come across that series before, it’s basically a deep dive into small, energy-efficient machines that punch well above their weight, perfect for running a home lab without your electricity bill going through the roof.

The beauty of this setup is how ridiculously low-power it is, while still delivering a fully open-source router and firewall. A cheap HP NC365T quad-port NIC handled the networking side of things. This four-port 1GbE RJ45 NIC is built around Intel's I340-T4 chipset, which continues to have rock-solid Linux/FreeBSD driver support and plays nicely with OPNsense out of the box.

Five years in, the lure of cheap 2.5GbE-10GbE networking became too much to ignore. I decided to go all-in: tear down the old router setup and rebuild everything from scratch, this time as a proper multi-node Proxmox cluster.

What is Proxmox and Why Use It?

Proxmox VE is a free, open-source virtualisation platform used everywhere from enterprise data centres to home labs. It combines two tested technologies, KVM for full virtual machines and LXC for lightweight containers, all managed through a surprisingly polished web-based interface. From a single browser tab you can spin up VMs and containers, monitor resources, manage backups, configure networking, and a whole lot more.

[Screenshot: the Proxmox VE web dashboard]

Where Proxmox really starts to shine, though, is when you cluster multiple nodes together. Rather than running everything on one machine, a cluster links several computers to act as a single, unified system, spreading the workload and adding resilience. The sweet spot for a home lab is three nodes, and there’s a good reason for that: Proxmox uses a quorum-based voting system to maintain stability, and an odd number of nodes guarantees that a clear majority vote is always possible. With three nodes, if one goes down, the remaining two still hold a majority (two votes out of three), so the cluster stays quorate and can safely decide what to do with the failed node’s workloads.
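The quorum maths is simple enough to sketch in a few lines of shell: a cluster of n nodes needs a strict majority of floor(n/2) + 1 votes, which is exactly why three nodes tolerate a failure while two tolerate none (and why a fourth node buys you no extra resilience):

```shell
# Quorum for an n-node cluster is a strict majority: floor(n/2) + 1.
# failures_tolerated is how many nodes can drop before quorum is lost.
for n in 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "nodes=$n quorum=$quorum failures_tolerated=$tolerated"
done
```

Running this shows that a 2-node cluster tolerates zero failures, while 3 nodes (and even 4) tolerate exactly one, hence three being the sweet spot.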

Benefits of a 3-Node Proxmox Cluster

Running a three-node cluster unlocks some useful features that a single-node setup simply can’t offer. The big one is High Availability. If a node fails, Proxmox will automatically restart your VMs on one of the surviving nodes, eliminating the single points of failure that keep homelabbers up at night. Alongside that, Live Migration lets you move running VMs between nodes without any downtime at all, which is incredibly handy when you need to carry out hardware maintenance without taking services offline. All three nodes are controlled from the same unified web interface, so there’s no juggling multiple dashboards or connections. And when you eventually want more resources, scaling up is as simple as adding another node to the cluster.

Hardware Choice for a Beginner-Friendly Cluster: The Lenovo M720q Tiny

For the three nodes, I’m sticking with the Lenovo ThinkCentre M720q Tiny, a compact low-power micro-PC that’s become something of a favourite in the homelabbing community. It’s quiet, energy efficient (sub-10W at idle), and readily available second-hand for a very reasonable price. All my machines were purchased on eBay between £70-90 each. But the real reason it stands out from the crowd of other mini-PCs is a proprietary PCIe riser slot tucked inside the chassis, something most micro-PCs simply don’t support.

[Photo: the three Lenovo M720q cluster nodes]

That PCIe slot is the killer feature here. It opens the door to fitting a low-profile expansion card, which means you can drop in a proper 10GbE network card or HBA controller rather than being stuck with whatever’s soldered onto the motherboard. Other Lenovo machines offer similar features (the M910x, for example, has the additional bonus of two M.2 expansion slots instead of one). Thankfully, there is an STH forum post comparing the myriad options.

Before finalising hardware selection, I planned for the following nodes in my cluster:

  • Proxmox Server #1 – Core Networking: router (OPNsense), DNS server (AdGuard Home/Pi-hole), reverse proxy (Caddy/Nginx) and wireless controller (UniFi).
  • Proxmox Server #2 – Self-hosted Applications: dashboard, smart home (Home Assistant), document manager (Paperless-ngx), photo manager (Immich), password manager.
  • Proxmox Server #3 – Storage/Backup NAS: NAS (TrueNAS), backups (Proxmox Backup Server), media server, cloud storage (Nextcloud).

Here’s my recommended M720q hardware specifications:

  • CPU: The M720q supports 8th and 9th Gen Intel Core processors. The i5-8500T or i5-9500T (both 6 core, 6 thread) are excellent choices, offering a great balance of performance and low-power efficiency. Many second-hand machines come with an i3-8100T (4 core, 4 thread), which is still plenty for most compute tasks.
  • RAM: Up to 64GB DDR4 is supported via two SO-DIMM slots. 16GB per node is a solid starting point, but 32GB is my recommended optimum for running a healthy mix of VMs and containers. Two sticks work best to take advantage of dual-channel bandwidth. With RAM prices skyrocketing since their initial surge in late 2024, the ongoing AI-driven market has made choosing an affordable capacity a major challenge in 2026.
  • Storage:
    • Boot/OS: For my boot drive, I’ve opted for a 16GB Intel Optane Drive connected via an M.2 A+E adapter in the often-redundant WiFi slot. Despite costing less than £5 on AliExpress, these drives utilise 3D XPoint technology to offer far superior write endurance compared to standard consumer NVMe SSDs. This is particularly vital for Proxmox, which is notorious for ‘shredding’ flash-based storage with constant logging and I/O, though configuration tweaks can help mitigate this wear.
    • VMs/LXCs: An M.2 NVMe drive for a fast, responsive VM/container environment.
    • Data/Backups: This family of micro-PCs also has an optional 2.5″ SATA expansion bay, connected to the motherboard via a thin ribbon cable, providing room for an additional SSD to store your data and/or backups.
[Photo: 16GB Intel Optane M.2 drive]

ASPM – Why it Matters (Especially in 2026!)

If you’re building a server that runs 24/7, power consumption isn’t just a nerdy footnote – it directly affects your electricity bill. In 2026, that matters more than ever. Ongoing conflicts in Ukraine and Iran have continued to drive energy prices up across Europe, and for those of us in the UK (where electricity costs around £0.30 per kWh at peak), every watt counts. That’s where ASPM comes in.
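To put numbers on it, here’s a rough shell sketch of what a given average draw costs per year at that £0.30/kWh rate (the two sample wattages below are illustrative, loosely based on my own idle measurements):

```shell
# Rough annual running cost: watts -> kWh/year -> pounds, at £0.30/kWh.
yearly_cost() {
  awk -v w="$1" -v p=0.30 'BEGIN { printf "%.2f\n", w / 1000 * 24 * 365 * p }'
}
yearly_cost 16.8   # node idling with a non-ASPM NIC  -> 44.15
yearly_cost 7      # same node idling sub-10W with ASPM working -> 18.40
```

That’s roughly £26 a year saved per node just from letting the machine idle properly, which across three nodes pays for a good chunk of the hardware.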

ASPM (Active State Power Management) is a PCIe power-management feature that lets expansion cards drop into low-power link states when they’re not actively in use, rather than drawing full power around the clock. Crucially, the CPU package can only reach its deepest idle states (C6-C10) when every PCIe device on the bus plays along, so a single non-ASPM card can keep system-wide power consumption elevated. It’s the kind of thing that’s easy to overlook when speccing out a build, but over months and years of continuous operation, the savings add up.
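You can check whether ASPM is actually active for each device with lspci. The sketch below shows the check against a captured sample LnkCtl line (the kind of output a non-ASPM device reports) so the filtering logic is reproducible; on a live system you’d grep the real lspci output instead:

```shell
# On a live system, run as root and look at each device's LnkCtl line:
#   lspci -vv | grep -E 'LnkCtl:'
# Here we parse a captured sample line from a device with ASPM disabled.
sample='LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+'
case "$sample" in
  *'ASPM Disabled'*)        status=off ;;      # card never enters low-power link states
  *'ASPM L0s'*|*'ASPM L1'*) status=on  ;;      # e.g. "ASPM L1 Enabled"
  *)                        status=unknown ;;
esac
echo "ASPM: $status"
```

If every device reports an L0s/L1 state enabled, PowerTOP should show the package reaching the deeper C-states; one “ASPM Disabled” card is enough to pin it in C3, as the dump below illustrates.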

Unfortunately, old hardware often doesn’t support ASPM natively, especially NICs and HBA controllers. I initially installed an Intel X550-T2 NIC (a 2-port 10GbE Base-T RJ45 NIC) before coming across this highly referenced 2024 blog post by Z8, who explored this in great detail.

PowerTOP output for a non-ASPM NIC (Intel X550-T2) in a Lenovo M720q (avg idle power 16.8W)
           Pkg(HW)  |            Core(HW) |            CPU(OS) 0
                    |                     | C0 active  14.3%
                    |                     | POLL        0.0%    0.0 ms
                    |                     | C1          1.1%    0.1 ms
C2 (pc2)    7.2%    |                     |
C3 (pc3)   52.7%    | C3 (cc3)    1.1%    | C3          1.5%    0.1 ms
C6 (pc6)    0.0%    | C6 (cc6)   12.8%    | C6         14.3%    0.5 ms
C7 (pc7)    0.0%    | C7 (cc7)   64.5%    | C7s         0.0%    0.0 ms
C8 (pc8)    0.0%    |                     | C8         20.7%    1.0 ms
C9 (pc9)    0.0%    |                     | C9          0.0%    0.0 ms
C10 (pc10)  0.0%    |                     |
                    |                     | C10        45.5%    6.6 ms
                    |                     | C1E         2.8%    0.1 ms

Expansion Cards – Networking and Storage

With that in mind, I was deliberate about picking expansion cards that support ASPM. The Intel X710-DA2 dual-port 10GbE SFP+ card was a natural fit. It is very low power (3W), has excellent driver support under Linux and, crucially, supports ASPM. For storage expansion, I went with a cheap ASM1166-based 6-port SATA PCIe card, which is similarly frugal (2-3W) with ASPM support.

Now, there are some reported issues between the ASM1166 and TrueNAS/ZFS, but for my use case that’s not a dealbreaker. I’m simply using it to deploy a low-priority mirrored ZFS pool as a backup of my main TrueNAS server, rather than anything critical. If you need something more reliable, the Broadcom HBA equivalents (LSI 9400-8i or LSI 9500-16i) are rock solid, but you’ll pay roughly 5-10 times the price and they idle at a noticeably higher wattage. For what I need, the ASM1166 hits the sweet spot.

All of these cards can neatly slot into the M720q’s PCIe riser, keeping the overall power draw of each node impressively low for what they’re capable of.

Finally, each node requires an additional ethernet port, so that our cluster’s Corosync traffic can be physically separated to reduce latency. Realtek’s RTL8125B 2.5GbE NICs are the best bang for the buck and come in PCIe format as well as in a handy M.2 A+E interface. I don’t, after all, require the WiFi adapter.
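For reference, that dedicated link ends up in /etc/pve/corosync.conf as a separate ring address per node. A minimal sketch, assuming a 10.10.10.0/24 subnet on the 2.5GbE ports (the hostnames and addresses here are illustrative, not my actual config):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # dedicated 2.5GbE Corosync link
    ring1_addr: 192.168.1.11  # fallback link over the main LAN
  }
  # pve2 and pve3 follow the same pattern with their own addresses
}
```

Corosync will prefer the ring0 link and fail over to ring1 if it drops, so cluster heartbeats stay off the busy LAN without becoming a single point of failure.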

My Personal Shopping List

  • 3 x Lenovo ThinkCentre M720q Tiny: eBay £70-90 each (NVMe/SSD drives I already owned are excluded from the costings).
    • Proxmox Server #1: Intel i3-8100T, 16GB RAM, 16GB Intel Optane boot, 256GB NVMe + 10GbE NIC.
    • Proxmox Server #2: Intel i5-8400T, 32GB RAM, 16GB Intel Optane boot, 1TB NVMe + 2.5GbE NIC.
    • Proxmox Server #3: Intel i5-8400T, 16GB RAM, 256GB NVMe + SATA controller + 2.5GbE NIC.
  • 1 x Intel X710-DA2 10GbE NIC: Dell branded 5N7Y5 are often cheaper. eBay £45.
  • 1 x M.2 A+E KEY 1×2.5GbE Realtek RTL8125B NIC: 2.5GbE NIC for storage node #3. AliExpress £10.
  • 1 x PCIe 1×2.5GbE Realtek RTL8125B NIC: 2.5GbE NIC for node #2. AliExpress £5.
  • 1 x ASM1166 6-Port SATA Controller: Make sure it uses ASM1166. AliExpress £16.
  • 1 x 1Gb SFP+ to Base-T RJ45 Module: Low power. AliExpress £4.
  • 3 x Lenovo Riser Adapter x16 Part 01AJ940: Riser adapter, bespoke to Lenovo. AliExpress £4 each.
  • 2 x M.2 A+E to M key adapter: To connect Intel Optane to Wifi slot. AliExpress £2 each.
  • 2 x 64GB USB drive: for Proxmox Backups. Amazon £5 each.
  • 1 x Hisource Unmanaged Switch: (4x 2.5G RJ45 + 2x 10G SFP+ ports). AliExpress £15.
  • 1 x SFP+ DAC Cable: To connect Proxmox #1 to the switch. AliExpress £5.
  • 4 x CAT6 Cables: To connect Proxmox #2 and #3 to the switch. AliExpress £1 each.

Grand Total: £320-340

What’s Next?

Now that we’ve covered the why and the what, it’s time to get our hands dirty. In Part 2, we’ll walk through assembling the hardware, fitting the PCIe riser, slotting in the expansion cards, and getting each node ready for deployment.

