Building a Home Lab Server in 2022
April 04, 2022 (last update: November 22, 2022)


Until now, I have used various embedded and consumer computers to run both short- and long-term tasks: PC Engines boards, Raspberry Pis, NUCs, and similar small desktops. They are fine (and very quiet), but they obviously limit what can be done, and there is always a need to re-install the OS and so on. Because these computers are usually not powerful enough and have no remote management capability, running a virtualization solution on them does not make much sense for me. So, to overcome these issues, I finally decided to build a reasonably powerful, proper server to use at home.

A powerful consumer computer (8-16 cores, 64GB+ RAM) would be one solution, but such machines usually have no remote management capability, have a limited number of PCIe lanes, lack ECC support, and are in general not optimized for stability. It might still have been OK, but I decided to take the enterprise route.

So my purpose is to have a reasonably powerful “proper” virtualization server.

It is common to purchase servers from well-known vendors like HP and Dell. The remote management capability is usually developed in-house by these manufacturers, and they optimize the servers for particular enterprise needs and for datacenters. However, this is not optimal in my case: I will run the server inside my home, so it has to be as silent as possible, which is not a concern in a datacenter. Also, not surprisingly, these servers are pretty expensive if you go for high-performance options.


Budget

Initially I set a budget of up to 2500 USD for an entry-level, simple server with consumer-grade components, but then I increased this to 4500 USD to have a proper server, not just a powerful consumer computer.

It is not very straightforward to compare this to an HP or Dell server, since it is difficult to configure the same system. Such servers are usually either less powerful (fewer cores, no NVMe) for 1-CPU systems or too powerful (>16 cores and 2 CPUs), and most configurations are only available as rack, not tower, models. It is not the CPU but, I think, the RAM and storage prices that are much higher in ready-to-buy servers, so a similar configuration would probably be more than 6K USD.

If consumer components (Ryzen 9 5950X, an X570 mainboard without remote management, no ECC memory) were used, the cost could be decreased by approx. 1500 USD to less than 3000 USD.

Bill of Materials

Component        Model                        ~Price (USD)
Mainboard        Supermicro H12SSL-NT         700
RAM              8x Micron DDR4 RDIMM 16GB    950
Storage          2x Samsung PM9A3 1920GB      760
GPU              none *                       -
Network          onboard **                   -
Case             Corsair 275R Airflow         85
Power Supply     Seasonic Prime TX-1000       340
CPU Fan          Noctua NH-U14S               110
Front Fan        Noctua NF-A14                70
Top & Rear Fans  2x Noctua NF-S12A            40

Total: ~4100 USD

*: The remote management controller includes a VGA controller; I am not using any other GPU at the moment.

**: I am using the onboard 10G LAN ports.



CPU: AMD EPYC 7313P

A very high-end desktop CPU (12-16 cores), an Intel i7/i9 or AMD Ryzen, would actually be enough in terms of computational power. However, these usually do not support ECC (or you have to match a particular CPU with a particular mainboard very carefully), and their mainboards do not have remote management options. For this build, remote management is a must, and ECC is probably a must.

EPYC 7313P


There are so many Xeon and EPYC products, and it is hard to choose. I selected AMD EPYC 7313P, because:

  • It is available. Many of these server CPUs are not in stock, so they are actually impossible to purchase.
  • A mainboard with the features I am looking for and supporting this CPU is available.
  • It has 16 cores. I was actually planning to build a server with 8 or 12 cores (and I was considering a desktop CPU), but then I decided to build one that would last much longer and consolidate probably everything I would ever need for home use and personal projects.
  • It has many PCIe lanes.
  • In case I need more than 16 cores in the future, there are EPYC 7003 series CPUs with up to 64 cores (but these are naturally quite expensive at the moment).

Mainboard: Supermicro H12SSL-NT

One problem with EPYC is that there seem to be only a few mainboard options compared to Xeon, particularly for the EPYC 7003 series. Two product lines are the ASRock ROMED8 and the Supermicro H12SSL. They are very similar, and each has a few variants. Because of availability, and because I do not plan to use an onboard storage controller (for SAS and RAID), I purchased a Supermicro H12SSL-NT. It has 8x DIMM slots, 2x M.2 PCIe 4.0 x4 slots, many PCIe slots, and remote management, so everything I need. I already have 10Gb network adapters and plan to install them, so the H12SSL-i might have worked as well, but it was not available to purchase.

SP3 Socket on H12SSL


I did not list it in the Bill of Materials, but I also installed a TPM 2.0 module (Supermicro AOM-TPM-9665V).

RAM: 8x Micron DDR4 RDIMM 16GB MTA9ASF2G72PZ-3G2B1

Quite an expensive item in the build, because there are 8 of them. This is one of the parts recommended by Micron for the H12SSL-NT. EPYC is quite unique in that it has 8 memory channels, so you get the best out of the CPU when you populate all 8 DIMM slots. Initially, I considered purchasing 8x 8GB for a total of 64GB, then decided to get 4x 16GB and add 4 more in the future when needed. In the end, I decided to install all 8 now.
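As a back-of-envelope check (my own calculation, not from any vendor document), the theoretical peak bandwidth of 8 channels of DDR4-3200 can be computed as follows:

```python
# Theoretical peak memory bandwidth with all 8 channels populated.
# DDR4-3200 (the -3G2 part suffix) performs 3.2e9 transfers per second,
# 8 bytes (64 bits) per transfer per channel.
channels = 8
transfers_per_second = 3.2e9
bytes_per_transfer = 8
peak_gb_per_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"theoretical peak: {peak_gb_per_s:.1f} GB/s")  # theoretical peak: 204.8 GB/s
```

The STREAM result mentioned later (~110 GB/s) is roughly half of this theoretical figure, which is a typical range for an untuned run.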

Storage: 2x Samsung PM9A3 1920GB

I first purchased a good consumer unit, the Samsung 980 Pro. Then I realized that this one (the PM9A3), an enterprise unit, is priced almost the same. It is a bit slower than the 980 Pro but has at least 3x the endurance.

Supermicro lists only Samsung PM983 in the compatibility list, but it seems PM9A3 works fine as well.

I actually purchased only one at first, as I was not sure what kind of main storage I wanted to use (I was considering iSCSI). Then I decided to use Proxmox VE with ZFS, so I purchased another one to create a ZFS mirror pool.

The PM9A3 comes without a heatsink. I do not know if one is needed, but I purchased ICY BOX IB-M2HS-1001 heatsinks and am using them.


GPU

I actually installed a GPU initially, but then I decided to keep the GPU in my PC, so no other video card is installed at the moment.


Network

The mainboard has 2x 10G network ports provided by a Broadcom chip. I was initially planning to use another network adapter, but then decided to use only the onboard ones.

Case: Corsair 275R Airflow

I do not care much about the case. I do not need any 5.25" or 3.5" bays, and I only care that it has a reasonably open design. This model seemed to fit my requirements, and it is not expensive. It supports a 165mm CPU cooler height, a 140mm fan at the front, and a 120mm fan at the back. It also supports 140mm fans at the top, but over the RAM slots only 120mm fans can be used with the RAM modules I am using. It is a bit large, but it is the minimum size I could find supporting these cooling options. It has a glass side panel, unnecessary but a nice touch. The case also has magnetic dust filters on top and at the front, which is a nice idea.

Power Supply: Seasonic Prime TX-1000

I do not know much about power supply brands, but Seasonic seems to be a well-known one. This particular power supply is also a bit expensive compared to the alternatives. 1000W might be a bit too much, but running a power supply well below its maximum load is a good thing. I calculated that the maximum power the server may consume (with a reasonable GPU) is a little over 500W.
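That sizing can be sketched as a simple budget. Only the 155W CPU TDP comes from this post; the other per-component figures below are my own rough assumptions, not measured values:

```python
# Rough power budget; all values except the CPU TDP are assumptions.
budget_w = {
    "CPU (155W default TDP)": 155,
    "GPU (assumed mid-range)": 250,
    "mainboard + BMC (assumed)": 50,
    "8x RDIMM (assumed)": 40,
    "2x NVMe (assumed)": 15,
    "fans + misc (assumed)": 20,
}
total = sum(budget_w.values())
print(f"estimated maximum: {total} W")  # estimated maximum: 530 W
```

That is a little over 500W, which keeps a 1000W unit at roughly half load even in the worst case.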

This power supply has a hybrid mode, meaning it can fully stop its fan when possible; it is mounted with the ventilation side on top.

Fans: Noctua

The Corsair case comes with 3 non-PWM 120mm fans. As I want the computer to be as silent as possible, I removed these and installed a Noctua 140mm at the front (I already had this; no particular reason to use 140mm), and Noctua 120mm fans at the rear and on top. For the CPU, I installed a Noctua NH-U14S.

The NH-U14S is supported by the mainboard, but only in the horizontal orientation (with the fan blowing towards the top of the mainboard).


Installation

I acquired most of the parts pretty quickly. The mainboard, M.2 storage, and RAM modules were not in stock, but they arrived in less than a week. Because I purchased the second M.2 unit and the second set of 4 RAM modules later, it took more than two weeks to finalize the build.

During installation, I removed the 3.5" cage and the front panel audio cable from the case, as I will not use them. Other than the issue with one of the screw locations on the mainboard, the installation was straightforward.

Home Lab Server 2022 with a GPU and 2x NIC installed. However, I decided to remove the GPU and use the onboard LANs only.


Issues and Concerns

  • The mainboard has one screw hole at a location that is not compliant with the ATX standard, so I had to remove the standoff from the case and leave this location empty. That is strange.

  • The heatsink on the mainboard (I think it is for the 10G network IC) concerns me a little. It is next to the PCIe slots at the back. It does not limit anything (space-wise), but having a heat source there does not make me happy. This might be a reason to prefer the H12SSL-i over the -NT (or the -C over the -CT). Because it does not have 10G ports, the H12SSL-i (and -C) has no heatsink at that location.

  • The first PCIe slot (PCIe Slot 1, x16) on the H12SSL is very close to the (front) USB header connector. I do not think it can be used safely with a PCIe x16 card that has a heatsink.

  • The design of the case makes it a bit harder to unplug the network cable from the management/IPMI port of the mainboard. As this will not be done often, it is not an important issue.

  • Supermicro does not provide customizable fan control. I hope that is not going to be too big an issue with quiet fans; otherwise, I will need to find some workarounds.

Recommendations and Alternatives

  • You might want to consider the H12SSL-i if you do not need 10G support on the mainboard. If you want to use ESXi and need RAID support for local storage, the H12SSL-C (or -CT, with 10G ports) might be a better choice.

  • It was difficult for me to find ASRock mainboards; that is why I did not consider them.

  • When you need more cores, the EPYC 7443P (24-core) is priced OK at the moment (~1.5x the price of the 7313P), whereas the 7543P (32-core) is too expensive (almost 3x).

Performance Tests

After the build was completed, I ran a few performance tests before installing the virtualization solution.

The PassMark score is as expected:

PassMark Score

I also ran the STREAM benchmark, and the best result I could get was around 110 GB/s. This was without any particular optimization, so I think this result can be improved.

Supermicro BMC Fan Control

Supermicro fan control is very basic; there are only 4 fan modes (optimal, standard, full, heavy I/O) to choose from. I use the optimal mode (which, I think, starts the fans at 30% speed), but it is important to set the lower and upper thresholds of the fans; this can be done with ipmitool.
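As a sketch of the threshold part (the sensor name FAN1 and the RPM values below are placeholders, not my actual settings; read your real sensor names and values with `ipmitool sensor` first), the lower thresholds of a fan sensor can be set like this so that the BMC does not treat slow, quiet fans as failed:

```python
# Build the ipmitool invocation that sets the lower thresholds
# (non-recoverable, critical, non-critical) of one fan sensor.
# Sensor name and RPM values here are illustrative placeholders.
def fan_lower_thresh_cmd(sensor: str, lnr: int, lcr: int, lnc: int) -> list[str]:
    return ["ipmitool", "sensor", "thresh", sensor, "lower",
            str(lnr), str(lcr), str(lnc)]

cmd = fan_lower_thresh_cmd("FAN1", 100, 200, 300)
print(" ".join(cmd))  # ipmitool sensor thresh FAN1 lower 100 200 300
# Run with e.g. subprocess.run(cmd, check=True) on the host itself,
# or insert "-H <bmc-ip> -U <user> -P <password>" to talk to the BMC remotely.
```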

Virtualization: Proxmox VE

I have been using ESXi for 10+ years, and this was actually what I had in mind when I started building this server. Just out of curiosity, I searched for alternatives. I know about Hyper-V, but I would prefer ESXi over it. Then I saw Proxmox VE and decided to give it a try. Having been a Linux user for many years, I really like it, more than ESXi. It is simpler to use and understand than ESXi, and it supports ZFS, so I can run it purely on local storage (2x M.2) without hardware RAID support. It also supports containers in addition to virtual machines. So I decided to keep it as the virtualization solution.

Proxmox VE Summary


NUMA Topology

Because EPYC is a multi-chip module (in the sense that there are actually 4 core chiplets plus an I/O die interconnected on a single package), the L3 cache and I/O resources are distributed in a certain way among these chiplets. It is possible to make a single-socket EPYC system, like the one built here, declare itself as having multiple NUMA nodes. This is done with the NUMA Nodes per Socket (NPS) and ACPI SRAT L3 Cache as NUMA Domain BIOS settings. Which setting is better, which is better for Proxmox or KVM, and whether it really makes any difference for your setup is a big topic, but AMD recommends (source: AMD Tuning Guide, AMD EPYC 7003: Workload), for KVM, setting NPS=4 and enabling ACPI SRAT L3 Cache as NUMA Domain, so I keep it like that at the moment. I recommend checking out the various AMD documents if you need more information.


Conclusion

I am writing this conclusion after 2 weeks or so. I am pretty happy with the setup so far. It is not a surprise, but the capacity and capability of the system are very good. I think it will take a very long time until I need another server at home.

Virtualizing pfSense and a Public Speedtest Server

(Update on 2022-11-21)

I decided to virtualize my physical pfSense firewall and consolidate it onto this server. I installed an Intel X710-BM2 and an Intel X540-T2 network adapter and assigned them (via PCIe passthrough) to pfSense. I also installed a public Speedtest server. Because of the NICs, I decided to add another 140mm fan to the system and connected it to the FANB header, which is controlled by the system temperature, not the CPU temperature.

I installed the NICs according to the NUMA topology, in PCIe slots 3 and 5; they are both on NUMA node 0. I also pinned the pfSense VM to the corresponding cores in NUMA node 0.

Current Setup with 2 NICs


Energy Consumption

(Update on 2022-11-22)

You might be wondering about the energy consumption of this server, particularly with the increasing energy prices in Europe.

I did a primitive test just to give an idea. I have the server as described above, with the 2 NICs (Intel X710 and Intel X540). Previously, in another post, I saw the CPU reporting a minimum of around 55W; considering the mainboard, memory, and NICs, with almost no load (some VMs running but all below 5% load), I see around 110W consumption. I read the energy use from a myStrom smart plug (average energy consumed per second, the Ws field in the report API call).
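For reference, a minimal sketch of pulling such a reading from the plug (the IP address is a placeholder, and the sample payload values are illustrative; only the Ws field name is taken from the text above):

```python
import json
import urllib.request

def read_ws(plug_ip: str) -> float:
    """Fetch the myStrom report endpoint and return the Ws reading
    (average energy consumed per second, effectively watts)."""
    with urllib.request.urlopen(f"http://{plug_ip}/report", timeout=5) as resp:
        return json.load(resp)["Ws"]

# Parsing only, with an illustrative payload:
sample = '{"power": 110.4, "Ws": 110.1, "relay": true}'
print(json.loads(sample)["Ws"])  # 110.1
```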

In order to test the maximum consumption, I created a VM with 32 cores and 120GB memory, then ran stress -c 32 -m 32 --vm-bytes 2048M. This increased the consumption to 226W (and it stayed there consistently). The default TDP of the EPYC 7313P is 155W, so I think this is a reasonable result. Basically, the CPU runs roughly between 50W and 150W. At full load, the CPU temperature also reached 60C before the fan speed slightly increased (I stopped the test there; I am not sure at what temperature the fan control tries to keep it).

With the increased electricity prices in Switzerland starting from 2023, and I hope I have calculated this correctly, this means the server will cost approx. 4 CHF per week at idle (1 CHF is approx. 1 EUR), and approx. two times more at full load. That means it will cost at least 200 CHF per year.
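A quick sanity check of that estimate (the 110W idle figure is the measurement above; the 0.21 CHF/kWh tariff is my assumed placeholder, so substitute your own rate):

```python
# Weekly and yearly electricity cost at the measured idle draw.
idle_w = 110        # measured idle consumption, W
tariff = 0.21       # CHF per kWh (assumed placeholder)
chf_per_week = idle_w * 24 * 7 / 1000 * tariff
chf_per_year = idle_w * 24 * 365 / 1000 * tariff
print(f"{chf_per_week:.1f} CHF/week, {chf_per_year:.0f} CHF/year")
# 3.9 CHF/week, 202 CHF/year
```

This reproduces the approx. 4 CHF per week and at least 200 CHF per year figures; at full load, the cost scales roughly with the 226W draw.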