As with everything this year, my cloud costs have been steadily increasing, to the point where building a locally hosted server has become more attractive. I have been renting a dedicated Hetzner server for over six years at this point, which runs all my containerized self-hosted apps, stores personal data and backups, and is my platform for learning new deployment strategies with Docker/Kubernetes.
Parts
As this is a locally hosted server, we now have to take the fun “power bill” into account: ideally, we want one node to fulfill every task instead of having separate nodes for compute, storage, etc., to save on the energy each individual node would burn on its own power supply and CPU.
The build has to fulfill the following requirements:
- Rack mountable chassis, redundant power supplies, ATX motherboard compatibility, maximum 4U
- All data drives should be hot-swappable to minimize downtime; no disk-shelf-style or other internal drive layouts
- Intel Quick Sync for live media transcoding
- Somewhat affordable: we don’t want the best in every area, just a well-rounded machine that doesn’t cost a fortune; combining used eBay parts with new internals gets the best value
Component | Part | Source | Price |
---|---|---|---|
Chassis | Supermicro 847BE1C-R1K28LPB | Used/eBay | $1,050 |
Rail Kit | Supermicro MCP-290-00057-0N | Used/eBay | $195 |
Motherboard | ASUS Prime Z790-P WiFi | New/Amazon | $375 |
CPU | Intel i5-13500 | New/Newegg | $375 |
CPU Cooler | Coolerguys 2U Active Vapor Chamber LGA1700 200W CPU Cooler | New/Amazon | $100 |
Memory | 2x Kingston Fury Beast Black 64GB 5200MT/s DDR5 | New/Amazon | $263 |
SSD (Boot) | SABRENT 2TB Rocket 4 Plus | New/Amazon | $232 |
NIC | Asus XG-C100C | New/Amazon | $148 |
HBA | 2x Supermicro LSI 9300-8i | Used/eBay | $180 |
HBA Fan | 2x Noctua NF-A4x10 PWM | New/Amazon | $42 |
SAS Cables | 4x 10Gtek Internal Mini SAS HD SFF-8643, 0.8-Meter(2.6ft) | New/Amazon | $39 |
SAS Cables | 10Gtek Internal Mini SAS HD SFF-8643, 0.5-Meter(1.6ft) | New/Amazon | $34 |
Fan Controller | Noctua NA-FH1 | New/Amazon | $54 |
Fan Controller Power Cable | StarTech.com 12in LP4 to 2X SATA Power Y Cable Adapter | New/Amazon | $10 |
Front Panel to ATX Cable | Supermicro CBL-0084L 15cm 16-Pin Front Control Panel Split / Extension Cable | Used/eBay | $30 |
Total | | | $3,127 |
Chassis and Rails
The Supermicro 847BE1C-R1K28LPB checks all the required boxes: it has 36 hot-swap drive bays, dual redundant power supplies, ATX motherboard support, and plenty of space inside for additional parts.
While the chassis is quite expensive, it comes with two platinum-rated PSUs (1000W each at 120V), seven high-performance chassis fans (36 drives at full power put out about 300W of heat!), and two SAS3/SATA3 backplanes. Unfortunately this listing did not come with rack rails, which had to be purchased separately.

Motherboard
This was a tough choice – since this machine will be a “cloud server” it would have been nice to get some remote out-of-band (OOB) management like IPMI. At the time, enterprise boards running the Z790 or W790 chipset with IPMI were over $1000, which is a very steep premium for IPMI and some ECC support. I opted for a Z790 motherboard with as many PCIe slots as possible, as I will be adding HBAs and 10Gb network cards.
I used a Supermicro CBL-0084L 15cm 16-Pin Front Control Panel Split / Extension Cable to interface between the proprietary Supermicro front control panel and standard ATX power controls.
CPU
While I could have gone for a more powerful chip, the Intel i5-13500 is a great mid-range offering with the same UHD Graphics 770 iGPU found in higher-end chips. It has a default TDP of 65W and idles around 5-10W, yet with raised power limits it can draw over 150W when needed, putting it in line with an i7-12700K, which can use over 250W.
The CPU cooler has to be a “pull-through” design that draws fresh air from the front of the chassis and pushes hot air out the back. The Coolerguys 2U cooler is an interesting product that fits these requirements perfectly. Running a synthetic benchmark with the 13500’s power limit raised to 150W, temperatures never got above 70°C.
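One simple way to generate a comparable sustained all-core load while watching temperatures is the rough sketch below; the exact benchmark used above isn’t specified, and this assumes stress-ng and lm-sensors are installed:

# Load all cores for 10 minutes in the background
stress-ng --cpu $(nproc) --timeout 10m &
# Refresh CPU package temperature and fan readings every 2 seconds
watch -n 2 sensors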
Memory
The memory speed does not matter much, as I will be upgrading to four sticks in the future, which will most likely limit speeds to 4800MT/s anyway. Two sticks of Kingston Fury Beast 64GB were chosen as they were the cheapest DDR5 available at the time.
Network Card
Initially I was planning on going straight to 40G networking with some Mellanox ConnectX-4 QSFP+ cards, as the storage array shown later can push around 15Gbps. Unfortunately, the whole system and switch were quite loud and hot with older 40G networking equipment.
I backtracked a bit and went with the trusty Asus XG-C100C, an NBASE-T card that supports every Ethernet speed from 100Mbps up to 10Gbps. The card connects to my 10Gb switch at the fiber drop through Cat5e in the walls (Cat5e can unofficially carry 10Gb, depending on run length).
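To confirm the in-wall Cat5e run actually negotiated 10Gb, the link state can be checked with ethtool (the interface name below is just an example; substitute your own):

# Show the negotiated speed and duplex for the 10Gb NIC
sudo ethtool enp1s0 | grep -E "Speed|Duplex"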
Host Bus Adapters
Unfortunately the Supermicro chassis only supports half-height PCIe cards, so I could not choose the LSI SAS 9300-16i. A half-height card like the LSI SAS 9400-16i, with four ports, would have been a better choice than two separate cards, but at the time they ran $300-$500 while the two-port 9300-8i was about $90 each.
I connected the 24-bay front backplane to the HBA in the primary PCIe slot, as this slot has a full x16 link, and the 12-bay rear backplane to the HBA in the next x4 slot. This lets the 24 front bays share 8GB/s (the card’s 8 lanes at ~1GB/s each), or about 333MB/s per disk, while the rear backplane’s 12 disks share 4GB/s (4 lanes at ~1GB/s), also about 333MB/s per disk. This configuration leaves plenty of bandwidth headroom with spinning HDDs, as Seagate EXOS drives can only sustain around 270MB/s each.
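To verify that each HBA negotiated the expected PCIe link width after installation, the link status can be checked with lspci (the bus address below is a placeholder):

# Find the SAS3008-based HBAs and note their bus addresses
lspci | grep -i SAS3008
# Check the negotiated link width/speed for one of them (address is a placeholder)
sudo lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"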
Since these HBAs were designed for servers with lots of forced airflow, and the chassis fans here run a lower-RPM curve by default, some active cooling must be added. For this, I zip-tied a Noctua NF-A4x10 PWM fan to each heatsink:

Fan Controller
As our motherboard does not have enough fan headers or power to run all seven internal enterprise fans, we need a separate fan controller. The Noctua NA-FH1 is a great option as it is SATA-powered and can deliver up to 54W. The PWM signal is run off a single fan header on the motherboard, so the fan curve can be tuned in the BIOS: running the fans at 10% and scaling with CPU temperature up to 60% at 80°C keeps everything cool and quiet.
Putting it All Together
ZFS Setup
For the main storage system I went with ZFS using 10-wide RAIDz2 vdevs, which gives up to 478 TiB of usable space with 22TB disks (three 10-wide vdevs fill 30 of the 36 bays, with eight data disks each). There is also the option to go 12-wide since there are 36 bays in the system, however it’s less expensive to buy 10 disks at a time than 12 when upgrading vdevs. For this pool I used old disks I already had lying around: 10x 8TB and 10x 3TB.
Creating the Pool
zpool create -o ashift=12 tank raidz2 \
/dev/disk/by-id/... \
/dev/disk/by-id/... \
raidz2 \
/dev/disk/by-id/... \
/dev/disk/by-id/...
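Once the pool is created, the vdev layout and usable space can be sanity-checked with:

# Show the vdev layout and device health
zpool status tank
# Show the usable space ZFS reports for the pool's root dataset
zfs list tank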
I also set the following dataset properties:
zfs set recordsize=1M tank
zfs set atime=off tank
zfs set compression=lz4 tank
I use recordsize=1M since most files on the array will be larger than 1MB and there will be little small IO. Setting atime=off improves read performance, and compression=lz4 applies opportunistic compression to save space and reduce latency where possible.
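The applied values can be double-checked with zfs get:

# Confirm the properties took effect (child datasets inherit them)
zfs get recordsize,atime,compression tank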
ZFS Read Performance
fio --name=fio --direct=1 --bs=1M --size=64G --rw=read --numjobs=1 --iodepth=32 --runtime=60 --time_based --directory=/tank
fio: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=1972MiB/s][r=1972 IOPS][eta 00m:00s]
fio: (groupid=0, jobs=1): err= 0: pid=310833: Sat Jul 15 22:05:25 2023
read: IOPS=1788, BW=1789MiB/s (1876MB/s)(105GiB/60065msec)
clat (usec): min=90, max=265686, avg=558.18, stdev=2652.93
lat (usec): min=90, max=265687, avg=558.23, stdev=2652.94
clat percentiles (usec):
| 1.00th=[ 182], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217],
| 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 253],
| 70.00th=[ 281], 80.00th=[ 347], 90.00th=[ 562], 95.00th=[ 840],
| 99.00th=[ 6521], 99.50th=[16057], 99.90th=[38536], 99.95th=[50594],
| 99.99th=[72877]
bw ( MiB/s): min= 74, max= 2474, per=100.00%, avg=1790.87, stdev=339.95, samples=120
iops : min= 74, max= 2474, avg=1790.87, stdev=339.95, samples=120
lat (usec) : 100=0.01%, 250=58.59%, 500=29.89%, 750=4.89%, 1000=3.22%
lat (msec) : 2=0.86%, 4=0.94%, 10=0.92%, 20=0.27%, 50=0.35%
lat (msec) : 100=0.05%, 250=0.01%, 500=0.01%
cpu : usr=0.16%, sys=51.18%, ctx=6547, majf=0, minf=269
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=107453,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=1789MiB/s (1876MB/s), 1789MiB/s-1789MiB/s (1876MB/s-1876MB/s), io=105GiB (113GB), run=60065-60065msec
ZFS Write Performance
fio --name=fio --direct=1 --bs=1M --size=64G --rw=write --numjobs=1 --runtime=60 --time_based --directory=/tank
fio: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1988MiB/s][w=1988 IOPS][eta 00m:00s]
fio: (groupid=0, jobs=1): err= 0: pid=170665: Sat Jul 15 22:03:32 2023
write: IOPS=1790, BW=1791MiB/s (1878MB/s)(105GiB/60001msec); 0 zone resets
clat (usec): min=40, max=570845, avg=518.44, stdev=4286.60
lat (usec): min=45, max=570856, avg=556.28, stdev=4287.68
clat percentiles (usec):
| 1.00th=[ 66], 5.00th=[ 73], 10.00th=[ 206], 20.00th=[ 297],
| 30.00th=[ 343], 40.00th=[ 371], 50.00th=[ 396], 60.00th=[ 429],
| 70.00th=[ 465], 80.00th=[ 523], 90.00th=[ 676], 95.00th=[ 807],
| 99.00th=[ 1172], 99.50th=[ 2180], 99.90th=[ 5997], 99.95th=[ 45876],
| 99.99th=[252707]
bw ( MiB/s): min= 546, max= 3762, per=99.84%, avg=1787.99, stdev=486.80, samples=119
iops : min= 546, max= 3762, avg=1787.97, stdev=486.82, samples=119
lat (usec) : 50=0.06%, 100=6.55%, 250=7.12%, 500=62.90%, 750=16.61%
lat (usec) : 1000=4.74%
lat (msec) : 2=1.49%, 4=0.41%, 10=0.02%, 20=0.01%, 50=0.04%
lat (msec) : 100=0.02%, 250=0.02%, 500=0.01%, 750=0.01%
cpu : usr=7.09%, sys=36.02%, ctx=104476, majf=0, minf=14
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,107457,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=1791MiB/s (1878MB/s), 1791MiB/s-1791MiB/s (1878MB/s-1878MB/s), io=105GiB (113GB), run=60001-60001msec
The pool performance is great for my needs, at 1876MB/s reads and 1878MB/s writes. The test file size was set to double the ZFS ARC maximum size to reduce the chance that parts of the file were already in cache.
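On Linux, the ARC cap used to size that test file can be read (and lowered if needed) through the zfs module parameter; the 32GiB value below is only an example, not necessarily the cap used here:

# Current ARC size cap in bytes (0 means ZFS picks its own default)
cat /sys/module/zfs/parameters/zfs_arc_max
# Example: cap the ARC at 32GiB until the next reboot
echo 34359738368 | sudo tee /sys/module/zfs/parameters/zfs_arc_max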
Networking
I have been waiting for fiber availability in my building since I moved in, and fortunately it became available just as I was finishing this build. I went with the fastest plan since I got a great deal – and soon found out that even some speed test servers cannot handle 3Gbps! Finding a server fast enough and selecting it with --server-id, we can see some great results:
speedtest --server-id=3049
Speedtest by Ookla
Server: TELUS - Vancouver, BC (id: 3049)
ISP: TELUS DSL Internet
Idle Latency: 0.55 ms (jitter: 0.08ms, low: 0.45ms, high: 0.68ms)
Download: 3096.07 Mbps (data used: 1.5 GB)
5.65 ms (jitter: 0.27ms, low: 0.59ms, high: 8.15ms)
Upload: 3066.78 Mbps (data used: 1.4 GB)
11.12 ms (jitter: 32.74ms, low: 0.54ms, high: 214.41ms)
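To find a nearby server that can keep up, the Ookla CLI can list candidate servers and their IDs:

# List nearby Speedtest servers and their IDs for use with --server-id
speedtest --servers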
Extras
Every server needs a home, so I picked up a StarTech RK1236BKF server cabinet for cheap on my local marketplace. Along with a pack of ARCTIC F14 PWM PST fans mounted to the rear, airflow and internal temperatures are quite good. The CyberPower CP1500PFCRM2U UPS ensures that ZFS has enough time to shut down safely in case of a power outage.
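The software side of the UPS isn’t covered above; a minimal sketch using Network UPS Tools (NUT), assuming the CP1500PFCRM2U is recognized by the generic usbhid-ups driver, might look like:

# Install NUT (Debian/Ubuntu package name)
sudo apt install nut
# Define the UPS in /etc/nut/ups.conf:
#   [cyberpower]
#       driver = usbhid-ups
#       port = auto
# Start the driver and confirm it can talk to the UPS
sudo upsdrvctl start

With upsmon configured on top of this, the host can shut down cleanly before the battery is exhausted.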

Component | Part | Source | Price |
---|---|---|---|
Server Cabinet | StarTech RK1236BKF | Used/Local | $300 |
Server Cabinet Fans | 5x ARCTIC F14 PWM PST | New/Amazon | $73 |
UPS | CyberPower CP1500PFCRM2U | New/Amazon | $419 |
Total | | | $792 |