Home server suggestions/question!

Associate
Joined
26 May 2008
Posts
227
I am interested in building/buying a server (I think), depending on feedback.

A bit of context first: I have recently installed Ethernet cabling around my circa-1900 house, like an exoskeleton really. All the wires come up the outside of the house into the loft and down into an internal cupboard, where I have a patch box connecting to a Netgear switch. I have an Orbi mesh system (also used as the router, although I'm thinking about pfSense/OPNsense) with wired backhaul, plus various games devices, an Nvidia Shield, TVs, PCs (Linux and Windows 10), Android mobiles etc. I run a Synology 918+ NAS as backup and Plex server, and I have been experimenting with Nextcloud over the weekend.

Generally I have been on a privacy kick lately, getting off the larger corps, although after taking a bit more notice I see Synology also has trackers in their mobile apps, and they seem to be looking into collecting more data via the NAS. So I was wondering about running Nextcloud with end-to-end encryption instead of the Synology proprietary software. I then began to wonder whether the NAS has enough power to do what I want, and I started to think about building a headless dedicated machine.

Previously I ran an i7 NUC headless with Windows 10 as a Roon machine, but it was a bit of a faff to maintain, although that is probably down to my novice abilities!

So I was thinking of possible stuff to run:
  • Nextcloud
  • Plex media server - replacing NAS
  • Pi-hole - replacing Pi 3B??
  • OPNsense - maybe, not sure on this...
  • Roon server (on a VM?)
Anyway, does this sound like the sort of scenario where a dedicated machine/server might work (I could send the NAS back to backup duties exclusively)? If so, what sort of machine should I be looking at? And what software should I run on it?

The internal cupboard is not exactly ideal as a location though; it's not used as a cupboard as it's the back of a shower!! I might need to find a new location. Nowhere obvious comes to mind in my house, perhaps the loft (hot in summer, cold in winter) or under the stairs (20-22°C most of the year round, but not much ventilation). So heat might be an issue and needs to be considered.

Happy for any thoughts or suggestions. Thank you!
 
Soldato
Joined
29 Dec 2002
Posts
7,252
Yes. How fast is your connection, will you be transcoding and do you have PlexPass, what’s the budget?

One thing that jumps out as needing further consideration here is virtualising your firewall. Firstly, you will need to pass the NIC ports directly to the VM, so you need a host capable of this, and virtualising your security has security considerations in itself, but the more obvious practical issue is much simpler:

When you break, upgrade or reboot something, add drives using a non-hot-plug bay, or something goes wrong, you lose the network while the server is down, as does everything else. This needs genuine consideration and likely a fallback plan, because when (not if) something goes wrong and you need to download an ISO etc., it's annoying to tether.

Another is that while Plex can run on a wide range of hardware, if you need to transcode for any reason, HW transcoding is a much, much more efficient way of doing it, but you need PlexPass for this. If you haven't got PlexPass, Jellyfin may work better for you (free HW transcoding, limited app support). When it comes to HW transcoding, Intel is king due to the iGPU's capabilities and quality compared to even current-gen NVEnc. If you want to go this route, look at 6th-gen-onwards CPUs with an iGPU. If you prefer AMD, then combining it with a P400 is about the best value/quality option at present; an M2000 gets an honourable mention if you can find one cheap.

Personally, I like UnRAID for this kind of usage. You can pass hardware directly to VMs etc., it allows you to use mixed drives (so you buy whatever is the best £/TB at the time), has reasonable VM/docker support and is heading in a positive direction development-wise. It does have a cost, but it's pretty low and arguably the best value software I have purchased in the last 30 years. That's not to say TrueNAS or OMV don't deserve careful consideration, but they have implications for expanding storage, though they will offer you higher IO performance if you need that. All support docker/jails and have things like Nextcloud and the other things you mention available as simple click installers.
 
Soldato
Joined
18 Oct 2002
Posts
3,512
Location
UK
I was going to write a long reply but it's just easier to say I agree entirely with everything Avalon says. Unraid has really been the best thing I invested in. The family doesn't appreciate quite how much it does for us now, from media distribution, auto photo backups from their phones, faster browsing with ads blocked network-wide and centralised private password management, to smart home automation they all rely on. I'm sure Synology & QNAP could do the same, but the extensibility, both hardware and software, seems almost limitless sometimes. I think I've been running it for three years now and I still find new and really useful things come along quite often.

As a die hard long time Windows user with a mediocre level of technical hardware/networking knowledge (and nothing previously with Linux), I found Unraid and its WebUI really easy to use. The only thing I don't run off it is my *sense router for the reasons that @Avalon advises.
 
Associate
OP
Joined
26 May 2008
Posts
227
Thank you for the replies!

Yes. How fast is your connection, will you be transcoding and do you have PlexPass, what’s the budget?

My line is 80 meg (fibre to the cabinet, copper to the home), but I am out in the country, so not likely to get any faster!
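As a back-of-envelope check on why the connection speed matters for remote Plex streaming (the ~20Mbit FTTC upstream and the per-stream bitrates here are assumptions, not measured figures):

```python
def max_remote_streams(upload_mbps, stream_mbps, headroom=0.8):
    """Rough count of simultaneous outbound streams that fit in the
    upstream, reserving some headroom for everything else on the line."""
    return int(upload_mbps * headroom // stream_mbps)

# Typical 80/20 FTTC figures; bitrates are illustrative only.
print(max_remote_streams(20, 4))  # 1080p transcoded down to ~4Mbit
print(max_remote_streams(20, 8))  # higher-bitrate 1080p
```

So even a modest line copes with a few remote 1080p transcodes, as long as the server itself can do the transcoding.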

I do have a PlexPass account. And probably a yes to transcoding.

Budget, nothing set so far, but I don't mind spending on it as it looks like it will become quite central. Say, £500 - £1000.

One thing that jumps out as needing further consideration here is virtualising your firewall. Firstly you will need to be passing the NIC ports direct to the VM, so need a host capable of this, virtualising your security has security considerations in itself, but the more obvious practical issue is much simpler:

I was wondering about a separate router anyway, running pfSense or OPNsense. Right now it's an Orbi router. Sounds like I should just keep the router separate for the reasons you describe.

As a die hard long time Windows user with a mediocre level of technical hardware/networking knowledge (and nothing previously with Linux), I found Unraid and its WebUI really easy to use. The only thing I don't run off it is my *sense router for the reasons that @Avalon advises.

Thank you that is good to know! I am somewhat mediocre too!!

Unraid looks very interesting, I will go and have a read.

Any thoughts on hardware?
 
Soldato
Joined
18 Oct 2002
Posts
3,512
Location
UK
Any thoughts on hardware?

Well I think you'll get different opinions, but happy to give you mine for consideration. I'd definitely go for a modern Intel-based solution for its iGPU, which is decent enough for small amounts of transcoding combined with decent low power consumption in most situations (including when idle). Then it sort of comes down to preference really. Personally, with up to £1,000 I'd shuck two 16TB external drives (probably Seagate, because you get Exos enterprise drives inside them), base it on a mITX platform, get an efficient Gold-rated PSU of not massive wattage (not needed), 16GB RAM and a 500GB SSD, and mount it in a small Fractal Node case. That'll only give you 16TB free space (assuming you will use parity), but it's a platform upon which you can expand if you think this is a long-term thing for you. The motherboard will probably come with 4x SATA ports so you can add in another hard drive with no fuss, and when you need more again you can add an HBA card and the case will take lots more drives.

It may seem wasteful having 16TB of unusable storage, but it means long term you'll be able to mix and match, adding pretty much any drive you like in the future as needs require without the faff of swapping parity drives around. Of course you may not be comfortable with shucking drives and the warranty implications, or may not need that much space, so pick drives to suit. You can probably achieve the same usable space for the same budget with 3x 8TB off-the-shelf internal HDDs, for example.
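The parity sums are easy to sanity-check yourself. A tiny sketch of how Unraid single-parity sizing works (the largest drive is dedicated to parity, everything else counts as usable, and drives don't need to match):

```python
def unraid_usable_tb(drive_sizes_tb):
    """Rough model of Unraid single-parity sizing: the largest drive
    becomes parity, every other drive's full capacity is usable."""
    drives = sorted(drive_sizes_tb, reverse=True)
    parity, data = drives[0], drives[1:]
    return sum(data)

# Two shucked 16TB drives: one becomes parity, 16TB usable.
print(unraid_usable_tb([16, 16]))     # 16
# Add a mismatched 8TB later and its whole capacity counts as data.
print(unraid_usable_tb([16, 16, 8]))  # 24
```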

Whatever you do, don't fall for buying some ancient quad-Xeon ex-corporate server off eBay for 50p because it has 64GB of RAM and a gazillion headline GHz of processing power. You don't need it, it won't be that much faster than a modern solution anyway in this use case, and it'll howl like the hounds of hell while running your electricity bill through the roof - false economy.
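To put numbers on the electricity point, here's a quick estimate assuming 24/7 running and an illustrative tariff of 18p/kWh (substitute your own rate, and note the 250W figure for an old server is a guess, not a measurement):

```python
def annual_cost_gbp(watts, pence_per_kwh=18.0):
    """Cost in GBP of a device drawing `watts` continuously for a year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

# ~70W modern build vs a conservative guess for an old multi-Xeon box.
print(f"modern: £{annual_cost_gbp(70):.0f}/yr")
print(f"old Xeon: £{annual_cost_gbp(250):.0f}/yr")
```

The gap alone can cover a decent chunk of the new hardware cost over a few years.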
 
Soldato
Joined
29 Dec 2002
Posts
7,252
@BigT pretty much nailed it, but I wouldn’t suggest going ITX, it’s a very restrictive form factor for this sort of build. 2 RAM slots, 4-6 SATA, one PCIe, not fun if you want to add an additional NVMe and need to go PCIe slot, or wanted to add an HBA and then a multi port NIC to virtualise a firewall, or a tuner, or go 10Gb.

Here are power numbers I pulled a few months back post move while sorting out what I was going to run/rack. All systems used the following unless stated otherwise:

2x16GB DDR4
Dell H310 PCIe HBA
Dell H200e PCIe HBA
Samsung 2TB 970 Evo+
TDK USB flash drive (UnRAID boot)
Quadro M2000 PCIe GPU

System 1 (68w): i5 8400, ASUS B360 ITX, only H200e as single PCIe and using iGPU.

System 2 (57w): Ryzen 3600, ASUS C6H, knock 7w off for single HBA.

System 3 (72w): i7 6800K, MSI x99a SLI Plus - add 5w for 96GB ECC.

I really expected Intel to win here, and for it to be noticeable. I was wrong. Given all transcodes would be in hardware, and NVEnc is a separate chip on the Nvidia GPU so it's not really under significant load, power isn't going to ramp up much when transcoding, and the 3600 packs a bigger punch than the 8400 in terms of processing power.

I would consider a used 4-8 core Ryzen a decent option. The X370/B350 boards - while only supporting up to 3xxx chips - are reasonable (I grabbed an ASRock for £50 the other day), and it's a similar story with X470/B450 boards; used 1600/1700/2600/2700/3600/3700 etc. are likely overkill, but see what you can get for your money. GPU-wise, a Quadro P400 is inexpensive, will do a few HW transcodes easily and is reasonably modern. Failing that, any Intel CPU with an iGPU (aim for Skylake or newer ideally); an HD 630 iGPU is capable of circa 30 1080p transcodes. Used, I personally like the 8th-gen i3 8100: 4 cores (like the older i5s) and very capable for what it is.

Drive-wise, supply is slightly problematic at present. 12-14TB tends to be the best £/TB ratio, 14-16TB Seagate is still Exos territory, and for 12TB you would need to check for current info, but shucking is a lot cheaper than buying retail. WD has had issues with selling shingled (SMR) drives, which really isn't ideal for some usage scenarios, but for an UnRAID pool (not parity) it's not likely to be an issue, especially if you use a cache drive (saves spinning up the storage pool all the time; use an SSD or NVMe for the dockers/VMs).
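When weighing shucked against retail it helps to reduce everything to £/TB. The prices below are placeholders only, not live quotes - check current listings before buying:

```python
def gbp_per_tb(price_gbp, capacity_tb):
    """Simple cost-per-terabyte comparison metric."""
    return price_gbp / capacity_tb

# Hypothetical example prices for illustration.
options = [
    ("14TB external (shuck)", 240, 14),
    ("16TB external (shuck)", 280, 16),
    ("8TB retail NAS drive", 230, 8),
]
for name, price, tb in sorted(options, key=lambda o: gbp_per_tb(o[1], o[2])):
    print(f"{name}: £{gbp_per_tb(price, tb):.2f}/TB")
```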
 
Soldato
Joined
18 Oct 2002
Posts
3,512
Location
UK
I'll need to rethink my Ryzen opinions in this scenario. Good to see some real-world comparisons and impressive sipping of power.
 
Soldato
Joined
29 Dec 2002
Posts
7,252
The only weak point on Ryzen for this kind of usage is RAM. You only get 4 slots and a claimed memory limit of 64GB over 4 slots. Then you have ECC: all Ryzen CPUs and the Pro APUs support ECC (non-Pro APUs don't). Out of the 5 mainstream retail OEMs, Gigabyte supported booting with ECC but didn't enable the ECC functionality, ASUS enabled it but it wasn't perfect and varied by board, and ASRock/ASRock Rack supported it properly. Compare that to X99 with 8 slots and 256GB ECC support, with a cheap/easy path to 128GB, and if you are predominantly RAM-constrained, X99 is still a valid option.

Power-wise, the really cool option is a 3400G: people have had them down to 6.8W idle under Linux, and they can go below 5W with a little effort in A300s. That's a CPU capable of over 9.3K on CPU Mark. Sadly later versions are more difficult to come by at present, but hopefully AMD will revisit that choice.
 
Soldato
Joined
17 Jan 2005
Posts
8,555
Location
Liverpool
Unraid has really been the best thing I invested in.

Definitely! I made the switch over to Unraid at the start of the year and it's been fantastic. I was previously using an old i7-3770 running ESXi but it didn't support hardware transcoding and enlarging the array was a faff so I was constantly out of space.

I bit the bullet and bought an i5-7600, 16GB RAM and an Asus board that supports NVMe and has 6 SATA ports, and set up Unraid. The parts were all second-hand and reasonably cheap. The i5 barely gets pushed and 16GB RAM is overkill for my use; however, the power usage is down, it's quieter, I have more storage space and it gives me so much more flexibility in the future.
 
Soldato
Joined
18 Oct 2002
Posts
3,512
Location
UK
Quietness is a potentially often-forgotten aspect if you go DIY. If you're going to be within earshot of it every day then I think it pays to spend on a case and fans/cooling that keep the noise down. My main Unraid box sits next to me all day and I'm so glad I put it in a Fractal Design R6 XL with decent quiet case fans spinning at low RPM. It just gently hums and is unobtrusive. This only really became apparent when I rigged up a second smaller server in a MicroServer N54L. Fire that up and I couldn't cope with the noise all day, even though it isn't exactly hugely loud. Thank goodness it only needs to come on once in a while!
 
Soldato
Joined
17 Jan 2005
Posts
8,555
Location
Liverpool
My old server wasn't particularly loud, but I could definitely hear it. It sits in the cupboard at the back of my man cave where the boiler is, and when I was in there I could hear it whirring away. It was in an ancient rattly Antec case that was probably about 20 years old, and having 9 drives constantly spinning seemed to resonate in it and through the floorboards. I was recasing my main PC anyway, so I thought I'd repurpose my old Fractal case for the server, and it's whisper quiet now.
 
Associate
OP
Joined
26 May 2008
Posts
227
Thanks for all the replies, lots and lots to think about.

I have been having a look at parts based on this discussion. I am happy to build; I have put together my own desktop PCs but never a server. Anyway, predictably I am over budget and I have not yet read into the compatibility of these parts in depth. Any thoughts on where I should look to cut costs??

Processor
Ryzen 7 3700X Eight Core 4.4GHz - £299.99
Ryzen 5 3600 Six Core 4.2GHz (Socket AM4) Processor - £179.99

Motherboard
X570-A Pro (AMD AM4) DDR4 X570 Chipset Motherboard - £159.95
Asrock X570M Pro4 (AMD AM4) DDR4 X570 Chipset mATX Motherboard - £179.99

PSU
Corsair - RM Series RM650 650W '80 Plus Gold' Modular Power Supply (CP-9020194-UK) - £87.95

Ram
Corsair - Vengeance LPX 32GB (2x16GB) DDR4 PC4-24000C16 3000MHz Dual Channel Kit - Black (CMK32GX4M2D3_ - £149.99
Kingston - HyperX Fury 32GB (2x16GB) DDR4 PC4-28800C18 3600MHz Dual Channel Kit - £169.99

Case
Be Quiet Pure Base 500 Midi Tower Case - Black - £69.95

Hard drives

SSD - Samsung - 970 EVO Plus 500GB M.2 2280 PCI-e 3.0 x4 NVMe Solid State Drive - £89.99
HDD - WD 8TB Red Pro 7200rpm 256MB Cache Internal NAS Hard Drive (8003FFBX) - £259.99 x2 = £519.98

Unraid - £120 - Pro

It's around £1,400+ with the cheaper parts, where there is a choice. I could look around for a cheaper motherboard and probably RAM.

Anyway, other things not on the list are a dedicated GPU (do I need one?), a RAID controller, and cooling - air or water? Anything else I have forgotten? Thanks!
 
Soldato
Joined
18 Oct 2002
Posts
3,512
Location
UK
Honestly I think others are better placed to comment with valid opinions as I'm probably too far removed from current kit, but I'll say:
  • Red Pro WDs are the most expensive way to get disks in. Replacing them with shucked WD Elements of the same capacity saves you £250, for example. Or, if it's more comfortable for you, save yourself £100 and get retail Seagate IronWolfs (as I think regular WD Reds are SMR)
  • Unless you want to significantly future-proof or plan on running a lot of VMs, that seems like a lot of CPU horsepower. I don't know what to recommend instead, but I have some ancient i5 and I've never had a CPU bottleneck problem running 22 dockers on my server (including most of the things you want to do) and 1 Windows VM
  • I've no idea how RAM speed/performance affects Unraid, so maybe there's a few quid to be saved on cheaper memory
  • A regular SSD has never caused any slowdown with my cache, so I'm not sure if you need the extra expense of an NVMe
  • I don't know about the CPU draw, but 650W seems like a lot for your use case. Maybe you can get away with 550W?
Notwithstanding the disks, I acknowledge on their own none of the above saves that much, but add it all up and it should bring you back under budget.
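On the PSU point: 80 Plus efficiency is only certified between 20% and 100% load, and below that range efficiency tends to fall away, which is why oversizing matters for a box that idles low. A rough illustration (the 60W idle figure is an assumption for a build like this):

```python
def load_fraction_pct(draw_w, psu_rating_w):
    """Percentage of the PSU's rated output actually being drawn."""
    return draw_w / psu_rating_w * 100

# A home server idling around 60W sits under the 20% lower bound
# of the 80 Plus test points on any of these units.
for rating in (650, 550, 450):
    print(f"{rating}W PSU at 60W idle: {load_fraction_pct(60, rating):.0f}% load")
```

So a smaller, good-quality unit keeps the idle draw closer to the PSU's efficient range.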
 
Soldato
Joined
29 Dec 2002
Posts
7,252
In this context, there is no real difference between a ‘server’ and a ‘desktop’ other than its intended use. Spec-wise you seem to have gone relatively high end/new; other than possibly Roon, I can't see anything overly demanding that suggests it's required? I mean, a used 1600AF/2600 isn't massively different in power/performance for this usage than a 3600 and is half the price; same with a B450 board (choose a 4x RAM-slot version) - you can buy a new board and a used CPU for less than a new 3600. Also, no GPU? RAM-wise it depends on what you will run; more is easy to add, but I wouldn't be buying less than 16GB sticks at this stage. UnRAID licences are upgradable and you don't need Pro for two drives, so you can save on that initially. PSU-wise you ideally want to run within the optimum range; 650W for a system that will idle at 10% of that for most of its life may not be as optimal as it could be.

Storage-wise I shuck; it's cheap and you can get better quality drives, but test them before removing them. WD Reds and Red Plus are overrated and have been for years, even before they pulled a blatant bait and switch. HGST or Toshiba were always better.
 
Associate
Joined
28 Mar 2019
Posts
1,117
Location
Channel Islands
In this context, there is no real difference between a ‘server’ and a ‘desktop’ other than its intended use.

Depends; I personally believe in server parts as being designed for continuous use. Although many desktop parts can do this, server customers are extremely reliability-focused.
I always use ECC and RAID/RAID-Z for my long-running systems, which I use for either self-hosting or backup.

Storage wise I shuck

Agreed, though, that if there is little difference, don't pay more. Have a stack of 10TB drives behind me which are beggin' for a shuckin'.
 
Soldato
Joined
29 Dec 2002
Posts
7,252
Depends, I personally believe in server parts as being designed for continuous use. Although many desktop parts can do this, the server customers are extremely reliability focused.
I always use ECC and RAID/ZRAID for my long running systems which I use for either self-hosting or backup.

Agree though that if there is little difference, don't pay more. Have a stack of 10TB drives behind me which are beggin for a shuckin.

People choose to believe in all sorts of strange and wonderful things; it doesn't always make them true or universally applicable. Anecdotal evidence accumulated over the last 30+ years of building/maintaining hardware and running it 24/7 (personally and professionally) suggests enterprise kit doesn't generally do any better long term than consumer-grade kit. Admittedly you can find bad examples of either, and retail OEMs have a nasty habit of abandoning hardware as soon as they bring a newer version out, while enterprise OEMs generally maintain a product over its lifecycle. While I sometimes choose to run ECC, dual PSUs with dual UPSs and failover network interfaces on independent switches, simply because I can, none of that is really likely to be significantly beneficial for someone who just wants a reasonably reliable home server to handle storage/docker/VMs.
 
Associate
Joined
28 Mar 2019
Posts
1,117
Location
Channel Islands
People choose to believe in all sorts of strange and wonderful things, it doesn’t always make them true or universally applicable. Anecdotal evidence accumulated over the last 30+ years of building/maintaining hardware and running it 24/7 (personally and professionally) suggests enterprise kit doesn’t generally do any better long term than consumer grade kit

It depends; I tend to select features I know will do something, and I have every reason to believe that ECC isn't some scam or exaggeration. I'd happily sacrifice the 1-5% performance gain you get from consumer RAM for the peace of mind that it won't cause problems for my file system after sitting running in my server room for 2 years.

Also, don't misunderstand: I agree that the likelihood that something will go very wrong which is preventable by this sort of kit is rare, perhaps even in line with your anecdote, but my data is irreplaceable, and that's why I have backups, RAID, ECC, etc. If disaster strikes I can buy new kit, but I can't buy back my data (assuming it's unrecoverable).
 
Associate
OP
Joined
26 May 2008
Posts
227
Thanks again for the replies. This has given me lots to think about and I have been doing just that over the weekend. Mainly thinking about whether I could shuffle or repurpose any current kit, like my desktop PC (transferring the old Nvidia 970 card from it), an i7 NUC currently used exclusively as a Roon core (music server), or my current NAS. I realised that I would be duplicating functionality with an Unraid box and my existing Synology DS918+ NAS. Therefore I could decommission the NAS and pull the hard drives. I have three 8TB WD Pro drives in there currently (probably where that choice came from earlier), and these could go into the Unraid machine and I could sell the NAS. The new Unraid machine could go into my server cupboard (attached to my UPS), and I could back up my PC, phone and tablets to the Unraid machine - as mentioned, I like the idea of Nextcloud for this aspect.

I would still like to backup the Unraid machine to some off-site cloud service, I currently use a Synology service but I know there are lots of others.

So, I think I have moved away from the idea of the Unraid machine running anything to do with my router; I would keep that separate. Ditto for the Pi-hole setup on an RPi 3B.

I am currently thinking -
  • Nextcloud server (possibly running on Ubuntu Server or similar) to back up my desktop machine, tablet, phones and laptop (especially photos)
  • Plex media server
  • Roon server - maybe, I could then sell the NUC??
I know it has been mentioned that I have gone high in terms of spec, I always like to plan for stuff I have not thought of yet, so always go over spec.
 
Soldato
Joined
29 Dec 2002
Posts
7,252
It depends, I tend to select features I know will do something, I have every reason to believe that ECC isn't some scam or exaggeration. I'd happily sacrifice 1%-5% gain in performance you get from consumer RAM for the peace of mind that I won't cause problems for my file system after sitting running in my server room for 2 years.

Also don't misunderstand, I agree that the likelihood is that something will go very wrong which is preventable by this sort of kit is a rare, perhaps even inline with your anecdote, but my data is irreplaceable, so that why I have backups, RAID, ECC, ect. If disaster strikes I can buy new kit, but I can't buy back my data (assuming it's unrecoverable).

ECC is a nice-to-have in inherently reliable technology; in circa 30 years of spec'ing and using ECC I have literally seen two or three events where it was actually of any benefit. It's a silent insurance policy that most people will never need/use; as you point out, other areas are much more likely to fail and result in data corruption/loss if a proper backup regime isn't in place. Of course it would be better if it was standard, but Intel likes to segment markets for the sake of milking its customers.

Thanks again for the replies. This has given me lots to think about and I have been doing just that over the weekend. Mainly thinking could I shuffle or re-purpose any current kit, like my desktop PC (transferring old nvidia 970card from PC), or a i7 NUC currently used an exclusive Roon core (music server), my current NAS. I realised that I would be duplicating functionality with an Unraid box and my existing Synology DS918+ NAS. Therefore I could de-commission the NAS and pull the hard drives. I have three 8tb WD Pro drives (probably where that choice came from earlier) in there currently, and these could go into the Unraid machine and I could sell the NAS. The new Unraid machine could go into my server cupboard (attached to my ups), and I could back-up my PC, phone, tablets to the Unraid machine - as mentioned, I like the idea of Nextcloud to do this aspect.

I would still like to backup the Unraid machine to some off-site cloud service, I currently use a Synology service but I know there are lots of others.

So, I think I have moved away from idea of a the Unraid machine running anything to do with my router, I would keep that seperate. Ditto for the pihole setup on a rpi3b.

I am currently thinking -
  • Nextcloud server (possibly running on ubuntu server or similar) to backup desktop machine, tablet, phones, laptop (especially photos)
  • Plex media server
  • Roon server - maybe, I could then sell the NUC??
I know it has been mentioned that I have gone high in terms of spec, I always like to plan for stuff I have not thought of yet, so always go over spec.

Doubling your spend now in anticipation that you might use that functionality, when it's likely to cost significantly less to upgrade later if needed, isn't the way I'd generally go if I was having budget issues, but if you're happy, it's your money. The networking choice makes sense. The 970 has issues with H265 decode support, and Plex only outputs transcoded content at 1080/8-bit in H264 at present, so you may want to stick to H264 or look at a P400/M2000 as mentioned. Drive-wise, as long as you have enough space to remove a drive, that would work; or just buy an additional drive to start copying to, and once it's full, remove another one from the Synology - obviously use something that verifies source and destination data. Roon-wise it would be so easy to just run it in docker; I believe at one point they did (or perhaps still do) offer one for Synology, but the advice is to VM it. That said, unofficial docker images exist and people use them, so one way or another it will work.
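On "use something that verifies source and destination data": `rsync -c` or a dedicated checksum tool will do this for you, but the underlying idea is just comparing digests on both sides. A minimal sketch of the principle (not a production tool):

```python
import hashlib
from pathlib import Path

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def copies_match(src, dst):
    """True only if both trees hold byte-identical files under the same names."""
    return tree_hashes(src) == tree_hashes(dst)
```

For multi-terabyte trees you'd hash in chunks rather than `read_bytes()`, but the principle is identical.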
 
Associate
Joined
20 Apr 2009
Posts
1,237
I know it's been said already, but another +1 for UnRAID from me.

The one thing I'd change about my UnRAID setup is the size of my SSD. I stuck in a 120GB SSD I had sitting in a box, then realised that actually it does more than I'd appreciated. I've upped it to 500GB, but really need more - especially if moving lots of media. With current prices I'd look at 1TB as a minimum. Also, I don't see an advantage in using the very fastest M.2 SSD around, as it'll always be limited by the network or the array. I could be wrong though - happy to be corrected.

I have 4x Reds in my UnRAID (Dell T20) and 8x Exos (retail) in my QNAP (TVS-872XT), and next time I'll certainly be shucking Seagate externals to get the cheap Exos.
 