Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

Soldato
Joined
29 Dec 2002
Posts
7,253
Because power consumption doesn't scale linearly with RPM, at a guess. Either way, 15k drives in a home environment probably aren't fun times unless you have them in an outbuilding like a garage.
 
Associate
Joined
18 Aug 2020
Posts
145
Location
Watford, UK
There are now 576 pages, so can I join Cyber-Mav in asking for a short digest of the max configs available for these beauties?

He has a Gen7; I have two Gen8 servers, for which I've just purchased Intel Core i3-3240 CPUs. The BIOS notes on HPE's site say it's supported, so I'm hoping it's compatible and a big improvement. :confused: :(
I'm thinking of the max RAM you've been able to cram into them, BIOS updates, and anything else required/available/possible, maybe even from non-official but confirmed sources?
Would appreciate it.
 
Soldato
Joined
29 Dec 2002
Posts
7,253
Anyone converted the top 5.25" bay to a hot-swap 4x 2.5" drive thingy?

Yes, and it’s not that simple, unless you like messing about or don’t mind being capped at less than a quarter of your headline speed.

It’s a bog-standard 5.25” bay, so you need a way to provide an additional SAS port or 4 SATA ports (and power). You can use the eSATA for one and the internal SATA for another, which leaves the internal PCIe for a 2-port card, or you just buy a 4-port card. Here’s where your problems start... each of those 4 drives should be capable of pulling 500MB/s R/W numbers. If you go with a 4-port card, that PCIe 2.0 x16 slot will be limited by the card’s form factor: most cheap cards are designed for PCIe 3.0 x1, which is roughly 1GB/s in each direction, or more than enough for 4 mechanical drives in the real world, where sequential throughput is usually less relevant. PCIe 2.0 x1 is supported, but that’s roughly 500MB/s in each direction. While that’s fine for 1 SSD, or a few mechanicals, it’s not great for 4 SSDs in sequential terms, though latency will be fine.
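Rough numbers to illustrate the bottleneck, as a quick Python sketch (nominal per-lane figures and an assumed ~500MB/s per SSD; real-world throughput will be a bit lower):

```python
# Rough sanity check: usable one-direction bandwidth of common PCIe links
# versus the sequential demand of four SATA SSDs (~500 MB/s each).
# Nominal per-lane figures; real-world throughput will be a bit lower.
PCIE_PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def link_bandwidth(gen: str, lanes: int) -> int:
    """Approximate one-direction bandwidth of a PCIe link in MB/s."""
    return PCIE_PER_LANE_MB_S[gen] * lanes

drives, per_drive = 4, 500            # four SSDs, ~500 MB/s sequential each
demand = drives * per_drive           # 2000 MB/s aggregate

for gen, lanes in [("2.0", 1), ("3.0", 1), ("2.0", 4)]:
    bw = link_bandwidth(gen, lanes)
    verdict = "fine" if bw >= demand else "bottleneck"
    print(f"PCIe {gen} x{lanes}: ~{bw} MB/s -> ~{bw // drives} MB/s per SSD ({verdict})")
```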

So then it becomes a question of money and performance. A cheap 2TB NVMe drive starts at £150ish; add a x4 or full-slot adapter for under £10 and that gives you 2TB of sequentially fast storage at ~2GB/s. It’s £100+ for a 4x 2.5” backplane plus card/cables; you can add more drives, but it’s slower and the drive £/GB isn’t that different. If you use the internal/external SATA options you can get higher bandwidth (I think one of the 3rd-party BIOSes brought higher bandwidth on the internal port?), but it’s still slower.
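For a rough feel of the trade-off, the same sum in a few lines of Python (prices as ballparked above; the £/TB figure for 2.5” SATA SSDs is a placeholder assumption, not a quoted price):

```python
# Ballpark cost comparison of the two routes above. The £75/TB figure for
# 2.5" SATA SSDs is a placeholder assumption, not a quoted price.
nvme_route = 150 + 10                 # £: cheap 2TB NVMe + x4/full-slot adapter
backplane_overhead = 100              # £: 4x 2.5" backplane + card/cables, before any drives
sata_ssd_per_tb = 75                  # £/TB, assumed
backplane_route = backplane_overhead + 2 * sata_ssd_per_tb  # 2TB spread over the bays

print(f"NVMe route:      £{nvme_route} for 2TB at ~2GB/s sequential")
print(f"Backplane route: £{backplane_route} for the same 2TB, capped by the PCIe/SATA links above")
```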

It all depends on the workload, but at face value going NVMe is both cheaper and faster.
 
Don
Joined
19 May 2012
Posts
17,179
Location
Spalding, Lincolnshire
Here’s where your problems start... each of those 4 drives should be capable of pulling 500MB/s R/W numbers. If you go with a 4-port card, that PCIe 2.0 x16 slot will be limited by the card’s form factor: most cheap cards are designed for PCIe 3.0 x1, which is roughly 1GB/s in each direction, or more than enough for 4 mechanical drives in the real world, where sequential throughput is usually less relevant. PCIe 2.0 x1 is supported, but that’s roughly 500MB/s in each direction. While that’s fine for 1 SSD, or a few mechanicals, it’s not great for 4 SSDs in sequential terms, though latency will be fine.

And don't be tempted to get something like an HP P222 Smart Array. I tried one in a Gen7 and it was impossible to keep cool (well, unless you fancy ghetto-modding some noisy server fans in).
 
Don
Joined
19 May 2012
Posts
17,179
Location
Spalding, Lincolnshire
Hmm, how about an LSI 9211-8i? Could plug the internal drive bay's SAS connector into the card and get SATA 3 speeds, then have one mini-SAS port free for another 4-drive rack at the top?

It's a similar idea to the one I had with the P222 (although I was going to use that for hardware RAID of the internal drives). I think even with the less complicated LSI card you will struggle to keep it cool - it's right up against the side of the chassis in the Gen7, so other than cutting a hole in the side, there's not much option for cooling.

I tried (I think it was) a 60mm fan mounted at the front blowing towards the card, and it made little difference as it didn't move enough air.
 
Soldato
Joined
30 Jul 2005
Posts
19,434
Location
Midlands
Just googled N40L and SAS cards and came up with loads of people on home server forums who have done it. Yes, it's probably going to run hot, but I'm not sure it's to the point of crashing. Also, why is yours up against the side panel - did you not put it in the x8 slot that's next to the x16 slot? Or am I being deceived here, and the x8 slot (at least I think it's x8 - got to be x4 or more in the N40L) isn't fully wired for x8 speed but something lower?
 
Don
Joined
19 May 2012
Posts
17,179
Location
Spalding, Lincolnshire
Just googled N40L and SAS cards and came up with loads of people on home server forums who have done it. Yes, it's probably going to run hot, but I'm not sure it's to the point of crashing. Also, why is yours up against the side panel - did you not put it in the x8 slot that's next to the x16 slot? Or am I being deceived here, and the x8 slot (at least I think it's x8 - got to be x4 or more in the N40L) isn't fully wired for x8 speed but something lower?

There are two usable slots - the x16 slot is nearest the side, and then a x1 slot (with an offset PCIe x4 further down for the remote access card).

When I tried the P222 it was running at 90°C without a cooling fan, and ~85°C with a front-mounted fan. SAS JBOD cards should run a little cooler than a RAID card like the P222, but still uncomfortably hot imo.

There is probably room to fit a 40mm fan directly to the SAS card heatsink, but then it's a whiny little 40mm fan that would be too noisy for my use case.
 
Soldato
Joined
29 Dec 2002
Posts
7,253
Hmm, how about an LSI 9211-8i? Could plug the internal drive bay's SAS connector into the card and get SATA 3 speeds, then have one mini-SAS port free for another 4-drive rack at the top?

To what end? Replacing the existing SAS controller for the existing drives and running it all down one PCIe slot with twice as many drives doesn’t immediately strike me as an obvious advantage, other than the 9211 supporting TRIM iirc. I’m not saying you can’t, but what’s your end game? It’ll still run stupidly hot, with additional cables restricting airflow. The LSI cards really need airflow - the first mod most desktop users do is a fan bracket. I say that as someone who runs two disk shelves from an H200e (9200) and has H200/310s in use on multiple boxes; cooking drives is not my idea of fun.
 
Soldato
Joined
30 Jul 2005
Posts
19,434
Location
Midlands
Heat's not an issue for me; I don't bother with noob mods like slapping fans onto cards. I can sort a custom heatsink if needed and machine it to run the full length of the card if I have to.
The Gen7 MicroServer's onboard SATA is rated at SATA 2 and can't even hit much above SATA 1 speeds with the drives I've used in it.
A 9211-8i will offer SATA 3 and does do the full speeds. A PCIe 2.0 x8 slot is enough bandwidth, since all 8 drives won't even be powered on at the same time, let alone transferring data simultaneously.
As said before, I'm not the first to attempt it - plenty of others run it, on the ServeTheHome forums etc.
I've had 6x 3.5" drives in my Gen7 before with no heat issues, since they were mostly 5400rpm drives; the OS drive was a VelociRaptor.
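Quick sanity check on that bandwidth point, with nominal figures (the per-drive speeds below are assumed typical values, not measurements):

```python
# Quick sanity check on the "PCIe 2.0 x8 is enough" point. Nominal figures,
# and the ~180 MB/s per mechanical drive is an assumed typical speed.
link = 500 * 8          # ~4000 MB/s usable, PCIe 2.0 x8, per direction
sata3_ceiling = 550 * 8 # ~4400 MB/s if all eight ports somehow ran flat out
spinners = 180 * 8      # ~1440 MB/s for eight typical 3.5" mechanical drives

print(f"x8 link ~{link} MB/s | 8x SATA3 ceiling ~{sata3_ceiling} MB/s | 8 spinners ~{spinners} MB/s")
# Only an all-SSD array doing simultaneous sequential transfers would brush the limit.
```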

It's well known that Avalon doesn't like the MicroServer, but just telling people not to bother with mods and to go and get a bigger, proper server isn't really what this thread is about.
 
Soldato
Joined
30 Jul 2005
Posts
19,434
Location
Midlands
It's just some fun messing about with it, especially when trapped at home due to lockdown. At some point SSDs will be so cheap you could rack up a seriously huge number of them in the Gen7 with some interesting wiring.
 
Soldato
Joined
18 Oct 2002
Posts
4,073
Location
cidade maravilhosa
See above: with an adapter and a suitable OS, yes, but no direct booting.

As above, not for boot, unless you come up with a workaround for the missing UEFI BIOS - HP didn't roll UEFI support out until the Gen9 server SKUs, from memory.

This was my fear. I have a few NVMe drives kicking around that would be good as boot drives, but ho hum - I'll see if I can get ESXi booting from USB and use the NVMe drives to host systems.
 
Soldato
Joined
29 Dec 2002
Posts
7,253
Heat's not an issue for me; I don't bother with noob mods like slapping fans onto cards. I can sort a custom heatsink if needed and machine it to run the full length of the card if I have to.
The Gen7 MicroServer's onboard SATA is rated at SATA 2 and can't even hit much above SATA 1 speeds with the drives I've used in it.
A 9211-8i will offer SATA 3 and does do the full speeds. A PCIe 2.0 x8 slot is enough bandwidth, since all 8 drives won't even be powered on at the same time, let alone transferring data simultaneously.
As said before, I'm not the first to attempt it - plenty of others run it, on the ServeTheHome forums etc.
I've had 6x 3.5" drives in my Gen7 before with no heat issues, since they were mostly 5400rpm drives; the OS drive was a VelociRaptor.

It's well known that Avalon doesn't like the MicroServer, but just telling people not to bother with mods and to go and get a bigger, proper server isn't really what this thread is about.

I try not to judge your random hypothetical questions across the various sub-forums, but you’re way off base on this one.

First up - to use your terminology - any noob who has the capacity to machine anything should hopefully grasp that while increased mass and surface area buy you thermal capacity and time to store heat, without airflow to dissipate it you’re only delaying the inevitable as the ambient case temperature increases. You’re either going to need a disproportionately large heatsink or significant airflow, because once the case ambient ramps up it gets nasty. This is why people 3D-print fan brackets. Have you ever put a finger on the heatsink of an LSI HBA while it’s being used in anger? It’s not usually a pleasant experience.
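To put a rough number on why mass alone doesn’t save you, a toy lumped-thermal sketch in Python - every figure in it is an illustrative assumption, not a measurement of any particular card:

```python
# Toy lumped-thermal model of the point above: extra heatsink mass buys time,
# but without airflow the steady state is set by power x thermal resistance,
# not by mass. Every figure here is an illustrative assumption.
def heatsink_temp(power_w, r_c_per_w, heat_cap_j_per_c, t_amb=35.0, minutes=60, dt=1.0):
    """Euler-integrate the heatsink temperature and return the final value (C)."""
    temp = t_amb
    for _ in range(int(minutes * 60 / dt)):
        # net heat in = power dissipated - heat lost to the surrounding case air
        temp += (power_w - (temp - t_amb) / r_c_per_w) / heat_cap_j_per_c * dt
    return round(temp, 1)

# ~10 W HBA with a small heatsink (150 J/C): poor passive convection vs. forced airflow
for label, r in [("no airflow, R=5.0 C/W", 5.0), ("with airflow, R=1.5 C/W", 1.5)]:
    print(f"{label}: after 1h ~{heatsink_temp(10, r, 150)} C "
          f"(steady state ~{35 + 10 * r:.0f} C)")
```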

What you state is ‘widely known’ about my dislike of MicroServers seems to ignore four minor issues, namely the four MicroServers I own. My first N36L was maxed out with 10 drives: 4x 2TB WD Greens, 4x 2.5” SSDs in a backplane (which is exactly what you asked about), eventually connected up using an old H200 (almost exactly what you asked about), plus another two drives for VM pass-through. My comments are based on actual first-hand experience of what you’re theorising about, just as Armageus' are.

As far as buying a bigger server goes, did you even read my post? I offered a suggestion for getting maximum performance for minimal cost based on what you already have. I use MicroServers for what they are good at: NAS duties and light VM/docker hosts. When you start considering making them into something they aren't suited to, anyone with any common sense whatsoever would take a step back and ask whether you can achieve a better result for the same money. Well, an H200e is £20, a 2m 8088>8088 mini-SAS cable is £6 and a SAS6 disk shelf is £40-120 depending on what you want; they generally take SATA or SAS and don't require interposers (though if you want SSDs, consider a newer HBA that supports TRIM, depending on how you plan on interacting with the drives). My latest SAS6 12x LFF 2U shelves (with drives and caddies) cost me £60 delivered each and arrive later this week.

I got 6x 3.5" drives all under 40°C, and was also tempted to get another broken Gen7 and use it as a DAS with an appropriate card, so you're not alone in wanting to mod it for fun.

Strangely, I have walked this path as well. It works, but it's disproportionately expensive relative to the number of drives you get. I'll ignore the HBA, as you will need one no matter what and they range from £20 upwards. You need an extra MicroServer, an 8088 male to 8088 male cable, an 8088 female to 8087 female adapter and then a short 8087 male to 8087 male cable for the backplane to get 4 drives; that's got to be circa £40 in cables for 4 bays, plus the cost of the MicroServer. Let's say you pick up the cheapest N36L in eBay's sold history (last 3? months) - that's another £50, so now you're looking at £90 for 4 bays, or £22.50/bay. Compare that to a £60 12-bay shelf and a £6 8088 to 8088 mini-SAS cable, coming out at £66, or £5.50/bay. You can drop the MicroServer cost a bit by just going 8088 to 8087 and passing it through the rear slot; it's not as neat, but it is the cheapest option if you already have a broken MicroServer and just need 4 bays.
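Spelling the per-bay sums out (prices as quoted above; the HBA is excluded from both routes since you need one either way):

```python
# The per-bay sums above, spelled out. Prices as quoted in the post;
# the HBA is excluded from both routes since you need one either way.
def per_bay(label, parts_gbp, bays):
    total = sum(parts_gbp.values())
    print(f"{label}: £{total} / {bays} bays = £{total / bays:.2f} per bay")

per_bay("Spare MicroServer as DAS", {"donor N36L": 50, "8088/8087 cabling": 40}, bays=4)
per_bay("12-bay SAS6 disk shelf", {"shelf": 60, "8088 to 8088 cable": 6}, bays=12)
```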

This was my fear. I have a few NVMe drives kicking around that would be good as boot drives, but ho hum - I'll see if I can get ESXi booting from USB and use the NVMe drives to host systems.

ESXi will boot from a USB stick without issue; you can then use the NVMe as storage for your VMs as you would normally. This approach works well.
 