Proxmox

Soldato
Joined
9 Mar 2012
Posts
10,072
Location
West Sussex, England
Am tentatively planning a new build later this year when X670 and the next Ryzen CPUs release, and thinking of putting together a home lab based on Proxmox.

Has anyone done this, and can you offer any advice with regard to using Proxmox as a ZFS NAS and GPU passthrough to a Windows VM as the two core features for the build?

Originally I was thinking I would have to use something like FreeNAS inside a Proxmox VM, but that was before realising that ZFS is built into Proxmox anyway. Is it possible to just use Proxmox as the NAS? I don't specifically want a NAS OS, as I want a home lab that can serve as a NAS while also running some VMs and Docker containers. I didn't like the VM/jails side of things in FreeNAS, which is why I wasn't considering running FreeNAS on bare metal, and I don't think it does GPU passthrough.
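From what I've read so far, sharing a ZFS dataset straight from the host might be as simple as this (just a sketch based on my reading, not something I've run; Proxmox is Debian underneath, and 'tank' is a made-up pool name):

    # Create a dataset for shared files, with cheap lz4 compression
    zfs create -o compression=lz4 tank/media
    # Samba installs on the host like any Debian box...
    apt install samba
    # ...then point a share at /tank/media in /etc/samba/smb.conf

Happy to be corrected if people share their storage a different way.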

I'm also a bit confused over the overall drive configuration, such as using a pair of SSDs or M.2 drives for the Proxmox installation in some form of RAID-Z. Should this only be big enough for the Proxmox installation, or can other stuff such as VMs be stored on it too? Can/should any SSDs be used to cache the spinning rust?

I don't need masses of storage; I was thinking a few 4TB drives, but realise I'd lose some of these to protecting the array. What would be the sweet spot for ~8TB of usable space, and could this be added to fairly easily if the requirement were to grow?
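My rough sums so far, assuming the usual RAID-Z formula of (drives minus parity) × capacity, before any filesystem overhead:

    3 x 4TB RAID-Z1: (3 - 1) x 4TB =  8TB usable, one-drive protection
    4 x 4TB RAID-Z1: (4 - 1) x 4TB = 12TB usable, one-drive protection
    4 x 4TB RAID-Z2: (4 - 2) x 4TB =  8TB usable, two-drive protection

From what I've read, a RAID-Z vdev can't be grown a disk at a time; you either add a whole second vdev to the pool or replace every disk with a bigger one, so the expansion question matters up front. Corrections welcome if I've misunderstood.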

Thanks in advance for anyone's insights...
 
Associate
Joined
14 May 2009
Posts
2,298
Sorry I can't help, but I'll watch this topic with interest. I do run a Proxmox server running several VMs, Pi-hole etc., but I'm not using it as a NAS/storage. As my hardware is a 1U rackmount server I got second hand, I guess GPU passthrough is not something I can do anyway.
 
Soldato
OP
Joined
9 Mar 2012
Posts
10,072
Location
West Sussex, England
Sorry I can't help, but I'll watch this topic with interest. I do run a Proxmox server running several VMs, Pi-hole etc., but I'm not using it as a NAS/storage. As my hardware is a 1U rackmount server I got second hand, I guess GPU passthrough is not something I can do anyway.

No worries. I looked at rackmount cases, but as this is for a small home office I've decided to repurpose my existing case (a Fractal Design midi tower) which has a fair number of 3.5" drive trays. I had some ideas for rackmount 4U cases, but as I have a sit-stand desk I'd have to build quite a bespoke desk pedestal that could accommodate the leg of the desk.

Do you have your Proxmox installed on a pair of drives in RAID-Z1, and do you have your VMs installed on these too or on separate drives?
 
Soldato
OP
Joined
9 Mar 2012
Posts
10,072
Location
West Sussex, England
Some further background...

Originally I was looking for a dual-build midi case such as a Phanteks P600S, but didn't like the fact that the second system was confined to mITX size. It would mean building the server on mATX, as there's more choice available in that size, which would leave the separate desktop build confined to mITX. Part of wanting the P600S was to have a windowed case so I could build the main system to look nice with some RGB, but this would largely not be possible with the server build in the vertical position on view, except that I could mount the GPU vertically in front of part of it. I then also thought that if this was going to house a number of disks, a windowed case wouldn't be so soundproofed.

I was then given the idea of rackmounting in a different thread in the cases forum and did quite a bit of research into rackmount (4U) cases. The Logic Case SC-415A seemed to fit the bill on value and has the option for 5 or 10 hot-swap bays depending on whether you were going to include a GPU or not. However, if I had two of these I would need to custom build a desk pedestal out of plywood to match the desk, and to get 4U cases under the desk I'd need to side mount them so each could extend back under the desk on either side of the desk leg.

Providing I can run Windows in a Proxmox VM with GPU passthrough, it no longer seems like I would need two completely separate systems.
 
Associate
Joined
14 May 2009
Posts
2,298
Do you have your Proxmox installed on a pair of drives in RAID-Z1, and do you have your VMs installed on these too or on separate drives?
I really am a noob at all this. From memory, Proxmox itself is installed on RAIDed 160GB SAS drives, the RAID being handled by the PERC controller. I used the default file system, so I suspect it's not ZFS.
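I suppose I could confirm from the host shell with something like:

    # Show what the root filesystem actually is
    findmnt /
    # And whether any ZFS pools exist at all
    zpool status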

I have a couple of other drives in the box and I've tried to share the VMs across these. The case only has 2.5" bays, so essentially all the drives are old ones I had lying around, with the exception of the two SAS ones that came with the server and a 2TB one I've recently added. The intention is probably to put ownCloud on this, but I'm a bit busy with other stuff so I've not got around to it yet :(
 
Associate
Joined
9 Jun 2004
Posts
1,400
I have used Proxmox on my home server for years. I have three volumes shared on my network via Proxmox for videos, Tvheadend recordings and general storage, so it's a great platform for a NAS.
I'm also running 8 virtual machines. I don't use ZFS so can't really give any advice on that; I understand that plenty of people do though.

You can have multiple drives for the OS and/or storage; they can be combined using LVM. You can easily specify what is stored on them too (ISO images, virtual machines, backups, etc.), and backing up VMs and taking snapshots is a doddle.
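For example, adding extra storage is a one-liner with Proxmox's storage manager (the storage ID and path here are just examples):

    # List configured storage and usage
    pvesm status
    # Add a directory on a mounted drive for backups and ISOs
    pvesm add dir backup-store --path /mnt/backup --content backup,iso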

I haven't tried passing through a GPU, but I pass through a TV tuner to a Tvheadend VM and it works flawlessly. I don't have any Windows VMs, and I suspect that a GPU would be a bit more involved.
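The basic PCI passthrough mechanics should be the same though. Roughly what it involved for the tuner (a sketch from memory; PCI addresses and the VM ID will differ, and this assumes IOMMU is enabled in the BIOS):

    # Enable IOMMU on the kernel command line in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
    update-grub
    # Find the device's PCI address
    lspci
    # Hand the device at that address to VM 100
    qm set 100 -hostpci0 02:00.0

For a GPU I believe you'd add options like pcie=1 and x-vga=1 to that hostpci line, plus blacklist the host driver, but I've not done it myself.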
 
Soldato
OP
Joined
9 Mar 2012
Posts
10,072
Location
West Sussex, England
Thanks for your insights @picnic @Buffalo2102

What I'm a bit confused about, and can't find details for, is whether Proxmox benefits from being installed on an SSD or M.2 drive, and whether that drive can/should also be used for storing VMs. If I use a couple of SSDs or M.2s in ZFS RAID-Z1, can this be partitioned to provide space for VM installs? I'm just thinking of the current limitation on the number of M.2 motherboard slots, and whether I could buy two very large M.2s; but if the Proxmox install doesn't really benefit from the extra speed, I could put that on a SATA SSD ZFS RAID-Z1 instead. If you can mix the Proxmox and VM installs on the same M.2s then I think this would be preferable, as it would leave the most SATA ports free.

For the HDDs, if these are configured in a ZFS pool, would/can the performance be enhanced with an SSD cache of some type?
 
Associate
Joined
9 Jun 2004
Posts
1,400
Yes, installing on a fast drive will definitely benefit performance.

When you install Proxmox it will partition the OS drive with a couple of boot/EFI partitions and the rest as an LVM partition. By default the OS is installed on the LVM partition, and the VM and container images go there too. I strongly recommend having some secondary storage to store the backups though.

A screenshot of mine:

[screenshot: Proxmox storage view showing the 'local', 'local-lvm' and 'storage' entries]
As you can see, the VM images and container images are on local-lvm, and the backups are on a separate storage volume.
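The same layout lives in /etc/pve/storage.cfg if you prefer text to screenshots; a stock install looks something like this (mine just has the extra backup directory added):

    dir: local
            path /var/lib/vz
            content iso,vztmpl,backup

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images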
 
Associate
Joined
8 Jul 2010
Posts
833
Location
Staffordshire
For the HDDs, if these are configured in a ZFS pool, would/can the performance be enhanced with an SSD cache of some type?

The answer to that really depends on how you'll be using the data and how much RAM you have, but the general rule of thumb is that you shouldn't bother with, or might not see much benefit from, an L2ARC if you have less than 64GB of RAM.

ZFS caches data in the "ARC" (Adaptive Replacement Cache), which is stored in RAM. When the RAM / the ARC is full and new data needs to be cached, unused data is evicted from the ARC. If you install an L2ARC (Level 2 ARC), data evicted from the ARC is stored there instead. As SSDs are slower than RAM, adding more RAM is always preferable to adding a slower SSD-based L2ARC.

One other important consideration to bear in mind with L2ARCs is that the index of the data on the L2ARC is itself stored in RAM. In other words, an L2ARC requires/takes up additional RAM, taking it away from the faster ARC, so adding an L2ARC can actually hurt performance. If you've maxed out your RAM, then it's generally recommended that an L2ARC should have about 5 to 10 times the capacity of your RAM.

When your server reboots, the L2ARC will be cleared of all cached data.
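If you want to see how your ARC is actually doing before spending money, the ZFS tools on the Proxmox host will tell you (pool and device names below are just examples):

    # Report ARC size and hit rates
    arc_summary
    # If the hit rate is poor and RAM is maxed out, adding an L2ARC is one command
    zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE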

The other type of ZFS cache is called the ZIL, or ZFS Intent Log (a dedicated ZIL device is usually called a SLOG). These are synchronous write caches, and as such only certain workloads will benefit from their presence (databases, for example). They are much smaller, at just a few gigabytes, and should be mirrored... which brings me to the nit I have to pick!

RAIDz1 = RAID 5... kind of.
Mirrored VDEVs = RAID 1.

When your server reboots, the data on the ZIL remains, just as it does on your main storage pool. But just like your main storage pool, if the ZIL disk/vdev fails, you lose all the data on it, which is why ZILs should be mirrored or have parity (RAID-Z1, 2 or 3).
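Adding a mirrored log device is a one-liner as well (again, pool and device names are only examples):

    # Attach a mirrored SLOG to hold the ZIL
    zpool add tank log mirror /dev/disk/by-id/nvme-EXAMPLE1 /dev/disk/by-id/nvme-EXAMPLE2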
 
Soldato
OP
Joined
9 Mar 2012
Posts
10,072
Location
West Sussex, England
Thanks @Buffalo2102, is that screenshot just showing pools? I'm a bit confused what the 'local' one is; is that just a completely separate VM you didn't want on local-lvm? I'd be using some NAS HDDs for some form of storage RAID.

Thanks @Doctor McNinja, I'm just at the preliminary stages of planning a new build, but waiting on X670, Zen 3 (Vermeer) CPUs and some 3200 MT/s unbuffered ECC DDR4 RAM, which I think is coming later this year.



My thoughts so far, based on the ASRock Rack X570D4U:

4x DDR4 ECC UDIMMs (up to 32GB per DIMM) - needs confirming that the motherboard/CPU support ECC error reporting

1 PCIe 4.0 x16 - GFX
1 PCIe 4.0 x1 - blocked by GFX card in x16 slot
1 PCIe 4.0 x8 - unused*

* so ideally, if I was going to have an L2ARC, I'd need an NVMe PCIe adapter here to get the next best speed after RAM with another M.2 drive, possibly a PCIe 4.0 model. I'm hoping this slot on an upcoming X670 variant becomes x16 with bifurcation support, which would then support up to 4 M.2 drives here.


M.2 (PCIe4.0* x4 or SATA 6Gb/s); Form factor: 22110/2280/2260/2242 [CPU] - Pioneer M.2 (PCIe Gen 3 x 4) APS-SE20G [TLC] size to be decided
M.2 (PCIe4.0* x4 or SATA 6Gb/s); Form factor: 2280/2260/2242 [X570] - Pioneer M.2 (PCIe Gen 3 x 4) APS-SE20G [TLC] size to be decided

When I was looking at M.2 drives I came across an article (https://www.overclock.net/forum/355...d-value-ssds-2019-1tb-phison-e12-toshiba.html) recommending those with a Phison E12 controller. I'm not sure if this is still the best, and it seems some models can have different specs that substitute the controller for a different one too. The Pioneers seem like a reasonable choice (APS-SE20G [TLC]). Now, I was thinking two of these mirrored, but does the Proxmox install process allow you to establish a software RAID by selecting the two M.2 drives during the OS install, or should the motherboard be doing this? These would be for the Proxmox OS and VMs, with backups being stored on the spinning-rust VDEV1.
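From what I've read of the installer, it can do this itself: the target-disk 'Options' dialog offers ZFS RAID levels including RAID1, and everything I've seen says ZFS wants raw disks, so motherboard/fake RAID should stay off. I'd expect the result to look something like this (a sketch of the output, not taken from a real box):

    # zpool status rpool, after installing with ZFS RAID1 across two M.2 drives
      pool: rpool
     state: ONLINE
    config:
            NAME         STATE     READ WRITE CKSUM
            rpool        ONLINE       0     0     0
              mirror-0   ONLINE       0     0     0
                nvme0n1  ONLINE       0     0     0
                nvme1n1  ONLINE       0     0     0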


SATA3 #1 - 4TB IRONWOLF NAS 5900RPM - VDEV1 (RAIDz2)
SATA3 #2 - 4TB IRONWOLF NAS 5900RPM - VDEV1 (RAIDz2)
SATA3 #3 - 4TB IRONWOLF NAS 5900RPM - VDEV1 (RAIDz2)
SATA3 #4 - 4TB IRONWOLF NAS 5900RPM - VDEV1 (RAIDz2)
SATA3 #5
SATA3 #6
SATA3 #7
SATA3 #8

~8TB usable storage on VDEV1 with two drives providing parity protection?
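My working for that, plus the create command I think it translates to (disk IDs below are placeholders):

    # 4 x 4TB RAID-Z2: (4 - 2) x 4TB = 8TB raw usable (~7.2TiB, before ZFS overhead)
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/ata-IRONWOLF1 /dev/disk/by-id/ata-IRONWOLF2 \
        /dev/disk/by-id/ata-IRONWOLF3 /dev/disk/by-id/ata-IRONWOLF4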


ZIL & SLOG: I still need to do some research here. These seem like a good idea for added protection from a power outage, for instance, but the latter seems to be recommended over the former. Presumably people don't have multiple Intel Optane drives for a SLOG, so don't you then lose the added protection of something like a RAID-Z1 on it?



Research sources:

RE: faster ECC memory, https://www.anandtech.com/show/15586/atps-ddr43200-industrial-dimms-up-to-128gb-12v-for-amd-intel

RE: primer on ZFS, https://www.servethehome.com/an-introduction-to-zfs-a-place-to-start/

RE: original mITX option: https://www.servethehome.com/asrock-rack-x570d4i-2t-amd-ryzen-server-in-mitx/ & https://forum.level1techs.com/t/asrock-x570d4i-2t/154306/73 & https://www.asrockrack.com/general/productdetail.asp?Model=X570D4I-2T#Specifications & https://www.anandtech.com/show/1545...570-motherboard-with-intels-10-gbe-controller

RE: original X470 mATX option: https://forum.level1techs.com/t/asrock-rack-x470d4u2-2t/147588/100 & https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications & https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications & https://www.servethehome.com/asrock-rack-x470d4u-amd-ryzen-server-motherboards/

RE: current X570 mATX selection: https://forum.level1techs.com/t/asrock-x570d4u-x570d4u-2l2t-discussion-thread/159996 & https://www.asrockrack.com/general/productdetail.asp?Model=X570D4U-2L2T#Specifications & https://www.asrockrack.com/general/productdetail.asp?Model=X570D4U#Specifications
 
Associate
Joined
9 Jun 2004
Posts
1,400
The screenshot shows the storage in my datacentre (Proxmox server). I have a single SSD for the OS and VM/container storage. The 'local' entry is just a directory on that SSD (/var/lib/vz) that is assigned to store container images. The 'local-lvm' is the LVM volume on the same SSD and it contains VM and container images. The 'storage' entry is a directory that I have assigned on a separate mounted HDD, where I store my VM and container backups plus OS ISO images. It's all configurable, depending on your available storage.

As for the rest of it, your build is way more complex than mine so I'm not sure I can help otherwise. I guess you are looking at getting the best performance for that Windows GPU passthrough, but for anything else it seems like overkill for a homelab/NAS (to me).

Good luck.
 
Soldato
Joined
10 Oct 2005
Posts
8,706
Location
Nottingham
We're using Proxmox at work with lots of LXC containers and some VMs. To be honest, as a virtualisation platform I much prefer vSphere, particularly on the network virtualisation side.

I may be slightly biased though, as I don't have a high opinion of the overall solution being put in on the containers.
 