Linus Tech Tips has an interesting video about the problems they had when they maxed out the PCIe lanes on their EPYC server with umpteen NVMe drives.
TL;DR: there are major performance issues; it's all too fast and the bandwidth is overloaded. I'm wondering if Intel solutions have the same problems?
Put simply, they built a system with mass storage that is faster than RAM.
You need to watch the first part of this build video to understand what is going on. It's simply a case of EPYC CPUs having so many PCIe lanes that it's possible to RAID enough NVMe drives together that the array is faster than DDR4 can keep up with, causing transfers between memory and drives to stall, because the memory cannot keep up with the speed of the drives.
They RAID 24 NVMe drives and were pushing storage transfer rates of nearly 30 GB/s. That's faster than the memory can keep up with, or at least it's roughly equivalent to a single channel of DDR4-3800.
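A quick sanity check on where that "~30 GB/s is about DDR4-3800" comparison comes from, assuming it refers to one memory channel (a standard DDR4 channel is 64 bits, i.e. 8 bytes, wide):

```python
# Per-channel DDR4 bandwidth = transfer rate x 8 bytes per transfer.
# 3800 MT/s on a single 64-bit channel:
single_channel_ddr4_3800 = 3800e6 * 8 / 1e9  # GB/s

print(single_channel_ddr4_3800)  # 30.4 GB/s, close to the ~30 GB/s they were pushing
```

So the figure lines up with one channel's theoretical peak, not the whole 8-channel memory system.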
Intel's CPUs don't have anywhere near as many PCIe lanes, so it's not possible to RAID 0 anywhere near as many drives, and Intel CPUs can't possibly get anywhere near the speed needed to cause the memory to stall.
If you actually watch the video, they explain what the problem is; the cure is to slow the transfer rates down, to a speed that's probably still faster than Intel can manage.
This is the video where they built this monster....
Edit: if they used the NVMe array for page filing, they would get higher performance than going through RAM.
Edit 2: the CPU has 128 PCIe 4.0 lanes; 24 NVMe drives at 4 lanes each use 96 lanes. Memory is 4 TB, 8-channel, at 3200 MT/s.
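The lane math and the theoretical ceilings can be checked with a quick back-of-the-envelope calculation, using the published PCIe 4.0 rate (16 GT/s per lane with 128b/130b encoding) and the DDR4 channel width (64 bits); these are theoretical peaks, and real sustained throughput on both sides is lower:

```python
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding -> ~1.97 GB/s per lane
PCIE4_GBPS_PER_LANE = 16e9 * 128 / 130 / 8 / 1e9

drives = 24
lanes_per_drive = 4
lanes_used = drives * lanes_per_drive              # lanes consumed by the array
nvme_peak = lanes_used * PCIE4_GBPS_PER_LANE       # aggregate PCIe ceiling, GB/s

# DDR4-3200, 8 channels, 64-bit (8-byte) bus per channel
ddr4_peak = 3200e6 * 8 * 8 / 1e9                   # GB/s

print(lanes_used)        # 96 lanes, matching the post
print(round(nvme_peak))  # ~189 GB/s theoretical PCIe ceiling
print(round(ddr4_peak))  # ~205 GB/s theoretical DDR4 ceiling
```

On paper the 8-channel memory still edges out the array's PCIe ceiling, which is why the ~30 GB/s they actually measured is better compared against a single channel than against the whole memory system.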