AMD announce EPYC

Man of Honour
Joined
20 Sep 2006
Posts
33,991
The issue AMD have is the big partnerships and sheer slogging power that Intel have, which AMD simply cannot match. If you look at the server manufacturers that actually bother to let you configure online, maybe 2 out of 10 will offer AMD and the rest will be Intel Scalable. Have a look at the Dell site, for example.

As a geek I really want more AMD to work with, but from a business standpoint Intel will still win for the foreseeable future. It's a shame really.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
The issue AMD have is the big partnerships and sheer slogging power that Intel have, which AMD simply cannot match. If you look at the server manufacturers that actually bother to let you configure online, maybe 2 out of 10 will offer AMD and the rest will be Intel Scalable. Have a look at the Dell site, for example.

As a geek I really want more AMD to work with, but from a business standpoint Intel will still win for the foreseeable future. It's a shame really.

It's the geek in me that tipped the decision; let's be honest, an Intel-to-Intel migration would have been easier, but where is the challenge in that? When all is said and done, I genuinely believe that the correct decision won out. If Milan smashes it, I can take a CPU out of one chassis, stick it in a spare socket on one of the other servers, and then populate a single chassis with a couple of Milan SKUs in the same cluster.

If I do that I can still have all the good stuff like HA, DRS etc., but with a server tier of higher-clocked, better-performing SKUs where I can put the servers that benefit from it.

Interestingly, I did find a load of new options in VMware under CPU provisioning that don't exist on the Intel hosts but which seem to improve performance when selected. I'm on my way home so I can't offer a screenie right now, but I will screenshot the new CPU options for anybody interested.
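If anybody wants to script a sanity check before mixing Rome and Milan SKUs in the same cluster, a rough pyVmomi sketch along these lines lists each host's CPU model per cluster. The vCenter address, credentials and names below are placeholders, so treat it as a starting point rather than a finished script:

```python
# Rough pyVmomi sketch: list each host's CPU model per cluster, handy
# before dropping mixed Rome/Milan SKUs into the same cluster.
# The vCenter address, credentials and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(cluster.name)
    for host in cluster.host:
        # cpuPkg[0].description carries the marketing name, e.g. "AMD EPYC 7402P"
        print(" ", host.name, "-", host.hardware.cpuPkg[0].description)

Disconnect(si)
```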
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Yes please if you don’t mind. Did you sort your Veeam issue?

Nope, still isn't sorted, but I changed focus a tad and plan to jump back on it when I have some spare time :) In terms of those options, everything from CPU mask down exists when editing VMs on my EPYC hosts but not on my Intel hosts. I don't necessarily think these options are EPYC-specific though? Perhaps they only exist on newer chips?

 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
@ChrisD. Talk to me about Host Cache, VMDK cache etc. I'm trying to work out where best to utilize my 6TB worth of NVMe. Can they be used to boost performance on disk operations back to, for example, Exchange, effectively acting as a flash write-back cache for datastores, or are they more general-use across the whole host? How would you utilize them? I plan on running a few quick tests on some servers to see the impact of our SAN versus something more modern, but beyond that I'm not sure whether they offer any significant performance advantages. Also, I can't answer why the options are not there on the other hosts; all have VM hardware version 15 and all run ESXi 6.7 U2. Could be EVC, so I will check that!

Basically I'm trying to work out whether to just fill the servers with NVMe, set it up in some kind of RAID and expose it to ESXi as local test datastores on each host. It's possible then to use that storage for snapshots, but it seems a waste to use it only for that. My preference would be performance improvements over extended storage that isn't shared between hosts.

I did my own looking last night and went through the performance best practices guide for 6.5, and noticed talk of vFRC (page 32 of the guide), which suggests significant disk performance improvements can be made by adding a virtual flash infrastructure layer. This all sounds very interesting, but in practice what works well and what doesn't?

Oh, and one more thing: if I did expose a local flash infrastructure layer, is there then any impact on DRS and vMotion? So many questions!
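For the EVC check, something like this pyVmomi snippet should print each cluster's current EVC baseline; connection details are placeholders again, and if I've read the API docs right, currentEVCModeKey comes back unset when EVC is off:

```python
# Hedged pyVmomi check of each cluster's EVC baseline. EVC masks newer CPU
# features, which could explain options showing up on some hosts only.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    # summary.currentEVCModeKey is unset when EVC is disabled on the cluster
    print(cluster.name, "EVC mode:", cluster.summary.currentEVCModeKey or "disabled")

Disconnect(si)
```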
 
Man of Honour
Joined
20 Sep 2006
Posts
33,991
@ChrisD. Talk to me about Host Cache, VMDK cache etc. I'm trying to work out where best to utilize my 6TB worth of NVMe. Can they be used to boost performance on disk operations back to, for example, Exchange, effectively acting as a flash write-back cache for datastores, or are they more general-use across the whole host? How would you utilize them? I plan on running a few quick tests on some servers to see the impact of our SAN versus something more modern, but beyond that I'm not sure whether they offer any significant performance advantages. Also, I can't answer why the options are not there on the other hosts; all have VM hardware version 15 and all run ESXi 6.7 U2. Could be EVC, so I will check that!

Basically I'm trying to work out whether to just fill the servers with NVMe, set it up in some kind of RAID and expose it to ESXi as local test datastores on each host. It's possible then to use that storage for snapshots, but it seems a waste to use it only for that. My preference would be performance improvements over extended storage that isn't shared between hosts.

I did my own looking last night and went through the performance best practices guide for 6.5, and noticed talk of vFRC (page 32 of the guide), which suggests significant disk performance improvements can be made by adding a virtual flash infrastructure layer. This all sounds very interesting, but in practice what works well and what doesn't?

Oh, and one more thing: if I did expose a local flash infrastructure layer, is there then any impact on DRS and vMotion? So many questions!
I'd suggest having a read of https://docs.vmware.com/en/VMware-v...UID-F8D3521A-5BE8-4BE2-8486-837228F29997.html

It sounds like VMware vSphere Flash Read Cache might be what you're after.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
I'd suggest having a read of https://docs.vmware.com/en/VMware-v...UID-F8D3521A-5BE8-4BE2-8486-837228F29997.html

It sounds like VMware vSphere Flash Read Cache might be what you're after.

I think I am going to go with vFRC and see what it is all about. It looks like it can have an impact on vMotion, but according to all my reading you can simply discard the cache and recreate it if something goes wrong with the local flash cache storage, so to me there don't really seem to be any downsides. Faster disk performance for the cost of some NVMe sounds good.
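For anyone curious, the per-VMDK reservation can apparently be scripted too. This is only a hedged pyVmomi sketch based on the vFlashCacheConfigInfo property the vSphere 5.5+ API documents for vFRC; the VM name, reservation and block size are made up, so verify against your own build:

```python
# Hedged pyVmomi sketch: put a vFRC read-cache reservation on a VM's first
# virtual disk. VM name, reservation and block size are placeholders;
# vFlashCacheConfigInfo is the vSphere 5.5+ API property for vFRC.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exchange01")  # placeholder VM name

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
    reservationInMB=10240,  # 10 GB of host vFlash carved out as read cache
    blockSizeInKB=8)        # match block size to the workload's typical IO

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)])
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```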
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
So one of my servers doesn't work; it doesn't even POST. Turns out it was shipped with the wrong BIOS. To right this wrong I'm being sent a couple of Naples chips to play with as well. I'm only planning to use them for BIOS updates, but I might do a little head-to-head, Naples vs Rome.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Job's a good'un: EPYC cluster deployed, Intel cluster decommissioned and all VMs migrated. Performance so far is very good; all that's really left now is throwing extra resources at a few of the slower servers.

Really liking Gen10 iLO of all things; it's been years since I've used it and it's come a long way. Still some work to do on vFRC, but that's a job for another day. I never thought I would see the day when I would be running AMD in production, but that day is here and I am impressed.
 
Man of Honour
Joined
20 Sep 2006
Posts
33,991
Job's a good'un: EPYC cluster deployed, Intel cluster decommissioned and all VMs migrated. Performance so far is very good; all that's really left now is throwing extra resources at a few of the slower servers.
Just be careful that you don't overprovision, as you may make things worse.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Just be careful that you don't overprovision, as you may make things worse.

Always gotta be careful. I've seen people overprovision memory on things like Exchange and watched it fall apart pretty quickly.

I've actually been reserving memory and keeping provisioning as it was on the old hosts, to gauge any uplift before making changes. I've already managed to transform the speed of one of our older DB servers, which is flying now; something I just didn't have the resources to do before.
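The reservation bit is scriptable as well; a minimal pyVmomi sketch (VM name and connection details are placeholders) that reserves all of a VM's configured memory so it never balloons or swaps would look something like:

```python
# Minimal hedged sketch: reserve all of a VM's configured memory via pyVmomi.
# VM name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db01")  # placeholder VM name

# Reserving the full configured memory stops ballooning and host swapping
spec = vim.vm.ConfigSpec(memoryAllocation=vim.ResourceAllocationInfo(
    reservation=vm.config.hardware.memoryMB))
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```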
 
Soldato
Joined
31 Oct 2002
Posts
9,860
The issue AMD have is the big partnerships and sheer slogging power that Intel have, which AMD simply cannot match. If you look at the server manufacturers that actually bother to let you configure online, maybe 2 out of 10 will offer AMD and the rest will be Intel Scalable. Have a look at the Dell site, for example.

As a geek I really want more AMD to work with, but from a business standpoint Intel will still win for the foreseeable future. It's a shame really.

Don't mention the Dell website here; you'll get an army of posters claiming AMD is well represented there, LOL.
 
Man of Honour
Joined
30 Oct 2003
Posts
13,251
Location
Essex
Don't mention the Dell website here; you'll get an army of posters claiming AMD is well represented there, LOL.

I wouldn't say they are particularly well represented at any OEM; they are perhaps a little further ahead than they thought they might be, given the Intel shortages, so that has probably helped them out. The thing is, the OEMs will make a lot more of what their customers want, and broadly, for a number of reasons, that still sits in Intel's favour even if the tides are starting to shift. From an OEM point of view, and forgetting all the anti-competition payments etc., you need to cater to where your customers are, and the vast majority of businesses out there still run Intel VM farms with all the good stuff Chris and I have been talking about above.

If you could run a hybrid farm successfully and still keep VM technologies such as vMotion, HA and DRS working between vendors, then the uptake would IMO be much faster, but the truth is you can't, or at least not yet. A stat I saw the other day (I'll try to dig out the link) might help explain: it said the average farm is 300+ server VMs, which in turn is on average more than x hosts. I really will try to find the slide; it could have been at an event I went to a few weeks back (Digital Transformation Expo), and it could even have been presented by Veeam or another backup vendor, so I will try to validate what I'm saying, I just need to rack my brain a bit.

Anyway, if the stats are to be believed, you already have this massive VM footprint all running on Intel, where you can't simply scale out the same cluster with AMD as well. You either have to start a new cluster and manage two (where you cannot one-touch migrate between them), or you just buy Intel, chuck it into the cluster and scale out. This is what AMD are up against. Nobody with any volume of business uses physical boxes as servers these days, it just doesn't make sense; it's all hosts for VMs. If you have 30 Intel hosts and you need horsepower, are you going to buy a whole new cluster of, say, 25 new AMD hosts and kick off a migration project that includes downtime, or might you just take the easier option of adding another Intel host and clicking a few buttons to join it to an already operational cluster?

For me, I have a small farm of <100 servers across two locations and a co-lo in a DC, so I have a bit more flexibility when it comes to refreshes, and the numbers (because we are an SME) do hold water, whereas in large organisations with heavy Intel investment that may not hold true yet. I also have the luxury of support from stakeholders in the choices I make, so I just need to be confident in my ability to deliver with the backing of my team, whereas in other businesses choices like which vendor to use for the hosts might be more heavily scrutinised, especially if that comes with kicking off a migration project like the one mentioned above. I do slowly see the tide changing, but in a space with five-year minimum refresh cycles I can sort of understand why AMD are still not as well represented as they deserve to be at the OEMs. It is also true that not all IT managers are equal; some aren't even interested in tech, so in many organisations, even with the current TCO, it really might not be a super easy sell.
 