AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Soldato
Joined
30 Jun 2019
Posts
7,875
I think for every 4 CUs there will be a ~10% increase in performance over a 5700 XT, based on the CU count and performance difference between the RX 5700 (36 CUs) and the 5700 XT (40 CUs).

That would mean a 64 CU GPU could be up to 60% faster than a 5700 XT (40 CUs).
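
Putting that into a quick sketch (just my linear-scaling assumption extrapolated from the 5700 to 5700 XT step, not a measurement):

```python
# Napkin extrapolation: assume every extra 4 CUs over a 5700 XT adds ~10%,
# based on the RX 5700 (36 CUs) -> 5700 XT (40 CUs) step. Purely a linear guess.

BASE_CUS = 40            # RX 5700 XT
GAIN_PER_4_CUS = 0.10    # assumed ~10% per extra 4 CUs

def estimated_uplift(cus: int) -> float:
    """Estimated speed-up over a 5700 XT for a hypothetical larger RDNA part."""
    return (cus - BASE_CUS) / 4 * GAIN_PER_4_CUS

for cus in (48, 56, 64, 80):
    print(f"{cus} CUs: ~{estimated_uplift(cus):.0%} faster than a 5700 XT")
```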
 
Last edited:
Soldato
Joined
12 May 2014
Posts
5,235
AMD has just had a patent granted for Adaptive Cache Reconfiguration Via Cluster. The filing date was March 2019, so this could very well be in RDNA2.
https://www.freepatentsonline.com/y2020/0293445.html

Nerdtechgasm tweets about it: https://twitter.com/nerdtechgasm/status/1306917807880200193
Alright, basic gist: it makes the new shared L1 in RDNA more effective (higher hit rates) by dynamically clustering CUs together to pool their cache lines, minimising duplication of cached data (more effective capacity) and increasing the hit rate. This reduces pressure on the LLC/L2.

This makes the L2 more effective, thereby reducing pressure on memory bandwidth. This kind of design is enabled by the change in RDNA 1, with a 128KB shared L1 for the CUs within a shader array.

I still think 128KB is too small for so many CUs, so one of the key changes I expect to see in RDNA 2 is at least 2x the L1 size. Combined with these dynamic L1 changes, it would be a much more potent cache system, with both L1 & L2 effective capacity & hit rate up.

Think of it as a multiplier for the L1 & L2 cache: less duplication makes them "effectively bigger", and a higher hit rate makes them more efficient. Both of these improve memory bandwidth efficiency. I like seeing these types of uarch changes: work smarter, not harder.
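
To make the pooling idea concrete, here is a toy model (my own illustrative numbers, not figures from the patent or the paper) of how de-duplicating lines across clustered L1s raises effective capacity:

```python
# Toy model of clustered, de-duplicated L1s. Illustrative numbers only:
# 128 KB of shared L1 per shader array (RDNA 1), plus an assumed fraction of
# each array's lines that duplicate lines already held by another array.

L1_KB_PER_ARRAY = 128

def private_effective_kb(num_arrays: int, overlap: float) -> float:
    """Unique data held when each array caches independently: duplicated
    lines burn capacity in every array that holds them."""
    raw = num_arrays * L1_KB_PER_ARRAY
    wasted = (num_arrays - 1) * L1_KB_PER_ARRAY * overlap
    return raw - wasted

for overlap in (0.0, 0.25, 0.5, 0.75):
    private = private_effective_kb(2, overlap)
    pooled = 2 * L1_KB_PER_ARRAY  # clustered arrays store each line only once
    print(f"{overlap:.0%} overlap: {private:.0f} KB effective (private) "
          f"vs {pooled} KB effective (clustered)")
```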

Here is a paper on the concept https://adwaitjog.github.io/docs/pdf/sharedl1-pact20.pdf

A snippet from the paper
Section 5.2
DynEB enhances IPC by 22% on average over the private baseline. (Graph shows a max of just below 60% increase in IPC).

Overall, DynEB improves performance of all evaluated applications by 9%.

Therefore, DynEB improves performance-per-watt by 9% and the energy efficiency (performance-per-energy) by 20%, on average across all evaluated applications. For the shared-friendly applications, DynEB maintains the total power consumption (similar to baseline) and saves energy by 18%. Therefore, DynEB enhances performance-per-watt and energy efficiency for the shared-friendly applications by 22% and 49%, respectively.
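
A quick back-of-the-envelope check on how those percentages hang together, assuming (as the paper states for the shared-friendly set) that total power stays roughly flat, so energy just scales with runtime:

```python
# Sanity check of the quoted figures. With constant power, perf-per-watt
# tracks performance, and energy scales with runtime (i.e. 1/performance).

def derived_metrics(perf_gain: float) -> tuple[float, float]:
    """Return (energy saved, perf-per-energy gain) for a given performance
    gain at constant power."""
    speedup = 1 + perf_gain
    energy = 1 / speedup              # same power, shorter runtime
    return 1 - energy, speedup / energy - 1

for label, gain in (("all applications", 0.09), ("shared-friendly", 0.22)):
    saved, ppe = derived_metrics(gain)
    print(f"{label}: +{gain:.0%} perf -> {saved:.0%} energy saved, "
          f"+{ppe:.0%} perf-per-energy")   # roughly the 20% and 49% quoted
```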

Section 5.3
Effect of L2 Cache Size
We evaluate a boosted private L1 baseline with double the L2 cache size. We observe almost no performance improvement for the shared-friendly applications compared to the baseline. This is because performance is limited by the L2 reply bandwidth bottleneck [49,73,74]. Such a bottleneck is relieved with Shared++ and DynEB as the shared L1 organization utilizes the remote cores as an additional source of bandwidth.

We observe higher IPC improvement under increased core count because the overall L1 capacity increases with more cores, thus the available collective L1 bandwidth increases under shared L1 organization.
 
Soldato
Joined
30 Dec 2011
Posts
5,442
Location
Belfast
Gibbo said 1,000 flew off the shelves

No, he said they had 1,000 of the special bundle deals and they ran out, which could include pre-orders. To use an extreme example, they could have 100 physical GPUs and 900 pre-orders, which would mean all 1,000 bundle deals are gone.

I'm not saying that's exactly what happened, but let's not delude ourselves that the 3080 launch was an unmitigated success. It was a shambles, to be frank.
 
Soldato
Joined
12 Mar 2006
Posts
22,990
Location
N.E England
I'm going to hold off for AMD. Another month or so, which is frustrating, but I will wait after the NV farce. AMD really need to get these chips baking and have oodles of stock. They could have a win here. Play it right, AMD.
 
Soldato
Joined
30 Jun 2019
Posts
7,875
I don't think we will hear much from AMD until the RTX 3070 and 3060 are launched. They are stalling / waiting at the moment, I think.

They say they have their own schedule, but I don't think that reflects what we are seeing; they are mostly reacting to Nvidia (and preparing for the console launches too).
 
Last edited:
Associate
Joined
21 Apr 2007
Posts
2,485
Yes, performance is important, but you have no idea how efficient an RDNA2 CU is. The number is just a number to us currently. 60 CUs might be enough or it might not be, but why does that number matter so much as long as the performance is there?
That's your assumption. I have an idea what the performance is; you can't tell me what I do and don't know, what sort of nonsense is that? Anyone can easily estimate the performance of RDNA2 in a best-case scenario and work back from there; there is enough data out there already for that. All we really lack are the die sizes and CU counts at this point.
 
Soldato
Joined
28 Aug 2006
Posts
3,003
I hope AMD's Big Navi can compete against the RTX 3080 at 4K @ 60fps. I don't really care much for ray tracing; nothing against those who do. I'd just like a card that can handle higher game settings at 4K 60fps.
So, I'm prepared to wait and see what the 6900 brings to the table. AMD have a better fab and node than Nvidia's RTX 3000 series.
 
Soldato
Joined
21 Jul 2005
Posts
20,018
Location
Officially least sunny location -Ronskistats
I hope AMD's Big Navi can compete against the RTX 3080 at 4K @ 60fps. I don't really care much for ray tracing; nothing against those who do. I'd just like a card that can handle higher game settings at 4K 60fps.
So, I'm prepared to wait and see what the 6900 brings to the table. AMD have a better fab and node than Nvidia's RTX 3000 series.

I think this is a given, tbh. If they don't do what you have mentioned, I will sack them off and get a PS5. Even if RDNA2 only half delivers on the promises, it's still going to be a better card than the 5700 XT by 25% minimum (which is 2080 Ti level). That won't happen, though, as they can't charge enough money for a dead horse.

Expect a 3080 contender and a 3070 option. They may 'announce' a halo card if they are confident something meaty can take the crown. The regular line-up is going to be released around the consoles.
 
Soldato
Joined
6 Jan 2013
Posts
21,843
Location
Rollergirl
I'm going to hold off for AMD. Another month or so, which is frustrating, but I will wait after the NV farce. AMD really need to get these chips baking and have oodles of stock. They could have a win here. Play it right, AMD.

After that **** show yesterday, I pre-ordered a PS5. I'm happy to see what AMD have to offer, but I certainly won't be spamming F5 or paying £200 over MSRP to pre-order like I was some sort of crackhead.

I moved from Intel to AMD because they brought a cracking product to market, at the right price and at the right time. If ever they had an opportunity to seize the day in the GPU space, it's now.

Incidentally, I'd like to avoid the price gougers, but I'm not sure we can buy direct from AMD like we can from NV, can we?
 
Soldato
Joined
16 Nov 2003
Posts
5,456
I don't think we will hear much from AMD until the RTX 3070 and 3060 are launched. They are stalling / waiting at the moment, I think.

They say they have their own schedule, but I don't think that reflects what we are seeing; they are mostly reacting to Nvidia (and preparing for the console launches too).

Well, they don't really need to do much at the moment, as Nvidia seem to be doing their best to sabotage themselves and their products...

As above, Nvidia have shown it's not a good idea to move your schedule forward, although I can only assume that was the reason for this botched launch. If this was the plan, then I have no words! Haha!
 
Soldato
Joined
8 Nov 2006
Posts
22,979
Location
London
AMD have no intention of reacting to Nvidia. It was clear before Ampere that AMD had no intention of launching before or around the same time as Nvidia, and they are still in no rush, just saying "we are on our own release schedule".

I don't know what to make of that; since the Ampere kitchen reveal, the 80 CU Navi seems to be no more.

The 2080 Ti is 35% faster than a 5700 XT at 1440p. I'm not going to use 4K for this, given that every 8GB card falls further behind the 2080 Ti there; they are not 4K cards.
But at 4K, which I'll use for the 3080 comparison, the 3080 is 32% faster than the 2080 Ti.

2080 Ti vs 5700 XT = 135%
3080 vs 2080 Ti = 132% (@ 4K)

Not-so-Big Navi is rumoured to have 60 CUs; that's 50% more than the 5700 XT, so after scaling call it +45%.

5700 XT +45% = 145%

145 / 135 = 1.074, so normalised: 60 CU Not-so-Big Navi vs 2080 Ti = 107%

3080 vs 60 CU Not-so-Big Navi: 132 / 107 = 1.23 (+23%)

RDNA1 to RDNA2 at +10% IPC takes that 107% to ~117%, i.e. a 60 CU RDNA2 @ 1900MHz. 3080 vs 60 CU RDNA2 @ 1900MHz: 132 / 117 = 1.128 (+13%)

The PS5 clocks to 2.23GHz; let's assume that's the limit of RDNA2: 2230 / 1900 = 1.173 (+17%)

Putting that together, this hypothetical 6800 XT ends up roughly 34% faster than a 2080 Ti, about 2% faster than the 3080.
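
Chaining those steps together in a quick script (every input is a rumour or assumption from above, not a measured number):

```python
# Napkin maths from the steps above; relative performance is indexed so
# that 5700 XT = 1.00. All inputs are assumptions/rumours.

TI_VS_XT      = 1.35        # 2080 Ti vs 5700 XT at 1440p (my figure)
GPU3080_VS_TI = 1.32        # 3080 vs 2080 Ti at 4K
CU_SCALING    = 1.45        # 60 CUs = +50% over 40, scaled down to +45%
IPC_GAIN      = 1.10        # assumed RDNA1 -> RDNA2 IPC uplift
CLOCK_GAIN    = 2230 / 1900 # PS5-style clocks over a 1900 MHz baseline

navi60_vs_ti = CU_SCALING / TI_VS_XT             # ~1.07
navi60_rdna2_vs_ti = navi60_vs_ti * IPC_GAIN     # ~1.18 (rounded to 117% above)
deficit_vs_3080 = GPU3080_VS_TI / navi60_rdna2_vs_ti

print(f"60 CU RDNA2 @ 1900MHz vs 2080 Ti: {navi60_rdna2_vs_ti:.2f}")
print(f"3080 lead over it:                {deficit_vs_3080:.2f}")
print(f"Clock headroom available:         {CLOCK_GAIN:.2f}")
# If the ~17% of clock headroom is realised, it more than covers the ~12-13%
# deficit, which is where the "roughly level with or slightly ahead of the
# 3080" conclusion comes from.
```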

Moore's Law thinks AMD believe they can compete with the 3080 without the 80 CU Big Navi, and I think he's right. With the enhancements of the improved 7nm node and improvements to the RDNA architecture, AMD could make a still-small ~350mm² GPU at under 2.3GHz and 250 watts with 16GB of GDDR6, at least trade blows with the 3080 at £150 less, and still get good margins.

I think it's a shame if AMD don't slam an 80 CU RDNA2 GPU in Nvidia's face anyway, but I think AMD were expecting more from Ampere; they are not impressed.

https://www.techspot.com/review/2099-geforce-rtx-3080/

The 2080 Ti is ~45% better than the 5700 XT at 1440p. Not 35%.

https://www.3dmark.com/compare/spy/13929556/spy/13515205

That gap increases to 55% at 4K.

https://www.3dmark.com/compare/spy/13923063/spy/13908297

Even TPU have the same results as me across their game suite, although their results aren't with both cards overclocked like mine.

https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/28.html

All this napkin maths is built on an incorrect starting point.
 
Last edited:
Soldato
Joined
12 Mar 2006
Posts
22,990
Location
N.E England
Keeping a close eye on this channel

I love the look of the 6900 XT too; it looks beefy. It's been so long since I had an AMD GPU. My panel is G-Sync, but if it comes in cheap enough with loads of stock it could be worth it.

 