AMD DLSS Equivalent in Spring

Associate
Joined
8 Oct 2020
Posts
2,324
This thread is about AMD's competitor to DLSS. It doesn't matter who developed the underlying technology.

You should probably read the thread again. It’s a competitor to DLSS, built by Microsoft and Nvidia, that AMD cards can use, but they didn’t build it.

Definitely won’t work as well on AMD cards as it will on Nvidia until they package the necessary hardware.
 
Caporegime
Joined
8 Jul 2003
Posts
30,062
Location
In a house
You should probably read the thread again. It’s a competitor to DLSS, built by Microsoft and Nvidia, that AMD cards can use, but they didn’t build it.

Definitely won’t work as well on AMD cards as it will on Nvidia until they package the necessary hardware.

AMD's hardware for it will be those ML chiplets, which are probably coming on their next cards.
 
Associate
Joined
20 Nov 2020
Posts
1,120
You should probably read the thread again. It’s a competitor to DLSS, built by Microsoft and Nvidia, that AMD cards can use, but they didn’t build it.

Definitely won’t work as well on AMD cards as it will on Nvidia until they package the necessary hardware.
It's not about it not working well. Again, the dedicated hardware isn't doing miracles; it's some added compute units that only do the type of maths used for upscaling and denoising.
AMD will use some of the cores they already have, and by doing this they will lose the performance advantage they have at 1080p or 1440p. We can expect a 6800's performance to be equal to a 3070's at 4K upscaled.
You could say that some of their rendering cores will turn into "dedicated hardware" when upscaling is needed. The same will happen on the Xbox. It was a good decision, since you can only fit a certain number of transistors inside the chip.
 
Soldato
Joined
28 May 2007
Posts
10,066
You should probably read the thread again. It’s a competitor to DLSS, built by Microsoft and Nvidia, that AMD cards can use, but they didn’t build it.

Definitely won’t work as well on AMD cards as it will on Nvidia until they package the necessary hardware.

Did you actually read the OP and the article linked? The thread is about AMD FidelityFX Super Resolution, which leverages DirectML. The article states it should run on Nvidia cards as well, but that it should run faster on 6000-series cards due to the Infinity Cache.
 
Associate
Joined
20 Nov 2020
Posts
1,120
Did you actually read the OP and the article linked? The thread is about AMD FidelityFX Super Resolution, which leverages DirectML. The article states it should run on Nvidia cards as well, but that it should run faster on 6000-series cards due to the Infinity Cache.
That is speculation. I can almost bet that a game will get more fps running DirectML upscaling on a 3080 than it will on a 6800 XT. At least 5 to 10% more fps.
 
Caporegime
Joined
4 Jun 2009
Posts
31,035
Everything about real-time graphics is about clever tricks to get the most out of whatever hardware is available. As long as it's not humanly detectable, it's a legitimate performance-enhancing technique, which leaves more resources available for faster framerates and improved graphics.

Otherwise you might as well complain about techniques like polygon culling. Sure, there's no need to draw the polygons you can't see, but isn't that a compromise that goes against the purest computing and gaming experience? In fact, not wasting computing cycles on things you don't need, or that a person can't detect, is exactly about freeing up resources to make the gaming experience better, in the form of higher performance or improved graphical fidelity where it matters most.

DLSS-type upscaling is one of a long line of compromises (an optional one at that), but one that is worth making for the overall benefit of the experience, and the really clever compromises are invisible to the player. If you didn't have technologies like this, you'd be forced to perform other compromises such as lowering resolution, graphical fidelity or accepting lower framerates in order to get a playable performance, and those are much more detectable compromises.

High quality upscaling is just another optional tool in the game player's arsenal to get the highest experience out of the hardware available. It's a pretty good tool that has a big impact, and has progressed to being undetectable by people, so it's no wonder it's being pushed as a good thing. In lieu of several magnitudes more processing power in your GPU (which developers will always seek to use up anyway), it's a good optimising technique that shortcuts us to getting better results than the hardware can otherwise provide.
+1

It will allow for better graphics on the whole, but only if developers continue to optimise their games and don't see this as a way to spend less time optimising PC versions.
 
Soldato
Joined
28 May 2007
Posts
10,066
That is speculation. I can almost bet that a game will get more fps running DirectML upscaling on a 3080 than it will on a 6800 XT. At least 5 to 10% more fps.

You might be wrong or you might be right; I will wait until it's released, as I have no idea myself, so I won't speculate. I was just stating what the article said and assuming they know how the tech will work. It is software developed by AMD, so it's hard to believe it won't work well with their hardware. It does need the DirectML API, and I agree Nvidia hardware should be well suited on that front.
 
Associate
Joined
20 Nov 2020
Posts
1,120
You might be wrong or you might be right; I will wait until it's released, as I have no idea myself, so I won't speculate. I was just stating what the article said and assuming they know how the tech will work. It is software developed by AMD, so it's hard to believe it won't work well with their hardware. It does need the DirectML API, and I agree Nvidia hardware should be well suited on that front.
I said the 3080 will do it better because a card like the 6800 XT will run the algorithms on some of its compute units. So let's say it is using 80 cores now and is, say, 6% ahead of the 3080 at 1080p. Then let's say it uses 8 cores for upscaling, so the game will only be rendered by 72 cores instead of 80, and there you will see some performance drop.
So when we think about upscaling and how good Radeon will be, we should look at 1080p (and 1440p) performance vs the 3080, and then understand that they will lose some of that performance by using fewer compute units for rendering when the game needs upscaling. So if they were 6% ahead with 80 cores, they may be 6% behind with 72 cores at 4K upscaled.
The 3080 does not have the same problem.

Edit: OK, it's not 80 CUs but 72, and 64 when upscaling, but this is my bet on the performance.
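
For what it's worth, here's the back-of-the-envelope maths behind that bet, as a rough Python sketch. It assumes fps scales linearly with the CUs left for rendering, which real games won't follow exactly, and every number in it is a guess from this thread, not a measurement:

```python
# Back-of-the-envelope sketch: what a hypothetical 6% lead turns into
# if some compute units (CUs) are diverted to upscaling.
# Assumption (a guess, not a measurement): fps scales linearly with
# the number of CUs still doing rendering.

total_cus = 72         # 6800 XT CU count
rendering_cus = 64     # guess: CUs left after reserving some for upscaling
lead_at_native = 1.06  # guess: 6% ahead of the 3080 at 1080p/1440p

relative_perf = lead_at_native * rendering_cus / total_cus
print(f"Relative to the 3080 at 4K upscaled: {relative_perf:.3f}")
# -> 0.942, i.e. roughly 6% behind instead of 6% ahead
```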
 
Associate
Joined
20 Nov 2020
Posts
1,120
Here is an RDNA 2 CU described:
[Image: RDNA 2 CU block diagram, mixed-precision capabilities]

Can anyone see the magic word 'tensor' in there? What does that mean? It means that, when needed, the CU will run the tensor maths exactly as the tensor cores do on Nvidia. So there you have your dedicated hardware.
 
Soldato
Joined
4 Feb 2006
Posts
3,203
Here is an RDNA 2 CU described:

[Image: RDNA 2 CU block diagram, mixed-precision capabilities]

Can anyone see the magic word 'tensor' in there? What does that mean? It means that, when needed, the CU will run the tensor maths exactly as the tensor cores do on Nvidia. So there you have your dedicated hardware.


Nvidia has brainwashed people into thinking machine learning is only possible with 'tensor cores', when the reality is that a tensor is a mathematical object dealing with vectors and scalars, and tensor operations can be calculated on any processing unit. Sure, the tensor cores may be optimised to handle the calculations efficiently, but the way AMD's compute units work, it is entirely possible to do the same thing without dedicated 'tensor' cores.
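
To make that concrete, here's a minimal sketch (plain Python, not how DirectML or any driver actually implements it): the heart of any "tensor" operation is a loop of multiply-accumulates, which any processing unit can execute. Dedicated tensor cores simply execute many of them per clock.

```python
# Minimal sketch: the multiply-accumulate loop at the heart of a "tensor"
# operation (here, a matrix multiply), runnable on any processor.
# Tensor cores accelerate exactly this; they are not required for it.

def matmul(a, b):
    """Naive matrix multiply: c[i][j] = sum over k of a[i][k] * b[k][j]."""
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one multiply-accumulate per step
            c[i][j] = acc
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```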
 
Associate
Joined
12 Jan 2021
Posts
1,296
DirectX 12 ray tracing is software developed by Microsoft to work on their Xbox and Windows 10 platforms. AMD provides the CPU/GPU for the Xbox, so it would be logical for Microsoft to work with AMD to develop said hardware. RDNA2 does contain ray tracing hardware to increase ray tracing performance. AMD has even done tests to see how much impact this hardware has over a software-based solution.

https://youtu.be/3G25-2nu1go?t=160
 
Associate
Joined
8 Oct 2020
Posts
2,324
AMD's hardware for it will be those ML chiplets, which are probably coming on their next cards.

Are they making their own ML chips? It would make sense if they invest in that area, as it's currently NVIDIA-dominated.

It's not about it not working well. Again, the dedicated hardware isn't doing miracles; it's some added compute units that only do the type of maths used for upscaling and denoising.
AMD will use some of the cores they already have, and by doing this they will lose the performance advantage they have at 1080p or 1440p. We can expect a 6800's performance to be equal to a 3070's at 4K upscaled.
You could say that some of their rendering cores will turn into "dedicated hardware" when upscaling is needed. The same will happen on the Xbox. It was a good decision, since you can only fit a certain number of transistors inside the chip.

Yep, that's why I said "as well as": they're going to have to juggle cores based on what they feel makes sense. It might be an interesting challenge, but maybe they'll get some dedicated hardware with RDNA3. At least older cards will be able to benefit from this, although it's probably going to be a year before we see it become mainstream. The more games that support it the better; it will help make these rushed-to-market games playable and give consoles extra longevity. Reviews based solely on FPS will start meaning very little; I wonder whether they'll bother comparing image quality.

Did you actually read the OP and the article linked? The thread is about AMD FidelityFX Super Resolution, which leverages DirectML. The article states it should run on Nvidia cards as well, but that it should run faster on 6000-series cards due to the Infinity Cache.

AMD aren't really doing much other than slapping a shiny name on top of it; they'll update their drivers to tell it how to manage the resources when it's enabled, similar to the SAM/RBAR situation. It's annoying, but at least it's not another proprietary solution.
 
Associate
Joined
12 Jan 2021
Posts
1,296
Are they making their own ML chips? It would make sense if they invest in that area, as it's currently NVIDIA-dominated.


AMD aren't really doing much other than slapping a shiny name on top of it; they'll update their drivers to tell it how to manage the resources when it's enabled, similar to the SAM/RBAR situation. It's annoying, but at least it's not another proprietary solution.

I understand that DirectX ray tracing is proprietary to Microsoft. It will work on AMD RDNA2 GPUs in the Xbox and Windows 10, but not on the AMD CPU/GPU in the PlayStation 5. It will be interesting to see what Sony comes up with for the PS5.
 
Soldato
Joined
18 Oct 2002
Posts
7,027
Location
Melksham
Is anyone else a bit 'concerned' about what this means for gaming?

There are two aspects that we've seen with DLSS...
Firstly, it's quite clearly not 'better than native'; it's not even 'as good as' native. At 4K, DLSS 'Quality' does come impressively close to native, but it's close, not as good as, which you'd kinda expect from an upscaled 1440p image (iirc). At settings below Quality, and resolutions lower than 4K, it just gets worse.
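
(For reference, here's where that 1440p figure comes from, as a quick sketch using the commonly reported DLSS 2.0 per-axis scale factors; the exact factors can vary by title.)

```python
# Quick sketch: internal render resolutions behind the DLSS 2.0 modes,
# using the commonly reported per-axis scale factors (these can vary by game).

SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_res(out_w, out_h, mode):
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    print(mode, render_res(3840, 2160, mode))
# Quality     -> (2560, 1440): the upscaled-1440p case mentioned above
# Balanced    -> (2227, 1253)
# Performance -> (1920, 1080)
```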

Secondly, and more of an issue, is the usage of it. By that I mean, if DLSS was a crutch so that, say, a 2060 could get reasonable framerates in a new game at 1440p, then fine, all good. But instead we're seeing games where DLSS is 'needed' to get decent framerates at 1440p with a 3080, and that's before turning on RT... let alone 4K or anything...

Now I suspect the latter is due to Nvidia pushing the developers to make that happen, because then you get everyone comparing pure numbers of 'better than native' DLSS against AMD's native, which looks good for Nvidia and brings calls of 'when will AMD get DLSS?'. So hopefully, if both parties have upscaling tech, that might lessen, and people/reviewers might focus on actual image quality/graphics/game details rather than these flawed comparisons... Maybe... Ah, who am I kidding...
 
Associate
Joined
20 Nov 2020
Posts
1,120
Is anyone else a bit 'concerned' about what this means for gaming?

There are two aspects that we've seen with DLSS...
Firstly, it's quite clearly not 'better than native'; it's not even 'as good as' native. At 4K, DLSS 'Quality' does come impressively close to native, but it's close, not as good as, which you'd kinda expect from an upscaled 1440p image (iirc). At settings below Quality, and resolutions lower than 4K, it just gets worse.

Secondly, and more of an issue, is the usage of it. By that I mean, if DLSS was a crutch so that, say, a 2060 could get reasonable framerates in a new game at 1440p, then fine, all good. But instead we're seeing games where DLSS is 'needed' to get decent framerates at 1440p with a 3080, and that's before turning on RT... let alone 4K or anything...

Now I suspect the latter is due to Nvidia pushing the developers to make that happen, because then you get everyone comparing pure numbers of 'better than native' DLSS against AMD's native, which looks good for Nvidia and brings calls of 'when will AMD get DLSS?'. So hopefully, if both parties have upscaling tech, that might lessen, and people/reviewers might focus on actual image quality/graphics/game details rather than these flawed comparisons... Maybe... Ah, who am I kidding...
Yeah, we should be concerned about native-resolution performance. Such a technology can be good for consoles or older cards, but when you need it on your very expensive new card, it is not a good thing.
 
Soldato
Joined
18 Feb 2015
Posts
6,484
Sounds good to me! Excited to see it, because right now I'm a bit deflated with AMD. Hopefully there will be an effort to backport it too; I need them to do this for Cyberpunk, and turn on RT as well while we're at it. Get involved, AMD!

Another really exciting aspect of this, for me, is the potential for handheld devices which will have Rembrandt APUs (RDNA 2 GPU). It's going to be so great for battery life & visuals!
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
Well, look at the state of DLSS back then.

[Image: DLSS image quality comparison screenshot]

Err?! Very ugly image quality reduction on the far right.

Everything about real-time graphics is about clever tricks to get the most out of whatever hardware is available. As long as it's not humanly detectable, it's a legitimate performance-enhancing technique, which leaves more resources available for faster framerates and improved graphics.

Otherwise you might as well complain about techniques like polygon culling. Sure, there's no need to draw the polygons you can't see, but isn't that a compromise that goes against the purest computing and gaming experience? In fact, not wasting computing cycles on things you don't need, or that a person can't detect, is exactly about freeing up resources to make the gaming experience better, in the form of higher performance or improved graphical fidelity where it matters most.

DLSS-type upscaling is one of a long line of compromises (an optional one at that), but one that is worth making for the overall benefit of the experience, and the really clever compromises are invisible to the player. If you didn't have technologies like this, you'd be forced to perform other compromises such as lowering resolution, graphical fidelity or accepting lower framerates in order to get a playable performance, and those are much more detectable compromises.

High quality upscaling is just another optional tool in the game player's arsenal to get the highest experience out of the hardware available. It's a pretty good tool that has a big impact, and has progressed to being undetectable by people, so it's no wonder it's being pushed as a good thing. In lieu of several magnitudes more processing power in your GPU (which developers will always seek to use up anyway), it's a good optimising technique that shortcuts us to getting better results than the hardware can otherwise provide.

That's exactly what DLSS does - AMD does not need any upscaling unless it magically delivers the native resolution's image quality :D
 