
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Permabanned
Joined
22 Jul 2020
Posts
2,898
Well, FreeSync is basically what laptops used for years, just not under that name. I have used all of them and prefer the hardware module; this Odyssey G7 is a buggy POS (G-Sync Compatible).
 
Soldato
Joined
21 Jul 2005
Posts
20,021
Location
Officially least sunny location -Ronskistats
IMO it's an overcomplicated, specialised and expensive way of doing something, and it's typical Nvidia. It was the same with PhysX: Bullet and Havok did the same thing only much more efficiently, and didn't require specialised instructions. Same with GameWorks. RTX is a sledgehammer applied to something that's been done more efficiently for more than a decade, and again, to run it the way Nvidia have it you need specialised acceleration hardware. G-Sync... you don't need a dedicated £150 chunk of complex hardware to do that.

It's almost as if Nvidia take what is almost always an existing technology, bastardise it into something far more complex and resource-expensive than it needs to be, and then hope this frightens off any competitor.

And then they wonder why more often than not someone like AMD comes along and makes the whole thing more efficient.

This is the point we are highlighting, and only Nvidia seem to get championed for it. If other companies tried it they would soon find their sales disappearing, as most people out there like 'free' stuff, not more proprietary locked-down nonsense.
 
Soldato
Joined
14 Aug 2009
Posts
2,764
AMD already have something similar.

Native 4K on the left, AMD Image Sharpening in the middle, Nvidia's original blurry mess of DLSS on the right.

Just in case you're blind... the best image quality is the AMD one in the middle.

Yes, I'm going to bring this up every time someone makes a daft "but DLSS" blurt.

[Image: LvqGzmE.png, the three-way comparison screenshot]

That's DLSS 1.0, and if I didn't know better I'd say you're purposely misleading. :D
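
For anyone wondering what "image sharpening" actually is in that comparison: it's a cheap post-process filter. Here's a minimal unsharp-mask sketch in Python/NumPy as an illustrative stand-in for the idea, not AMD's actual Contrast Adaptive Sharpening shader (which, as I understand it, also adapts the strength per pixel to local contrast):

```python
import numpy as np

def sharpen(img, amount=0.5):
    """Unsharp-mask sharpening: boost each pixel by its difference
    from a local 3x3 box blur. `img` is a float array in [0, 1],
    shape (H, W) or (H, W, C). Illustrative only, not AMD's CAS."""
    pad = [(1, 1), (1, 1)] + [(0, 0)] * (img.ndim - 2)
    padded = np.pad(img, pad, mode="edge")
    blur = np.zeros_like(img, dtype=float)
    for dy in range(3):          # average the 3x3 neighbourhood
        for dx in range(3):
            blur += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= 9.0
    # Add back the high-frequency detail, scaled by `amount`.
    return np.clip(img + amount * (img - blur), 0.0, 1.0)
```

The real thing runs as a single GPU pass, which is why it costs next to nothing even at 4K.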

I have to wonder why it is that other developers devalue DLSS, though?

For the same reasons devs did not bother with a lot of other tech (including Mantle, async compute, implicit primitive shaders, TrueAudio and other fancy stuff): (1) they're lazy, (2) it requires extra resources, or (3) both.

IMO it's an overcomplicated, specialised and expensive way of doing something, and it's typical Nvidia. It was the same with PhysX: Bullet and Havok did the same thing only much more efficiently

As long as it offers performance for the end users, it doesn't really matter how they're doing it. DLSS 2.0 offers better performance and image quality than AMD's sharpening.

Bullet and Havok didn't do the advanced stuff (fluid simulation and such), if I'm not mistaken; they only do more in specialised apps, not games. FEMFX is, I think, the latest of these "ethereal" techs. AMD bother little to help developers implement their own tech, and it shows.
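
To put rough numbers on the performance side, here's a back-of-envelope Python sketch of why rendering below native resolution and upscaling is so much cheaper. The 1440p internal resolution for a 4K output is the commonly cited DLSS 2.0 "Quality" mode figure; the realised speedup per game is lower than the naive pixel ratio, since geometry work and the upscale pass itself aren't free.

```python
# Shading cost scales roughly with rendered pixel count.
target = 3840 * 2160    # 4K output
internal = 2560 * 1440  # "Quality" mode internal render

ratio = internal / target
print(f"Pixels shaded vs native 4K: {ratio:.0%}")  # ~44%

# A hypothetical GPU managing 40 fps at native 4K:
print(f"Naive upper bound after upscaling: {40 / ratio:.0f} fps")  # ~90 fps
```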
 
Caporegime
Joined
17 Mar 2012
Posts
47,579
Location
ARC-L1, Stanton System
Bullet and Havok didn't do the advanced stuff (fluid simulation and such), if I'm not mistaken; they only do more in specialised apps, not games. FEMFX is, I think, the latest of these "ethereal" techs. AMD bother little to help developers implement their own tech, and it shows.

WHAT????

CryEngine... Bullet.

Blender... Bullet.
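
And Bullet is an ordinary open-source CPU library anyone can call, no specialised hardware in sight. Here's a minimal rigid-body sketch using the pybullet bindings (pip install pybullet), just to show the point:

```python
import pybullet as p

p.connect(p.DIRECT)          # headless physics, no GUI needed
p.setGravity(0, 0, -9.81)

# An infinite ground plane plus a 1 kg sphere dropped from 10 m.
plane = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)
ball_shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.5)
ball = p.createMultiBody(baseMass=1.0,
                         baseCollisionShapeIndex=ball_shape,
                         basePosition=[0, 0, 10])

for _ in range(240):         # one second at the default 240 Hz step
    p.stepSimulation()

pos, _ = p.getBasePositionAndOrientation(ball)
print(f"Ball height after 1 s: {pos[2]:.2f} m")  # ~5 m, still falling
p.disconnect()
```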


 
Soldato
Joined
31 Oct 2002
Posts
9,860
Same... I just get whatever gives me the most FPS per £.

Yeah, same here. If more people did this we'd have a much more competitive GPU market, and AMD would have had a much bigger R&D budget years ago. We'd all be winners, with fairer prices and more performance.

That said, just look at this thread. It's the Big Navi thread, and still we have people who openly admit to being financially dependent on Nvidia's success (heavily invested in the stock, or working for an Nvidia partner) making it their life's mission to downplay AMD and talk up Nvidia.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
I am not emotionally attached to either company. I will pick the card which offers the best performance for the price. DLSS 2.0 looks fantastic to me, however.
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic - we'll agree to disagree on that one - but would DLSS even be needed if the die space taken up by the tensor cores were actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware yet inflate their prices further?
 
Associate
Joined
12 Jul 2020
Posts
288
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic, but would DLSS even be needed if the die space taken up by the tensor cores were actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware and inflate their prices further?
To be honest I wasn't aware of this. I was merely under the impression that this was an additional tech they developed. I don't follow GPU news too much!
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
To be honest I wasn't aware of this. I was merely under the impression that this was an additional tech they developed. I don't follow GPU news too much!
Well, that's a big ol' can of worms you'll need to open before you come to make your purchase :p

Edit: I retract the Kool-Aid comment then, if you weren't aware of the details. A marketing sucker instead? :p (I jest, I jest)
 
Associate
Joined
14 Aug 2017
Posts
1,195
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic - we'll agree to disagree on that one - but would DLSS even be needed if the die space taken up by the tensor cores were actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware yet inflate their prices further?


If the image quality and frame rate are there, and they beat the competition, does it really matter to you how that's achieved?
Why should it?
Is all optimisation "cheating"?
 