Have Graphics Cards Reached The Technological Brick Wall?

Associate
Joined
17 Sep 2018
Posts
1,431
FTFY.

I think graphics are now at a point where there's very little more developers and publishers, or the hardware manufacturers, can do. Games have convincingly lifelike graphics, and cards that can push 4K @ 60FPS in most titles are out there now. This causes issues for Nvidia and AMD, who rely on PC gamers upgrading regularly for more FPS and higher detail. RTX is a gamble on NV's part to try and get people to upgrade from the 10x0 series when most of those cards still work, and work well, at high resolutions and FPS.

Isn't this in large part because most games are designed to run on the latest PlayStation/Xbox, which at the moment equates to roughly an RX 570? Once the PS5 hits, games will be made to run on that hardware, so perhaps geared to 1080/Vega 64 and Ryzen 3600 level.

It's not like in the past, where you had teams create something so advanced that PCs would struggle to run it, like Crysis. So you can partly blame the industry.
 
Soldato
Joined
5 Sep 2009
Posts
2,584
Location
God's own country
I remember the days of 1999, when there was all this excitement about bio/nano computers, real living organisms being able to run and process calculations. It all seemed quite bizarre back then, but it was a prediction of what we might have been using now...

About that time, I built my first PC, an Athlon Slot A 800MHz; it was like a bloody video cassette in size and build quality. And a Rage Fury Maxx, which I bragged about all the time because bigger was better... obviously. Long time ago, and I think a Thunderbird 1000 after that, followed by a San Diego 3700+, followed by a C2D or two, followed by a Q9650, then Sandy Bridge.
The difference was, they were quantum leaps every time in those days. It seems now every new generation brings very little, not much to get excited about. Lots of hype but little reason to upgrade.
 
Soldato
Joined
18 Feb 2015
Posts
6,484
We're in a bit of a lull but I think more than anything people take things for granted. There's been all sorts of advancements in all areas and when put together they add up a great deal. Yes, on an individual basis they are more subtle than going from 2D to 3D but that doesn't make them less significant.

Just compare Tomb Raider at 4K to SotTR at 4K and you'll see. People forget until a side by side shows the differences in stark contrast. That a current card can yield such results with how demanding the games are is just great.

What's even more impressive is that it can even handle 8K competently, and we're a year away from that even being a thing (HDMI 2.1, native 8K displays, etc.), let alone being a thing for anyone other than the 0.1%.

Overall I would say software is still further behind than hardware right now, simply because people's setups aren't even current, so developers have to focus on a large enough playerbase for development to be sustainable at all. In the end you don't even see games built for today's hardware until a few years after it arrives. In fact, there are very few 4K games actually done properly at that resolution, let alone 8K.

My favourite recent example is The Witcher 3, which, as great as it is, still required community work to really put it at that level (IDD mod, 4K textures, etc.). And that's a lucky example, because it at least allowed others to do that to it. Otherwise it's hard to even come up with other examples, mostly because 99.9% of games gimp LOD to smithereens on account of consoles.

So while, to give another example close to my heart, Odyssey looks great in certain areas and angles, as soon as you look from one shore to another you're basically staring at an impressionist painting, because unless something is right in your face the LOD is awful, and sometimes it has problems even up close. But that's just the usual; almost no games escape this fate (and if they do, it's only because they're moddable/ini-tweakable).

So, I wouldn't worry. The Golden Age of 4K gaming isn't even here yet, and already we have GPUs that can handle it with ease (talking about a 60hz scenario, not some even more ultra-enthusiast setups). Which means that by the time we have proper 4K scaling for all these titles, GPUs will handle it with even more ease; and since PS5/X2 aren't coming before mid-late next year, there's even more time for all those GPU developments to happen.

FHD & QHD have basically stopped being a challenge since the Hawaii/Kepler days, and today's GPUs handle any title with ease at those resolutions, even the affordable ones (RX 590 etc.).

I'm really not worried about GPU development (nor even pricing).
 
Soldato
Joined
25 Sep 2009
Posts
9,627
Location
Billericay, UK
I'm really not worried about GPU development (nor even pricing).
The rate of development has slowed right down though, due to a lack of competition and the ever-increasing length of time we now have to wait between die shrinks. In answer to the OP's question, I wouldn't be concerned about development in terms of new ideas, but shortly we will hit the wall when it comes to just how small we can make the electrical gates that electrons pass through. I tend to think that in the late 2020s and 2030s we might go through a dark period where traditional silicon chips won't see the kind of significant performance gains we saw in the late 90s and early 2000s.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
I tend to think that in the late 2020s and 2030s we might go through a dark period where traditional silicon chips won't see the kind of significant performance gains we saw in the late 90s and early 2000s.
But as developers start embracing multi-threaded design you wouldn't need to continually improve IPC and clock speeds; you could just throw more cores into the mix. Of course, the downside is increasingly large CPU packages, TDP and power requirements, so you resolve one issue by introducing a boatload of others.
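
Rough sketch of what I mean (toy code, nothing from a real engine): the same fixed workload just gets split across however many cores the machine reports, rather than waiting on clocks or IPC to improve.

```cpp
// Toy sketch: scale a fixed workload across available cores instead of
// relying on higher clocks/IPC. Purely illustrative, not production code.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;
    std::vector<float> data(n, 1.0f);

    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    // Each core gets an equal slice of the array to sum.
    std::size_t chunk = n / cores;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = (c + 1 == cores) ? n : begin + chunk;
        workers.emplace_back([&, c, begin, end] {
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& t : workers) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "cores: " << cores << ", sum: " << total << "\n";
}
```

On a machine with more cores the same code just spreads the work wider; the catch, as above, is the power and package size you pay for those extra cores, and it only helps for work that splits cleanly.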
 
Soldato
Joined
18 Feb 2015
Posts
6,484
The rate of development has slowed right down though, due to a lack of competition and the ever-increasing length of time we now have to wait between die shrinks.

Has it, though? It doesn't look so dire to me. If anything, there's still plenty of juice in current hardware which simply isn't being utilised (the V64's gains over the 290X, the 290X's gains over the 7970).

 
Associate
Joined
17 Sep 2018
Posts
1,431
I tend to think that in the late 2020s and 2030s we might go through a dark period where traditional silicon chips won't see the kind of significant performance gains we saw in the late 90s and early 2000s.

Let's say they get to a limit, like CPU clock speeds becoming stagnant. They'd then have to similarly improve multi-GPU software support. But the fact they've completely stepped away from multi-GPU shows they know there's plenty more they can get out of single cards.

And on that point, games designed around CrossFire should see two Vega 56s hold their own against, or beat, a 2080 Ti for a fraction of the price. So when single-card GPUs run out of juice, that could be the future. Or maybe chip stacking is the future. Who knows? Maybe there's a new innovation we have no idea about.
 
Soldato
Joined
27 Feb 2015
Posts
12,616
I would say no.

It's in the interest of tech companies to drip-feed improvements from generation to generation; it's how you maximise profits.

If you combine the added RT and DLSS with the Turing improvements, overall it wasn't too bad, but they completely messed up the pricing.

e.g. Apple probably could have made what's in the iPhone 5 back in the iPhone 2, but it would have been drip-fed.
 
Soldato
Joined
14 Aug 2009
Posts
2,752
FTFY.
Games have convincingly lifelike graphics, and cards that can push 4K @ 60FPS in most titles are out there now.

Depends what you mean by 4K@60fps. In my experience, if you want to never drop below 60fps with everything enabled, then even a 2080 Ti can't manage that in the games released so far (or it can, but only in simpler games), never mind the ones arriving in a year or so. If you want an average of 60 and don't mind turning some settings down or even off, then yes, it's doable with very expensive cards.

Isn't this in large part because most games are designed to run on the latest PlayStation/Xbox, which at the moment equates to roughly an RX 570? Once the PS5 hits, games will be made to run on that hardware, so perhaps geared to 1080/Vega 64 and Ryzen 3600 level.

It's not like in the past, where you had teams create something so advanced that PCs would struggle to run it, like Crysis. So you can partly blame the industry.

Exactly. Games are made to give optimal image quality on console hardware and similarly suited PCs. A lot of the settings eat into performance without much to show for it; they're basically there so the developers can argue "we've made a PC game".

We had a dedicated physics accelerator, and later on nVIDIA proved physics can be done on the GPU. You can even run the game on an AMD card and the physics on a secondary dedicated GPU. Moreover, AMD proved you can have AI accelerated on the GPU even back in the 4xxx series, waaaaay back.

How many games are built around low-level APIs (DX12, Vulkan), with complex AI and physics each running on their own dedicated GPU? None. And this goes into CPU and RAM territory as well.
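
To illustrate why physics is such a good fit for the GPU: the per-particle update is completely independent for every particle, so it maps one-thread-per-particle onto a compute shader (and, in principle, onto a second card via an explicit API). A toy CPU-side sketch of that kind of step, with made-up numbers, just to show the shape of it:

```cpp
// Toy per-particle integration step. Each particle is independent of the
// others, which is what makes this kind of work map cleanly onto GPU
// compute (one thread per particle). Plain CPU loop here for illustration.
#include <cstddef>
#include <iostream>
#include <vector>

struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

void step(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.81f;           // illustrative constant
    for (std::size_t i = 0; i < particles.size(); ++i) {
        Particle& p = particles[i];
        p.vy += gravity * dt;               // apply gravity
        p.px += p.vx * dt;                  // integrate position
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        if (p.py < 0.0f) {                  // crude ground bounce
            p.py = 0.0f;
            p.vy = -p.vy * 0.5f;
        }
    }
}

int main() {
    std::vector<Particle> particles(
        10000, Particle{0.0f, 10.0f, 0.0f, 1.0f, 0.0f, 0.0f});
    for (int frame = 0; frame < 3; ++frame) step(particles, 1.0f / 60.0f);
    std::cout << "first particle y after 3 steps: " << particles[0].py << "\n";
}
```

A GPU version would run that loop body as one thread per particle; the point is that nothing in it depends on any other particle.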

The hardware is way ahead of the software when it comes to games and, depending on the case, even software in general.
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
Not even close to a brick wall - but the problem is the upheaval, in terms of architecture and software support, needed for the next leap. In the long run ray tracing and similar techniques will overtake traditional rasterisation and enable a level of parallelisation through the whole graphics pipeline that simply isn't possible with today's approaches.

As LePhuronn put it we've hit a period of stagnation for various reasons rather than a brick wall.

EDIT: We have also reached a point where current semiconductor nodes are about maxed out but the next generation of nodes isn't quite ready. Ultimately, with most architectures there is a limit to how much you can just iteratively add on without hitting diminishing returns, due to the type of data you are processing, so clock speed is still king as well.
For once I'm not sure I agree with you :p

Don't we need something like 2,000 x the power of the 2080 Ti to do real-time ray-tracing? That's something I recall having read.

We're approaching the limits of silicon already. Not only that but we're reaching breaking point in terms of investment capital needed to progress foundry tech. Fewer players and spiralling costs with each new jump.

So it's not hard to see how, whilst in absolute terms we might not have hit a hard limit, we have hit a plateau that may last years (with small increments only in that time).
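
As a back-of-envelope on where multipliers like that come from (the samples-per-pixel and bounce counts below are purely assumptions I've picked for illustration, and the ~10 gigarays/s is just the headline figure quoted for Turing):

```cpp
// Back-of-envelope ray budget for "proper" real-time path tracing.
// Samples-per-pixel and bounce counts are assumptions for illustration only.
#include <iostream>

int main() {
    const double pixels  = 3840.0 * 2160.0;  // 4K frame
    const double fps     = 60.0;
    const double spp     = 1000.0;           // assumed samples per pixel
    const double bounces = 4.0;              // assumed average path length

    double rays_per_second = pixels * fps * spp * bounces;
    const double turing_gigarays = 10e9;     // ~10 gigarays/s quoted for a 2080 Ti

    std::cout << "rays/s needed: " << rays_per_second
              << " (~" << rays_per_second / turing_gigarays
              << "x a 2080 Ti)\n";
}
```

With those assumptions it comes out at around a couple of hundred times a 2080 Ti; push the samples per pixel towards offline-render levels and you're into the thousands.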
 
Man of Honour
Joined
13 Oct 2006
Posts
91,045
For once I'm not sure I agree with you :p

Don't we need something like 2,000 x the power of the 2080 Ti to do real-time ray-tracing? That's something I recall having read.

We're approaching the limits of silicon already. Not only that but we're reaching breaking point in terms of investment capital needed to progress foundry tech. Fewer players and spiralling costs with each new jump.

So it's not hard to see how, whilst in absolute terms we might not have hit a hard limit, we have hit a plateau that may last years (with small increments only in that time).

For full-scene, full-screen ray tracing, sure, we need a lot more power - but as it takes off, enhancements and other techniques to do some of the processing faster will be found, and it's possible to use a hybrid approach which will still benefit hugely from the ability to parallelise a lot of the tracing and other maths - a lot of the "shortcuts" revolve around sparse voxel sets, etc., which use a similar type of processing.

Just one technique alone, for an alternative approach to the first pass, allows several-thousand-times speed-ups (I can't remember the exact numbers, but it was something like 5000ms for just 320x240 down to 5-6ms at 512x384), so if more advances like that are found we can definitely make some breakthroughs.
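
For anyone wondering why ray tracing parallelises so well: every pixel's primary ray is independent of every other, so the whole frame is one big data-parallel job. A minimal toy example below (one hard-coded sphere, no shading, ASCII output), just to show the shape of the per-pixel work that gets farmed out across GPU threads:

```cpp
// Minimal toy ray tracer: one sphere, one primary ray per pixel.
// Each pixel is independent of every other, which is why this kind of work
// spreads across thousands of GPU threads so well.
#include <iostream>

// Returns true if the ray (origin o, direction d) hits a sphere of radius 1
// centred at (0,0,-3); standard quadratic intersection test.
bool hit_sphere(float ox, float oy, float oz,
                float dx, float dy, float dz) {
    const float cx = 0.0f, cy = 0.0f, cz = -3.0f, r = 1.0f;
    float lx = ox - cx, ly = oy - cy, lz = oz - cz;
    float a = dx * dx + dy * dy + dz * dz;
    float b = 2.0f * (lx * dx + ly * dy + lz * dz);
    float c = lx * lx + ly * ly + lz * lz - r * r;
    return b * b - 4.0f * a * c >= 0.0f;
}

int main() {
    const int width = 64, height = 32;        // tiny "framebuffer"
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map the pixel onto a simple view plane at z = -1.
            float u = (x + 0.5f) / width * 2.0f - 1.0f;
            float v = 1.0f - (y + 0.5f) / height * 2.0f;
            bool hit = hit_sphere(0.0f, 0.0f, 0.0f, u, v, -1.0f);
            std::cout << (hit ? '#' : '.');   // crude ASCII output
        }
        std::cout << '\n';
    }
}
```

Scale that inner loop up to ~8 million pixels, multiple samples and several bounces each and you get the sort of ray budgets discussed above - but none of those samples depend on each other, which is exactly the property the hybrid and sparse-voxel shortcuts preserve.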
 