
nVidia GT300 - GeForce GTX 380 yields are sub-30%

Associate
Joined
28 Apr 2007
Posts
490
Firstly, the card had AA hardware which couldn't be used without DX10.1 (the original spec) being used in games, so it had to shift AA from hardware into software done in the programmable shaders. To say DX10 wasn't responsible for that is to simply not know what you're talking about.

While AA performance in DX10 titles was affected (although it is hard to judge, since we don't exactly have a huge number of DX10.1 games to compare against, which you can blame on nVidia's irritating TWIMTBP program), it was a pretty huge oversight by ATi, since pretty much all the games at the time were DX9, where the AA would have to be done in the shaders. This led to worse performance in last-gen games (by a long way), and let's face it, when a new generation of cards comes out for an upcoming DX version, last-gen performance is arguably just as (if not more) important than how it will perform.

To say it was unbalanced is again incorrect. It WOULDN'T have had more shaders without the ring bus in; it had the right amount of shaders and everything else it needed. It's still a VERY good card; in DX10.1 it beats the 8800GTX quite easily.

My main reason for stating that it was unbalanced was the 512-bit memory interface, which provided ~105GB/s and took up a huge amount of die area while giving virtually no performance increase over what could be achieved on a 256-bit bus (as shown by the 3870, which had a 256-bit bus with GDDR4 and 60-70GB/s of bandwidth while only being a couple of per cent slower).
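
Just to sanity-check those figures, here is a rough, purely illustrative Python sketch of the peak-bandwidth arithmetic. The memory clocks are the commonly reported reference values and are assumptions on my part, not anything from this thread:

Code:
# Rough sanity check of the memory-bandwidth figures quoted above.
# Assumed reference clocks (not taken from this thread):
#   HD 2900 XT: 512-bit bus, GDDR3 at ~1656 MT/s effective
#   HD 3870:    256-bit bus, GDDR4 at ~2250 MT/s effective

def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: float) -> float:
    """Peak bandwidth = bytes per transfer * effective transfer rate."""
    return (bus_width_bits / 8) * effective_mt_s * 1e6 / 1e9

print(f"HD 2900 XT: {bandwidth_gb_s(512, 1656):.1f} GB/s")  # ~106 GB/s
print(f"HD 3870:    {bandwidth_gb_s(256, 2250):.1f} GB/s")  # ~72 GB/s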

Also, the ring bus did add performance, but it didn't add enough relative to the die space it took up (it was mainly a marketing tool really). This was also one of the major reasons why, in the 38XX to 48XX transition (when the ring bus was removed), we were able to see the increase in shaders and TMUs that we saw, even though it was on the same manufacturing node and the die only increased from ~190mm² to ~256mm².
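
For a sense of scale on that transition, a quick back-of-the-envelope comparison of RV670 and RV770 (the die sizes are the usual approximate published figures and are treated as assumptions here):

Code:
# RV670 (HD 38XX) vs RV770 (HD 48XX), both on TSMC 55nm.
# Die sizes are approximate published figures, taken as assumptions.
rv670 = {"shaders": 320, "tmus": 16, "die_mm2": 192}
rv770 = {"shaders": 800, "tmus": 40, "die_mm2": 256}

for key in ("shaders", "tmus", "die_mm2"):
    print(f"{key}: {rv670[key]} -> {rv770[key]} ({rv770[key] / rv670[key]:.2f}x)")
# Shaders and TMUs grow 2.5x while the die grows only ~1.33x on the same node,
# which is the point about reclaiming the ring bus / 512-bit PHY area.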

It's still a VERY good card; in DX10.1 it beats the 8800GTX quite easily.

The HD2900 never had DX10.1, so I cannot see how you are even making this comparison. The HD2900 was only capable of DX10; DX10.1 support was introduced with the arrival of the HD3XXX series.
 
Soldato
Joined
25 Jun 2009
Posts
7,733
These problems are either a load of bull, or they could be waiting ON PURPOSE to see what the 5870 does performance-wise...

Because if I was Nvidia, it makes huge sense not to release anything until you know the performance of the 5870, especially the 5870 X2... just in case my card isn't powerful enough.

My guess is the GTX 300 is either totally ****** up, or they're waiting on purpose... and it's already up and running and just needs a bit of fine tuning.
 
Soldato
Joined
17 Aug 2003
Posts
20,158
Location
Woburn Sand Dunes
Firstly, the card had AA hardware which couldn't be used without DX10.1 (the original spec) being used in games, so it had to shift AA from hardware into software done in the programmable shaders. To say DX10 wasn't responsible for that is to simply not know what you're talking about. To say it was unbalanced is again incorrect.

So the R600 couldn't do hardware AA in anything bar DX10.1, and that's Microsoft's fault?

What was the actual reason, Drunkenmaster? What stopped the R600 core from doing hardware AA?

drunkenmaster said:
Hardly a bad card. As said, the DX10 hatchet job screwed the card pretty badly, added to TSMC screwing up 65nm (as they screwed the pooch on 55nm, and now 40nm, and probably previous processes before I was reading news about it). It's a fantastic design in everything except its size/price; the implementation, with both manufacturing and Nvidia/MS/DX10 against it, meant it would never be as good as it should have been. Considering what it was designed for and how it had to fight in the end, the fact it was so close to the 8800GTX in so many games, and beat it in some, is a testament to how good a design it was.

Not with AA it wasn't; in fact it was closer to the 8800GTS, and it consumed more power than a GTX as well.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,313
I'm assuming... though I've not actually researched it... that ATI intended with the 2900 that "AA" would be applied as part of the deferred shader pipeline using an edge detection filter... a rather odd approach. If that's the case, I can only assume they misunderstood where MS were going with DX10.1 and thought it could be used to overload older functions too.
 
Soldato
Joined
17 Aug 2003
Posts
20,158
Location
Woburn Sand Dunes
I'm assuming... though I've not actually researched it... that ATI intended with the 2900 that "AA" would be applied as part of the deferred shader pipeline using an edge detection filter... a rather odd approach. If that's the case, I can only assume they misunderstood where MS were going with DX10.1 and thought it could be used to overload older functions too.

Pretty much.

anandtech said:
CFAA and No Fixed Resolve Hardware

That's right, R600 doesn't have hardware dedicated to resolving MSAA in the render back end - the only MSAA related tasks handled in the render back end are compression and evaluation of the subpixels. All antialiasing resolve is performed on the shader hardware. Certainly, AMD would prefer we start by telling you about the neat custom resolve filters that can be implemented on their shader hardware, but we would rather speculate about this for a moment first.

AMD has stated that, moving forward, in addition to allowing programmable sample patterns, future DX versions may allow for custom resolve filters as well. This is cited as one of the reasons why R600 uses shader hardware to resolve AA samples. AMD has given us a couple different resolve filters to play with, which we'll talk about in a minute. But at a point where we're seeing the first DX10 hardware from graphics makers, and at a time where competitive performance is paramount, it doesn't seem like the decision we would have made.

Whatever the circumstances, R600 sends its pixels back up from the render back ends to the shader hardware to combine subpixel data into a final pixel color. In addition to the traditional "box" filter (which uses subpixels within the area of a single pixel), the new driver offers the ability to use subpixel data from neighboring pixels resolved with a tent filter (where the impact of the subpixels on final color is weighted by distance). AMD calls this CFAA for custom filter antialiasing.

http://www.anandtech.com/video/showdoc.aspx?i=2988&p=11
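
As a purely illustrative aside, here is a minimal sketch of the difference between a plain box resolve and a distance-weighted tent resolve of the kind described above. The sample positions, weights and tent radius are made up for illustration; they are not R600's actual sample patterns or filter kernels:

Code:
# Box resolve: average only the subsamples inside one pixel.
# Tent resolve: also gather neighbouring pixels' subsamples, weighting each one
# by a linear falloff with distance from the pixel centre.

def box_resolve(subsamples):
    """subsamples: list of (colour, (dx, dy)) pairs belonging to one pixel."""
    return sum(colour for colour, _ in subsamples) / len(subsamples)

def tent_resolve(samples, radius=1.5):
    """samples: (colour, (dx, dy)) pairs from this pixel and its neighbours."""
    total_w = total_c = 0.0
    for colour, (dx, dy) in samples:
        dist = (dx * dx + dy * dy) ** 0.5
        w = max(0.0, 1.0 - dist / radius)   # 'tent': weight falls to zero at the radius
        total_w += w
        total_c += w * colour
    return total_c / total_w if total_w else 0.0

# Toy greyscale example: four subsamples inside the pixel plus two borrowed
# from neighbouring pixels (offsets are hypothetical).
own = [(0.8, (0.25, 0.25)), (0.8, (-0.25, 0.25)), (0.2, (0.25, -0.25)), (0.2, (-0.25, -0.25))]
neighbours = [(0.9, (1.25, 0.0)), (0.1, (-1.25, 0.0))]
print(box_resolve(own))               # plain average of the pixel's own subsamples
print(tent_resolve(own + neighbours)) # nearby samples dominate, distant ones barely count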
 
Associate
Joined
16 Sep 2009
Posts
32
If this card ever drops to 500 quid then I'm getting it... sod DX11, it's not needed... you can barely tell the difference, plus I doubt many future games will be DX11... will it drop to 500 quid?... never :D:D:D:D


I think it should drop to 500 easily.
I'm waiting for this :)
 
Soldato
Joined
30 May 2009
Posts
4,620
Location
Maidenhead

Apart from that stupid limited edition thing, yes =p

But still, for a new release card, Nvidia wouldn't even try to sell a £500 card to a LOT of the people looking for a new graphics card.

That card is £1000 because it's a limited edition, and it's so bloody stupid. Asus are simply trying to make a lot of money out of some people who want a limited edition card.
 
Permabanned
Joined
29 Feb 2004
Posts
1,922
Location
fife, Scotland
While nVidia did get sections of the DX10 spec changed, to blame the poor performance of the R600 on that is a bit misguided. The R600 was just a badly balanced design, with things such as huge amounts of the die being taken up by the ring bus and 512-bit memory interface, leaving no room for more shaders, TMUs etc.

That alongside poor AA execution, the dire 80nm manufacturing process and teething problems with the new architecture (not to mention the very recent takeover of ATi by AMD at the time), was really what made the R600 a bit of a flop.

That's why I said partly, not completely. After driver updates the R600 wasn't really that bad either. Still below the 8800GTX, but not that bad.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,313
Think they are underestimating the TFLOPs value... my calculations put it at ~3.8 TFLOPs for single precision on the 512 SP part.
 
Associate
Joined
14 Jan 2008
Posts
2,057
Location
UK
Wondered how long it would be before we had a leaked GT3XX spec, and while on the face of it it looks nice, I am not holding my breath that it will be delivered.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Think they are underestimating the TFLOPs value... my calculations put it at ~3.8 TFLOPs for single precision on the 512 SP part.

That depends entirely on the core and shader clockspeeds.
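
To illustrate just how much the clocks matter, here is a small sketch of the peak single-precision arithmetic using GT200-style counting (3 FLOPs per SP per clock from the MAD + MUL issue). The shader clocks below are placeholders, not confirmed GT300 specs:

Code:
# Peak single precision = shader count * FLOPs per SP per clock * shader clock.
# 3 FLOPs/clock follows Nvidia's GT200-style MAD + MUL counting; with FMA-only
# counting it would be 2. Clocks are placeholders, not confirmed specs.

def peak_tflops(shader_count: int, flops_per_clock: int, shader_clock_ghz: float) -> float:
    return shader_count * flops_per_clock * shader_clock_ghz / 1000.0

for clock_ghz in (1.5, 1.7, 2.0, 2.5):
    print(f"512 SPs @ {clock_ghz} GHz shader clock: {peak_tflops(512, 3, clock_ghz):.2f} TFLOPs")

With that counting, ~3.8 TFLOPs only falls out if you assume a shader clock around 2.5GHz, which shows how sensitive the figure is to that assumption.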

Anyway, reading that little article does not fill me with confidence. It's very vague, and seems to be written by someone who doesn't fully understand what they're writing about. Of course it could still be genuine, but I doubt it.
 