Larrabee will be a highly programmable, x86-compatible GPGPU monster. It won't be a gaming card.
Mark my words.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Firstly, the card had AA hardware that couldn't be used unless DX10.1 (the original spec) was used in games, so it had to shift AA from hardware into software done in programmable shaders. To say DX10 wasn't responsible for that is to simply not know what you're talking about.
To say it was unbalanced is again incorrect. It WOULDN'T have had more shaders without the ring bus in; it had the right amount of shaders and everything else it needed. It's still a VERY good card, and in DX10.1 it beats the 8800GTX quite easily.
I bet they'll be £700 for a card. Ridiculous.
Don't be stupid. No one would ever buy the card. That's hardly good business sense to sell a card for £700. Not even Nvidia would be that stupid really.
You're right... that would be stupid. http://www.overclockers.co.uk/showproduct.php?prodid=GX-203-AS&groupid=701&catid=56&subcat=1324
drunkenmaster said: Hardly a bad card. As said, the DX10 hatchet job screwed the card pretty badly, added to TSMC screwing up 65nm (as they screwed the pooch on 55nm, and now 40nm, and probably earlier processes before I was reading news about it). It's a fantastic design in everything except its size/price. The implementation, with both manufacturing and Nvidia/MS/DX10 against it, meant it would never be as good as it should have been. Considering what it was designed for and how it had to fight in the end, the fact it was so close to the 8800GTX in so many games, and beat it in some, is a testament to how good a design it was.
I'm assuming... though I've not actually researched it... that ATI intended with the 2900 that "AA" would be applied as part of the deferred shader pipeline using an edge-detection filter. A rather odd approach; if that's the case, I can only assume they misunderstood where MS were going with DX10.1 and thought it could be used to overload older functions too.
anandtech said:CFAA and No Fixed Resolve Hardware
That's right, R600 doesn't have hardware dedicated to resolving MSAA in the render back end - the only MSAA related tasks handled in the render back end are compression and evaluation of the subpixels. All antialiasing resolve is performed on the shader hardware. Certainly, AMD would prefer we start by telling you about the neat custom resolve filters that can be implemented on their shader hardware, but we would rather speculate about this for a moment first.
AMD has stated that, moving forward, in addition to allowing programmable sample patterns, future DX versions may allow for custom resolve filters as well. This is cited as one of the reasons why R600 uses shader hardware to resolve AA samples. AMD has given us a couple different resolve filters to play with, which we'll talk about in a minute. But at a point where we're seeing the first DX10 hardware from graphics makers, and at a time where competitive performance is paramount, it doesn't seem like the decision we would have made.
Whatever the circumstances, R600 sends its pixels back up from the render back ends to the shader hardware to combine subpixel data into a final pixel color. In addition to the traditional "box" filter (which uses subpixels within the area of a single pixel), the new driver offers the ability to use subpixel data from neighboring pixels resolved with a tent filter (where the impact of the subpixels on final color is weighted by distance). AMD calls this CFAA for custom filter antialiasing.
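The box vs. tent resolve the article describes can be sketched in a few lines. This is an illustrative sketch, not ATI's actual filter: `tent_resolve` is a hypothetical helper that weights each subsample by a tent kernel (weight falling off linearly with distance from the pixel centre), with the traditional box filter as the equal-weight case.

```python
def tent_resolve(samples, radius):
    """Resolve one pixel from MSAA subsamples with a tent filter.

    samples: list of ((dx, dy), colour) pairs, where (dx, dy) is the
             subsample's offset from the pixel centre; colour is a
             scalar here for simplicity.
    radius:  distance at which a subsample's weight falls to zero.
    """
    acc, total_w = 0.0, 0.0
    for (dx, dy), colour in samples:
        dist = (dx * dx + dy * dy) ** 0.5
        weight = max(0.0, 1.0 - dist / radius)  # tent kernel
        acc += weight * colour
        total_w += weight
    return acc / total_w

def box_resolve(samples):
    """Traditional box filter: every subsample weighted equally."""
    return sum(colour for _, colour in samples) / len(samples)
```

Widening the radius pulls in subsamples from neighbouring pixels, which is the idea behind the "wide tent" CFAA modes; a narrow radius converges back towards the box result.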
if this card ever drops to 500 quid then i'm getting it........sod DX11, it's not needed.....you can barely tell the difference, plus i doubt many future games will be DX11............will it drop to 500 quid ?................never
While nVidia did get sections of the DX10 spec changed, to blame the poor performance of the R600 on that is a bit misguided. The R600 was just a badly balanced design, with things such as huge amounts of the die being taken up by the ring bus and 512 bit memory interface, leaving no room for more shaders, TMUs etc.
That alongside poor AA execution, the dire 80nm manufacturing process and teething problems with the new architecture (not to mention the very recent takeover of ATi by AMD at the time), was really what made the R600 a bit of a flop.
Think they're underestimating the TFLOPs figure... my calculations put it at ~3.8 TFLOPs single precision for the 512 SP part.
That depends entirely on the core and shader clockspeeds.
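For what it's worth, the arithmetic behind such estimates is just SPs × FLOPs-per-clock × shader clock. A back-of-the-envelope sketch in Python, assuming G80/GT200-style issue of 3 FLOPs per SP per clock (MAD + MUL) and a purely hypothetical ~2.5 GHz shader clock, since as the reply says, none of the clocks are confirmed:

```python
def peak_tflops(shader_procs, flops_per_sp_per_clock, shader_clock_ghz):
    """Theoretical single-precision peak in TFLOPs.

    flops_per_sp_per_clock is 3 on G80/GT200 (MAD + MUL dual issue);
    the real figure for an unreleased part is an assumption.
    """
    return shader_procs * flops_per_sp_per_clock * shader_clock_ghz / 1000.0

# 512 SPs at a hypothetical 2.5 GHz shader clock:
print(peak_tflops(512, 3, 2.5))  # ~3.84 TFLOPs
```

Drop the clock to 1.5 GHz and the same part lands around 2.3 TFLOPs, which is why the figure swings so much on rumoured clocks.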