Caporegime · Joined: 18 Oct 2002 · Posts: 33,188
I know quite a few users here who bought the HD 2900 and defended it; I'm not going to name names, they know who they are. I think the 3870 X2 was one of the most popular cards for a while, even though it wasn't really much faster than Nvidia's single-GPU offerings.
But at least they made up for it with the 4 series and hopefully again with the 5 series.
The 2900 XT was a great card. Its ONLY flaws were high power consumption, maybe 10% less performance than it should have had, and a price a touch too high. Why? Because it was a 90nm card that should have been on the 65nm process. The reason they didn't get it onto 65nm is the SAME reason the GT300 isn't out now: TSMC screwed ATi then, they're screwing Nvidia now, and they screwed Nvidia on the GT200 die shrink.
The 2900 XT was faster than the 8800 GTX in Bioshock in DX10. Don't forget, the 2900 XT is DX10.1; the 8800 GTX doesn't support a lot of the features it does. Keep in mind the 2900 XT is faster than the 8800 GTX in Assassin's Creed when using DX10.1, as it would have been in every single game that SHOULD have used DX10.1. It's like the ultimate case of TWIMTBP: "our card will be slower because we screwed up DX10, so let's ask Microsoft to change DX10, the 2900 XT loses the 20% performance it should have had in DX10 games, woo for us."
It's still a great card. On 65nm it would have been 20% cheaper, with 10% higher clocks, the full original DX10 spec, and 40% lower power consumption; without TSMC screwing them it would have been better than the 8800 GTX from launch in every major title released since Vista.
This is why I point out in threads, in Nvidia's defence, that their cards, both new and current, being so late isn't their fault; it's TSMC's.
The lack of real DX10 for the past two years, killing a lot of performance and degrading IQ because of no tessellation, is a real kick in the teeth. Why Microsoft ever agreed to it I have no clue. Well, I do I guess: a big bag of money somewhere.
Frankly, I'd go Nvidia again if they had a better price/performance bracket than ATi's cards. They won't though. For the sake of competition, I hope that in the generation after GT300 Nvidia finally switches to a small-core strategy like ATi's, because it means better value for money for everyone, and the sooner both companies are making cores that run in clusters like ATi's, the sooner programmers will get the full performance available from that kind of design. Right now it's easier to code for a bigger, less complex Nvidia-style core, so they aren't fighting for every last drop of performance from the ATi design. If both had a similar setup, like we had several years ago, we'd see better performance.
But the size, the lateness and the crappiness of TSMC mean GT300 ain't gonna be particularly cheap to produce.