
nVidia GT300 - GeForce GTX 380 yields are sub-30%

Caporegime
Joined
18 Oct 2002
Posts
33,188
The first lot, as they said, is going to be buggy; it's expected to be buggy, and any working parts at all are considered a bonus... it has little direct correlation to the results of the second run.

I didn't think they started talking about yields until further along the line, so it seems someone has come up with their own interpretation of what the experts were saying.

Personally I'm only gonna start worrying if we see low numbers of working parts off debugged dies, or if they never even manage to get enough working dies to debug at that stage.

Buggy is one thing; frankly the bugs should be few, easy fixes and not a real issue. Yields are a HUGE issue, bigger than bugs. Lots of errata end up in full production chips: the i7/i5/P2/4870/280 all tend to have bugs that need software workarounds, or simply failed features that never work in the final product.

But not being able to produce the chips at all is another matter. With yields you're not necessarily talking about working chips in the sense discussed above; you're talking about dies that, once cut from the wafer, match the engineering design down to the micron, and that will also work properly once the design itself works. A failed chip is most likely silicon that simply didn't come out right. If the fab can't make many of the chips you send them the design for, whether that's a non-working buggy chip or a final production chip, you have a major issue.

Not massively surprised. By all accounts we were supposed to have 40nm 4XXX series parts late last year, then early this year; then, because of the close timing between products, they skipped the proper refresh and went straight to the 5XXX series. TSMC have a crappy process. What's laughable is the reports out of TSMC, repeatedly over the last couple of months, that the process is completely fixed, yields rock and leakage isn't a problem. Such BS.

The 4770 had problems with significantly fewer transistors than the 5870 parts, and the GT300 is still going to make the 5870 series cores look tiny; it was always going to have issues.

I've said it before: it IS TSMC's fault. They promised a product they can't provide, and you can only design to what's been promised to be ready. However, ATi's problems with the 2900XT weren't secret and were ENTIRELY down to TSMC. ATi made a drastic change to their business model because they knew TSMC weren't reliable, and a smaller core is simply easier to make; it was one of many reasons to go to a small core design.

Nvidia knew TSMC sucked; TSMC have screwed Nvidia on multiple processes, so Nvidia not seeing this problem coming is Nvidia's own fault. They really should have switched when ATi did to a small core design, which negates a lot of the problems with a bad process.

EDIT: Rroff, it's also worth pointing out this is in no way their first spin of silicon. ATi will easily be on their 2nd spin, if not their 3rd, and Nvidia have had 1-2 spins fail already. To still be at unreleasable yields, with non-working products after a 3rd/4th spin, is abysmal for Nvidia. I still say it's TSMC's fault, but clock speed and die size are a huge issue for leakage, which is the 40nm process's biggest problem. I do wonder if Nvidia might simply have to go after the same areas as ATi this round, i.e. a lower clocked, smaller core, where the highest end 280GTX equivalent is scrapped completely and they cut down the core size and shader count to produce a smaller core. Nvidia have a far larger core and, problematically, fewer shaders clocked massively higher: ATi's highest clocked parts are what, 750-800MHz on the 5870, while Nvidia, even with no increase from the previous generation, will be at 1.3-1.5GHz on their shaders, again making the leakage worse.
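To put that leakage argument in more concrete terms, here's a toy first-order power model. Every constant and voltage in it is an illustrative guess rather than real data for either GPU; it only shows how transistor count, voltage and shader clock feed the two power terms:

```python
# Toy first-order CMOS power model. The constants, voltages and clocks
# below are illustrative guesses, NOT real GPU data; the point is only
# how the two terms scale.

def gpu_power_w(transistors_bn, volts, shader_ghz, k_dyn=55.0, k_leak=18.0):
    # Dynamic (switching) power: ~ capacitance * V^2 * f, with
    # capacitance taken as proportional to transistor count.
    dynamic = k_dyn * transistors_bn * volts ** 2 * shader_ghz
    # Static leakage: ~ transistor count * V^2 here, a crude stand-in
    # for the real (roughly exponential) voltage dependence. On a leaky
    # process like early TSMC 40nm the effective k_leak is large.
    leakage = k_leak * transistors_bn * volts ** 2
    return dynamic, leakage

# Small, lower-clocked core vs a big core pushed to 1.5GHz shaders,
# assumed to need a little more voltage to get there.
for name, t_bn, v, f in [("small/slow (5870-ish)", 2.15, 1.00, 0.85),
                         ("big/fast (GT300-ish)",  3.00, 1.10, 1.50)]:
    dyn, leak = gpu_power_w(t_bn, v, f)
    print(f"{name}: dynamic ~{dyn:.0f}W, leakage ~{leak:.0f}W, total ~{dyn + leak:.0f}W")
```

Dynamic power grows with V² x f, so a big die chasing 1.5GHz shaders scales both terms at once, which is why a leaky process hurts Nvidia's design style far more than ATi's small, lower-clocked cores.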

The other issue with Nvidia's design style seems to be the massive delay with their mid/low end parts. They could scrap the high end and go all out getting their mid/low end out to fill OEM orders and keep sales up, because that's where the money is anyway, then hope TSMC pull their finger out at some point and actually fix the process rather than just releasing a statement saying they've fixed it.

It's laughable that in a couple of years Nvidia really will have to seriously consider having GloFo make their cards, and they'd probably make more money off higher yields and better processes at an AMD plant than by sticking with TSMC. In fact, considering GloFo just bought out one of TSMC's main rivals, they now have fabs other than the Dresden AMD fabs, so we could be seeing ATi/Nvidia graphics parts made at AMD-run plants far sooner than we all thought. GloFo have the cash to refurbish a lot of the newly acquired smaller fabs with state-of-the-art kit and get manufacturing going pretty quickly.
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
91,158
EDIT: Rroff, it's also worth pointing out this is in no way their first spin of silicon. ATi will easily be on their 2nd spin, if not their 3rd, and Nvidia have had 1-2 spins fail already. To still be at unreleasable yields, with non-working products after a 3rd/4th spin, is abysmal for Nvidia. I still say it's TSMC's fault, but clock speed and die size are a huge issue for leakage, which is the 40nm process's biggest problem. I do wonder if Nvidia might simply have to go after the same areas as ATi this round, i.e. a lower clocked, smaller core, where the highest end 280GTX equivalent is scrapped completely and they cut down the core size and shader count to produce a smaller core. Nvidia have a far larger core and, problematically, fewer shaders clocked massively higher: ATi's highest clocked parts are what, 750-800MHz on the 5870, while Nvidia, even with no increase from the previous generation, will be at 1.3-1.5GHz on their shaders, again making the leakage worse.

The article presented it as if it was the first spin, though I wasn't sure if they were just rehashing old news or not, as this isn't the first time the subject has come up, hence the thread. If they are onto the 2nd or 3rd spin then it's more of an issue.
 
Soldato
OP
Joined
7 May 2006
Posts
12,192
Location
London, Ealing
Originally Posted by future Semi-Accurate article
The downward-spiraling wreckage of fail continues today, as it has been revealed by reliable sources that NVIDIA's "next-gen" GT300 GPU will use more power than any GPU in the history of the industry, no doubt in a reckless, zealous bout to reclaim some semblance of the long-lost glory and prestige the company might once have had (though apparently too long ago for anyone to remember). Instead of any sort of responsible, or dare I say competent, target power envelope for their last desperate effort in the high-end market, NVIDIA has decided to abandon any restraint in gluttony for a staggering 400W consumption level for the card alone, putting even entire desktop machines to shame.

The same source also revealed a few tidbits about the performance characteristics we might expect from such a ghastly beast, and despite high expectations for hardware that could bring an industrial-strength power generator to its knees, reality bites back to paint quite a different picture. Information reveals performance should be around 10% less than the current-gen GTX 295 overall, despite the completely broken and unusable driver suites powering the bloated predecessor. It should then come as no surprise that NVIDIA has not only lost the consumer-confidence crown, as we've become accustomed to, but also the sheer performance race that it seems they alone care about these days. NVIDIA has truly become worthless even by their own standards. Perhaps they should have just renamed their old parts instead, since that seems to be the one thing they're good at.

In other news, NVIDIA sucks, I hate them and they are big fat doodoo heads.

This has no link and is a "preview" of Charlie Demerjian's next article, so it could be fake.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,158
I'm assuming that's half satirical... there's no way it would be 10% less* than the 295GTX, even with horribly broken drivers, if it's even close to the specs nVidia are heading for... the power consumption could be true though.


* The new shader architecture is automatically 30-33% faster shader for shader, clock for clock, than the 200 series, and the high end part is likely to have 512 SPs at a higher clock rate... with the MIMD architecture they are also capable of at least 4x the workload in some situations compared to the 200 series... ROPs and TMUs are at least doubled over the 285GTX as well.
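As a quick sanity check on that footnote, here's a back-of-envelope peak shader throughput comparison. The GTX 285/295 figures are the shipping specs; the GT300 numbers (512 SPs at ~1.5GHz) are just the rumoured ones from above, and I'm counting 2 FLOPs per SP per clock (MAD/FMA) for both architectures to keep it like for like:

```python
# Back-of-envelope peak shader throughput. GTX 285/295 figures are the
# shipping specs; the GT300 line uses the rumoured 512 SPs and an
# assumed ~1.5GHz shader clock. 2 FLOPs/SP/clock (MAD or FMA) is used
# for both architectures for a like-for-like comparison.

def shader_gflops(sps, shader_clock_ghz, flops_per_sp_per_clock=2):
    return sps * shader_clock_ghz * flops_per_sp_per_clock

gtx285 = shader_gflops(240, 1.476)          # single GT200b
gtx295 = 2 * shader_gflops(240, 1.242)      # two down-clocked GT200b chips
gt300  = shader_gflops(512, 1.5)            # rumoured specs only

print(f"GTX 285: {gtx285:6.0f} GFLOPS")
print(f"GTX 295: {gtx295:6.0f} GFLOPS (dual GPU)")
print(f"GT300:   {gt300:6.0f} GFLOPS (rumoured)")
print(f"GT300 vs 295: {gt300 / gtx295:.2f}x peak")
```

Even on raw peak numbers, before the claimed 30-33% per-shader gain or any MIMD benefit, the rumoured part lands roughly 30% ahead of the 295GTX, so a 10% deficit only works as satire.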
 
Last edited:
Associate
Joined
27 Nov 2002
Posts
2,482
Location
Ireland
Ouch, looks bad! Very surprised that it looks like NV have made the same mistake twice in ~16 months, and it sounds like it's even worse this time around.
It is ironic that an ATI/AMD spinoff company, Globalfoundries, could benefit NV against ATI and maybe even Intel. However, I don't see it happening anytime soon.
 
Soldato
Joined
24 Jul 2004
Posts
22,594
Location
Devon, UK
I'm assuming that's half satirical... there's no way it would be 10% less than the 295GTX, even with horribly broken drivers, if it's even close to the specs nVidia are heading for... the power consumption could be true though.

I can't see it being true either.

That is, unless they're scrapping the jump to 40nm and trying to shoehorn as much power onto a 55nm die as they can.

That might well do it.
 
Associate
Joined
27 Jan 2009
Posts
321
Location
Cambridge, UK
Buggy is one thing; frankly the bugs should be few, easy fixes and not a real issue. Yields are a HUGE issue, bigger than bugs. Lots of errata end up in full production chips: the i7/i5/P2/4870/280 all tend to have bugs that need software workarounds, or simply failed features that never work in the final product.

I can't remember from my uni degree (hah, wonder why... :D), but isn't yield an inverse square law or some such of the die size (horrible statistics stuff)? So Nvidia are really shooting themselves in the foot again... Like I said before, having the 280 die at the limit of TSMC's process = not good. TSMC clearly does not have Intel's R&D might/funding and process QC.

Just to give us folks a better idea of the yield problem facing Nvidia... Isn't the wafer cost fixed, scaling down over the lifespan of a part? So 4 wafers @ $5,000 each (?? it would be much more, of course, for short early runs) with a 1.7% yield = 7 good dies, which works out at a staggering $2,857 each!! :eek::eek::eek: (Figures plucked out of the air, but you get the idea...)
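On the "inverse square law" question: the usual back-of-envelope is the Poisson defect model, where yield falls exponentially with die area, so a big die gets hit twice, with fewer candidate dies per wafer and a smaller fraction of them working. A minimal sketch, with the $5,000 wafer price and the defect density as purely illustrative guesses:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    # Classic Poisson model: fraction of dies with zero fatal defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Crude estimate ignoring edge losses and rectangular packing.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

WAFER_COST = 5_000   # $ per 300mm wafer, illustrative (as in the post above)
DEFECTS    = 0.4     # fatal defects per cm^2, illustrative

for name, die_mm2 in [("~140mm2 (4770-ish)", 140),
                      ("~330mm2 (5870-ish)", 330),
                      ("~500mm2 (GT300-ish)", 500)]:
    n = dies_per_wafer(300, die_mm2)
    y = poisson_yield(DEFECTS, die_mm2 / 100)   # mm^2 -> cm^2
    good = n * y
    print(f"{name}: {n} candidates, {y:5.1%} yield, "
          f"~${WAFER_COST / good:,.0f} per good die")
```

With the same defect density, the cost per good die goes from under $20 on the small die to a few hundred dollars on the big one; crank the defect density up to early-40nm levels and you quickly reach silly per-part numbers like the one above.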

Time for Nvidia to scrap the 380GTXi(Limited Edition) die and produce some 300GUFF(lite) dies? :cool:

Bob
 
Man of Honour
Joined
13 Oct 2006
Posts
91,158
I can't see it being true either.

That is, unless they're scrapping the jump to 40nm and trying to shoehorn as much power onto a 55nm die as they can.

That might well do it.

TBH, and strictly IMO, they would be better off reworking the 200 series for DX11 and finding a compromise on the die size... dunno if it's possible to get 45 or 50nm processes... they can already match ATI on the fillrates, and that should give them enough space to up the number of SPs to compensate.
 
Soldato
Joined
24 Jul 2004
Posts
22,594
Location
Devon, UK
The thing with going to something like 45 or 50nm is that I don't think any fab has tried those particular nodes, so it's something that would have to be worked up from scratch.

If it's going as wrong as Charlie says (which I doubt, but IF), then they should just shrink the GTX275/285/295 to 40nm, add DX11, and re-release.

Sure, they'll lose the performance crown I expect, but then they can go back to the drawing board and look at ways to start getting power consumption and heat down, and efficiency up.
 