
GTX680 to arrive at the end of February!

Associate
Joined
31 Mar 2010
Posts
790
Honestly I find the argument that, since most games don't use 2GB of VRAM today, more is pointless, to be silly. Firstly, it doesn't seem to actually be true, or if it is, it's only by a slim margin. Secondly, I am not planning on upgrading every generation; I want my GPU to last as long as it can. Just because 2GB may be satisfactory today does not mean that it will be tomorrow.
 
Soldato
Joined
13 Mar 2008
Posts
9,638
Location
Ireland
Honestly I find the argument that, since most games don't use 2GB of VRAM today, more is pointless, to be silly. Firstly, it doesn't seem to actually be true, or if it is, it's only by a slim margin. Secondly, I am not planning on upgrading every generation; I want my GPU to last as long as it can. Just because 2GB may be satisfactory today does not mean that it will be tomorrow.

That's how I feel as well. It reminds me of people back in the day saying you don't need 256MB of RAM because nothing uses more than 128MB.

If my old GPU hadn't died, I'd not have the GTX 470. Even that struggles at 2560x1440.

I've not even bought BF3 yet as I know it'll probably roll over and play dead.
I also want my GPU to last a few years, so I want good performance and a good amount of RAM just to future-proof a bit.

I'm hoping that Kepler brings a good boost over the current lineup. I'll be saving my money until it's out before I decide on anything. I'd hope for 2GB at a minimum on Kepler.
 
Man of Honour
Joined
13 Oct 2006
Posts
90,806
http://news.techeye.net/chips/nvidias-kepler-suffers-wobbly-perturbations

Not looking good for nvidia getting things out sooner rather than later. Sounds like a waste of time to me.

That's a load of rubbish tbh - possibly, as it's so laughably wrong, it was put out to see who is passing off information without understanding it.

"Some industry watchers suggest that Nvidia gave up a lot of space on its chip, trying to buff up Kepler by bringing Ageia to the hardware."

PhysX takes up no additional space on the die whether it's supported or not.

"But the murmurs suggest Nvidia has been dedicating a lot of resources to get physics and fluid dynamics operating properly, which has so far, allegedly, taken half of its gaming engineers and six months to get right."

These are not fixed-function hardware features but are run on the CUDA cores with no specific hardware modification required - the software implementation of PhysX, which includes fluid dynamics, is complete and has been done and dusted for a very long time now. There may be some specific optimisations to increase fluid dynamics simulation performance - I'm not sure on that - but that would be entirely incidental to the development of Kepler.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
That's a load of rubbish tbh - possibly, as it's so laughably wrong, it was put out to see who is passing off information without understanding it.

"Some industry watchers suggest that Nvidia gave up a lot of space on its chip, trying to buff up Kepler by bringing Ageia to the hardware."

PhysX takes up no additional space on the die whether it's supported or not.

"But the murmurs suggest Nvidia has been dedicating a lot of resources to get physics and fluid dynamics operating properly, which has so far, allegedly, taken half of its gaming engineers and six months to get right."

These are not fixed-function hardware features but are run on the CUDA cores with no specific hardware modification required - the software implementation of PhysX, which includes fluid dynamics, is complete and has been done and dusted for a very long time now. There may be some specific optimisations to increase fluid dynamics simulation performance - I'm not sure on that - but that would be entirely incidental to the development of Kepler.

While it's unlikely, don't forget there have been a couple of rumours that Nvidia are going to make "Wall Street look like Ghandi" with some upcoming games..... with the implication being they have some big "sabotage the opposition" plans coming.

While Nvidia's implementation of PhysX isn't hardware-specific, you're wrong to say that PhysX can't be fixed function; just because you CAN run something on software-controlled shaders with CUDA doesn't mean it can't be magnitudes faster done on specific accelerated hardware.

Considering Nvidia is getting big into GPGPU, there are PLENTY of companies out there who could use acceleration for things like physics, and specifically stuff like fluid dynamics.

Sticking dedicated hardware to accelerate specific functions could massively increase the effective speed of a card in situations that could use them.

So could Nvidia be dedicating transistors to hardware-accelerated functions? Hardly new - AMD and Nvidia do that for many things. It depends how many, and what they'd need, to make it "worthwhile".

It could definitely help in GPGPU situations, and if their biggest GPGPU customers are interested in more dedicated hardware acceleration to speed things up, that is entirely an avenue Nvidia would look down.... It really depends, as said, on how much space and how many transistors it would take up versus how much use it would get and how much "normal" performance it would cost when it's not able to be used.

However, if they had some hardware acceleration they could run PhysX faster and with more detail, likely costing little to no FPS as it wouldn't be done on shaders, and it would give them a very good way to stick it to AMD: pay devs to use PhysX for everything, make it magnitudes faster on Nvidia hardware, then watch benchmarks done at max quality and laugh at AMD coming nowhere near.
 
Man of Honour
Joined
13 Oct 2006
Posts
90,806
I'm not saying PhysX can't be fixed function (the original Ageia PPU was largely fixed function) - I'm saying nVidia haven't implemented it like that - it would take a massive rewrite of the API and sacrifice a huge amount of the GPU die for very marginal gains (unless applications go all out on implementing physics properly at an intrinsic level).

nVidia could bet the boat on a GPU that had fixed-function physics hardware and pay a few devs to make games with movie-level physics interaction, in the hope that once people have experienced that level of physics they wouldn't want to go back - which, if it took off, would leave AMD in a very bad position - but I don't see it happening.

Fluid dynamics is a big one though - if you look at flowing water as usually implemented in a game, even one with a hardware PhysX implementation, it looks very cheap and nasty compared to CGI water in a movie. That's one thing I definitely wouldn't want to go back from after seeing properly done water in a game.


EDIT: To put it into perspective, if you implemented PhysX with fixed-function hardware in the Kepler core to viably handle next-generation physics simulations, you're talking in the region of 900M extra transistors - possibly a little less, as I'm unsure on the exact details of some bits of it.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
While Nvidia's implementation of PhysX isn't hardware-specific, you're wrong to say that PhysX can't be fixed function; just because you CAN run something on software-controlled shaders with CUDA doesn't mean it can't be magnitudes faster done on specific accelerated hardware.

Well, yes it does... Physics computations are parallel floating-point operations. If you want to perform floating-point operations with a GPU then you will be using the shader cores; that's the way GPUs perform floating-point arithmetic. The precise description of ANY computation is given by software and executed by the shader cores.

... Unless you're suggesting that Nvidia are reverting back to a pre-8800GTX design that reintroduces fixed-function logic for MADD operations? :confused:
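To make that concrete, here is a minimal sketch of my own (NOT actual PhysX or NVIDIA code - the particle count and time step are made up for illustration): a physics-style integration step is just an ordinary CUDA kernel, plain data-parallel floating-point arithmetic described in software and executed by the shader cores.

Code:
// A minimal sketch, not PhysX code: a physics-style integration step written as
// an ordinary CUDA kernel. Nothing here is fixed-function - it is just parallel
// floating-point multiply-adds executed by the shader (CUDA) cores.
#include <cstdio>
#include <cuda_runtime.h>

struct Particle { float x, y, z, vx, vy, vz; };

__global__ void integrate(Particle *p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Semi-implicit Euler under gravity: one particle per thread.
    p[i].vy += -9.8f * dt;
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main() {
    const int n = 1 << 16;                        // 65,536 particles (made-up size)
    Particle *hp = new Particle[n];
    for (int i = 0; i < n; ++i) {
        hp[i].x = 0.0f;  hp[i].y = 100.0f; hp[i].z = 0.0f;
        hp[i].vx = 1.0f; hp[i].vy = 0.0f;  hp[i].vz = 0.0f;
    }

    Particle *dp;
    cudaMalloc(&dp, n * sizeof(Particle));
    cudaMemcpy(dp, hp, n * sizeof(Particle), cudaMemcpyHostToDevice);

    const float dt = 1.0f / 60.0f;
    for (int step = 0; step < 60; ++step)         // one simulated second at 60 Hz
        integrate<<<(n + 255) / 256, 256>>>(dp, n, dt);

    cudaMemcpy(hp, dp, n * sizeof(Particle), cudaMemcpyDeviceToHost);
    printf("particle 0 after 1s: y = %f\n", hp[0].y);

    cudaFree(dp);
    delete[] hp;
    return 0;
}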


Considering Nvidia is getting big into GPGPU, there are PLENTY of companies out there who could use acceleration for things like physics, and specifically stuff like fluid dynamics.

Sticking dedicated hardware to accelerate specific functions could massively increase the effective speed of a card in situations that could use them.

I write code for CFD simulations for a living, so I know a thing or two about this stuff. Suggesting that somehow a dedicated "fluid dynamics" gizmo could be added to a GPU is utter rubbish.


Fluid dynamics simulation - as performed commercially - is all about the rapid solution of sparse linear systems. This is true for all practical approaches (finite volume, finite element etc). The two main computational aspects to the problem are:

a) Matrix preconditioning
b) Solution of the linear system

The algorithms to perform these operations are built up from a standard linear-algebra toolbox covering such basics as matrix-vector multiplication. The most common of these toolboxes is known as BLAS, and there has been a direct CUDA wrapper for BLAS for over three years now. Having a CUDA BLAS library has allowed thousands of relatively simple research codes to take advantage of GPU acceleration.
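For illustration, a minimal sketch assuming the cuBLAS library (the CUDA BLAS wrapper mentioned above) - the matrix contents and sizes here are made up - showing exactly that basic building block, a matrix-vector multiply offloaded to the GPU:

Code:
// A minimal sketch assuming cuBLAS: a single dense matrix-vector multiply,
// y = A*x, run on the GPU via the classic BLAS level-2 "gemv" routine.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 4;
    // cuBLAS follows the BLAS convention: matrices are stored column-major.
    std::vector<float> hA(n * n), hx(n, 1.0f), hy(n, 0.0f);
    for (int col = 0; col < n; ++col)
        for (int row = 0; row < n; ++row)
            hA[row + col * n] = (row == col) ? 2.0f : 1.0f;   // made-up test matrix

    float *dA, *dx, *dy;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // y = alpha * A * x + beta * y
    cublasSgemv(handle, CUBLAS_OP_N, n, n, &alpha, dA, n, dx, 1, &beta, dy, 1);

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("y[%d] = %f\n", i, hy[i]);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dx); cudaFree(dy);
    return 0;
}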

At the level above this lie complex solver libraries. This is where most of the development work is being done right now (e.g. the PETSc library, which underpins many large commercial codes). Once the more sophisticated "standard" solvers are easily accessible via CUDA, a whole range of complex scientific codes will be able to take advantage of GPGPU computing (of which fluid dynamics is only a subset).
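And a minimal sketch of that solver layer - not taken from PETSc or any commercial code - an unpreconditioned conjugate gradient solve of A x = b built almost entirely from cuBLAS calls, using a tiny made-up SPD test matrix and a dense matrix-vector product where a real CFD code would use sparse storage and a preconditioner. The point is that the solver layer is software orchestration over the same BLAS building blocks:

Code:
// A minimal sketch, not production code: unpreconditioned conjugate gradient
// built from cuBLAS level-1/level-2 calls, solving A x = b for a small SPD
// test matrix (a dense 1D Laplacian stands in for a real sparse system).
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 8;
    // Dense tridiagonal (2, -1) Laplacian, column-major, symmetric positive definite.
    std::vector<float> hA(n * n, 0.0f);
    for (int i = 0; i < n; ++i) {
        hA[i + i * n] = 2.0f;
        if (i > 0)     hA[i + (i - 1) * n] = -1.0f;
        if (i < n - 1) hA[i + (i + 1) * n] = -1.0f;
    }
    std::vector<float> hb(n, 1.0f), hx(n, 0.0f);

    float *dA, *db, *dx, *dr, *dp, *dAp;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dr, n * sizeof(float));
    cudaMalloc(&dp, n * sizeof(float));
    cudaMalloc(&dAp, n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t h;
    cublasCreate(&h);
    const float one = 1.0f, zero = 0.0f;

    // x0 = 0, so r = b and p = r.
    cublasScopy(h, n, db, 1, dr, 1);
    cublasScopy(h, n, dr, 1, dp, 1);
    float rsold = 0.0f;
    cublasSdot(h, n, dr, 1, dr, 1, &rsold);

    for (int k = 0; k < 100; ++k) {
        cublasSgemv(h, CUBLAS_OP_N, n, n, &one, dA, n, dp, 1, &zero, dAp, 1); // Ap = A*p
        float pAp = 0.0f;
        cublasSdot(h, n, dp, 1, dAp, 1, &pAp);
        float alpha = rsold / pAp, nalpha = -alpha;
        cublasSaxpy(h, n, &alpha, dp, 1, dx, 1);    // x += alpha * p
        cublasSaxpy(h, n, &nalpha, dAp, 1, dr, 1);  // r -= alpha * Ap
        float rsnew = 0.0f;
        cublasSdot(h, n, dr, 1, dr, 1, &rsnew);
        if (std::sqrt(rsnew) < 1e-6f) break;        // converged
        float beta = rsnew / rsold;
        cublasSscal(h, n, &beta, dp, 1);            // p = beta * p
        cublasSaxpy(h, n, &one, dr, 1, dp, 1);      // p += r
        rsold = rsnew;
    }

    cudaMemcpy(hx.data(), dx, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("x[%d] = %f\n", i, hx[i]);

    cublasDestroy(h);
    cudaFree(dA); cudaFree(db); cudaFree(dx); cudaFree(dr); cudaFree(dp); cudaFree(dAp);
    return 0;
}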


So you see, the key to the successful application of CFD on a commercial scale lies mainly in software. We have been developing code for serial and small-scale parallel CPU clusters for 30 years now. The algorithms for efficient massively-parallel computations, GPGPU style, are lagging well behind the hardware.

There are, of course, improvements to be made on the hardware side. But, for the purposes of improving existing CFD simulations this is an exercise in general compute efficiency. It's about executing threads more efficiently, and passing data between parallel threads more rapidly. These are the things both AMD and Nvidia are looking to bring to their compute-oriented architectures.
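As a rough illustration of what "passing data between parallel threads" means in practice (a minimal sketch assuming CUDA; the vector size is made up), here is the standard block-level reduction pattern, where threads exchange partial results through on-chip shared memory rather than slow off-chip memory - exactly the kind of general compute plumbing those architectural improvements target:

Code:
// A minimal sketch: block-level reduction using __shared__ memory. Threads
// within a block exchange partial sums through on-chip storage and barriers,
// producing one partial sum per block which the host then totals.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void block_sum(const float *in, float *partial, int n) {
    __shared__ float cache[256];                  // on-chip memory shared by the block
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (tid < n) ? in[tid] : 0.0f;
    __syncthreads();                              // make every thread's value visible
    // Tree reduction: threads pass data to each other via shared memory each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        partial[blockIdx.x] = cache[0];           // one partial sum per block
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *din, *dpart;
    cudaMalloc(&din, n * sizeof(float));
    cudaMalloc(&dpart, blocks * sizeof(float));

    float *hin = new float[n];
    for (int i = 0; i < n; ++i) hin[i] = 1.0f;
    cudaMemcpy(din, hin, n * sizeof(float), cudaMemcpyHostToDevice);

    block_sum<<<blocks, threads>>>(din, dpart, n);

    float *hpart = new float[blocks];
    cudaMemcpy(hpart, dpart, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    double total = 0.0;
    for (int i = 0; i < blocks; ++i) total += hpart[i];
    printf("sum = %f (expected %d)\n", total, n);

    delete[] hin; delete[] hpart;
    cudaFree(din); cudaFree(dpart);
    return 0;
}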



... And just to help you keep track, this is another one of those times that you're talking rubbish about things you don't remotely understand (re this discussion).
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
While it's unlikely, don't forget there have been a couple of rumours that Nvidia are going to make "Wall Street look like Ghandi" with some upcoming games..... with the implication being they have some big "sabotage the opposition" plans coming.

HAHAHAHAHA talk about trying to push an agenda and blowing your own trumpet..

Surely you're referring to this:

http://semiaccurate.com/forums/showpost.php?p=149788&postcount=323

Charlie said:
This time around, Nvidia's games are going to make Wall Street look look like Ghandi. I am in shock.

-Charlie

followed by this: http://semiaccurate.com/forums/showpost.php?p=149814&postcount=330
Drunkenmaster said:
Charlie said:
This time around, Nvidia's games are going to make Wall Street look look like Ghandi. I am in shock.

-Charlie

Does that mean you're hearing about a lot of Nvidia sabotage in games, IE removing or slowing features for AMD, or more like paying for even more games to have basic crap removed and re-added with slow Physx coding?

Look at how he's literally writing the words for the guy now. Even trying to stuff words into his mouth...

So the alleged rumours that "Nvidia are going to make "Wall Street look like Ghandi" with some upcoming games" were first suggested by Charlie Demerjian, who, frankly, is a joke.... And THEN they were, in true AMD-Crusader style, interpreted by DrunkenMaster as Nvidia having "some big "sabotage the opposition" plans coming"...

Despite hearing no confirmation of this, he proceeds to assume that it has now become fact, and then talks about these reported rumours (THAT HE HIMSELF INVENTED!) as if they were confirmed and irrefutable fact.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Well, yes it does... Physics computations are parallel floating-point operations. If you want to perform floating-point operations with a GPU then you will be using the shader cores; that's the way GPUs perform floating-point arithmetic. The precise description of ANY computation is given by software and executed by the shader cores.

... Unless you're suggesting that Nvidia are reverting back to a pre-8800GTX design that reintroduces fixed-function logic for MADD operations? :confused:

So you see, the key to the successful application of CFD on a commercial scale lies mainly in software. We have been developing code for serial and small-scale parallel CPU clusters for 30 years now. The algorithms for efficient massively-parallel computations, GPGPU style, are lagging well behind the hardware.

There are, of course, improvements to be made on the hardware side. But, for the purposes of improving existing CFD simulations this is an exercise in general compute efficiency. It's about executing threads more efficiently, and passing data between parallel threads more rapidly. These are the things both AMD and Nvidia are looking to bring to their compute-oriented architectures.



... And just to help you keep track, this is another one of those times that you're talking rubbish about things you don't remotely understand (re this discussion).

Seriously, you use another completely incorrect post by Xsistor to try and make your point that I'm wrong.

Here's a hint: if you can get 800MHz at 0.9V, 900MHz at 0.95V, 1000MHz at 1.05V, 1100MHz at 1.2V, 1200MHz at 1.5V, etc, etc....... you'll find that is most definitely exponential. His entire diatribe is both hilariously stupid and hilariously wrong, as is everything he ever posts in response to me. I called him stupid and got banned because he and his dupe account both laughed at what can really nicely be put as a complete inability to read. I stated that I specifically DID NOT have to post a mathematical proof to be able to then state that 9.8 is the value for gravity. To which I think there were maybe 4 posts from Krugga and Xsistor incorrectly assuming I said that WAS a mathematical proof.

Every single time Xsistor has said I was wrong, he's been incapable of reading what I said, which makes basing your arguments on what he said... misguided.

As for your diatribe of more nonsense: no. Shaders are both complex and very simple, and you can without question speed up ANY calculation by adding more complex units that do several calculations in one, magnitudes faster. The issue is always how much faster, how often it gets used, and what is the better option overall: more smaller, wider-use compute parts, or fewer bigger, specific and not-always-useful parts. Saying you can't proves you know absolutely nothing.

Just because better algorithms would speed up fluid dynamics calculations on GPUs DOES NOT MEAN that better-designed, more complex and more specific hardware couldn't do it faster.

This is the case for almost anything you do on a computer. It's really as simple as this: you can add 1 + 1 + 1 one step at a time, or have a more complex shader unit that does all of that in one go. I'd be not just shocked but downright amazed if there were a single calculation that couldn't be sped up.

But that is where AMD and Intel are at with CPUs and GPUs: multifunction, and simple enough that almost any software can be ported to work on them, even if many calculations take many clocks and many separate operations.

Look at video acceleration: everything, and I mean everything, Intel do with QuickSync can be done on a GPU, just slower. What is a very small block of hardware is many times faster at that job than AMD's or Nvidia's biggest GPU; it's also highly limited. These are the trade-offs.

I didn't bring up fluid dynamics, the ARTICLE did, and I wasn't saying Rroff was wrong about it being unlikely - just wrong in that, while PhysX can be run in software on just about anything, it can and would run faster on hardware built specifically to run it, and that could give Nvidia a MAJOR advantage. The article isn't incorrect.... just unlikely.


As for Xsistor again and his ridiculous TDP argument: yes, he was spouting utter crap and misreading everything I wrote, either on purpose to cause an argument or out of sheer inability to read........ and he insists repeatedly that I don't know what TDP is, based on his interpretation of things I never said.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
HAHAHAHAHA talk about trying to push an agenda and blowing your own trumpet..

So the alleged rumours that "Nvidia are going to make "Wall Street look like Ghandi" with some upcoming games" were first suggested by Charlie Demerjian, who, frankly, is a joke.... And THEN they were, in true AMD-Crusader style, interpreted by DrunkenMaster as Nvidia having "some big "sabotage the opposition" plans coming"...

Despite hearing no confirmation of this, he proceeds to assume that it has now become fact, and then talks about these reported rumours (THAT HE HIMSELF INVENTED!) as if they were confirmed and irrefutable fact.

More inability to read, I see - does it ever stop with you?

Charlie
This time around, Nvidia's games are going to make Wall Street look look like Ghandi. I am in shock.

-Charlie


So please, pray tell, what did he mean by this?

While it's unlikely, don't forget there have been a couple of rumours that Nvidia are going to make "Wall Street look like Ghandi" with some upcoming games..... with the implication being they have some big "sabotage the opposition" plans coming.

Please show me where I said this was FACT. Go on.... was it the word "rumour"? Was it the word "implication"? I'm failing to see the word "fact", and I'm still waiting, as most people have been, for the Nvidia trolls to post proof of all the times Charlie was wrong. You guys claim he's a joke and frequently wrong..... everyone asks you for a single shred of proof of this, the threads go quiet, and then you're all back in the next thread calling him a joke.

You DID claim that Charlie was a joke, yet that is one claim you can't prove.


So you've both lied, misread, and made a false claim, AGAIN.

Let's point this out: I haven't assumed anything as fact, nor claimed it - lie number 1. Lie number 2: Charlie's a joke (yes, I'm again interpreting that as: he's always wrong, an AMD shill, and makes crap up...... prove it). Lie number 3: I was blowing my own trumpet?

It's funny: the rumour is from Charlie; I made no claim to this rumour, no claim to finding it on my own, no claim whatsoever, and I didn't link to my own posts..... how on earth was that blowing my own trumpet?

Please, again, LEARN TO READ. I posted CHARLIE'S RUMOUR, and I posted what I assume it to mean. IF I'm misreading what he said so badly, please post what you think Charlie meant by:

This time around, Nvidia's games are going to make Wall Street look look like Ghandi. I am in shock.

-Charlie

Go on, I'd LOVE to see what you think this means and how I inferred something so drastically unrelated from it. Seriously, post ONE possibility that isn't about Nvidia sabotaging games somehow. Wall Street look like Ghandi - hmm, maybe I'm misreading that. Ghandi was evil, right.... and Wall Street are known to be generally top people, so Nvidia is going to make Nvidia's games look like the most honest thing since Jesus?
 