GeForce GTX 590 Key Features Revealed

Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
NFGM.

BTW, CUDA usage is not ubiquitous in all areas of computer science.

Hehe, well, computer scientists (the real ones) only need paper and a pen to do all their work and keep them happy :p ... that's all computer scientists ever need. CUDA is more useful in very applied fields like engineering and applied physics, but it's certainly not ubiquitous. The fields that use it tend to do so because they're dealing with problems governed by large systems of equations.

And hey, jigger, I'm just replying to posts etc. You've been on this topic at least as long... you could start that thread :o
 
Last edited:
Permabanned
Joined
31 May 2007
Posts
10,721
Location
Liverpool
No, it's thriving because it's exactly what research needs. I used it myself recently in my work on fractal geometry, and in solving nonlinear equations. My supervisor and his team use it for image processing. I had the option of using OpenCL, but it was a lot better to cut out the middleman and just go with CUDA.
So explain to me, how would it have been possible otherwise? As I keep saying, the reason it's thriving is because it was the only option for a long time, and therefore was the only one that could be used; you don't have to be a genius to understand this. If it's the only option, what else could you have used? I know you say you've recently had the chance to use OpenCL, but that doesn't mean CUDA's position isn't down to it having been around a lot longer.


All that proves is that nVidia are happy to publicly contradict themselves. They claim to be okay with PhysX and CUDA running alongside AMD, yet they set their drivers to disable it when an AMD card is detected in the PC? That doesn't make much sense, does it?


As for computer animation, that's what the GPU does natively. What CUDA is useful for, on top of that, is the stuff that's clunky to manage in something like HLSL, which you can use perfectly well for animation.
I get the feeling you're not quite sure what you're talking about, based on what you've just said. Computer animation isn't what the GPU does natively: traditionally, computer animations are rendered on a CPU as single frames, then strung together to make scenes. Using GPU computing, those final frames are rendered much, much faster than on a CPU.


The industry has long been looking for a killer app that would make GPU algorithms viable for mainstream desktop users, but no such app has been found. The problem lies in the fact that everyday apps are just not easy to convert into massively parallel algorithms. Some problems are simply better suited to parallelisation, such as the discrete Fourier transform, which is useful for solving partial differential equations.

Well, arguably, what "mainstream" use needs more power? Video editing, maybe? I know there's CUDA acceleration for that, but from what I've heard, the output isn't particularly good. Outside of that, the majority of "mainstream" software (the kind the average person is likely to use) isn't going to benefit massively from something like CUDA or even OpenCL.
 
Soldato
Joined
9 Nov 2009
Posts
24,824
Location
Planet Earth
Having said that, the Nvidia Linux drivers still have the edge when compared to the AMD ones, so quite a few computer scientists I know still use Nvidia cards.

Anyway, AMD investing in OpenCL is only a good thing TBH. Competition is always healthy! :D
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
And I keep saying that even if there had been competition, it would not have stopped people from adopting CUDA. The fact that OpenCL is open would not have been a big factor in selection for research environments, where people don't care much about that sort of thing. This isn't that hard to understand. You keep trying to suggest that CUDA is only popular because it existed in a competitionless vacuum for a while, and I'm telling you that's not necessarily true.



When you just say computer animation it can be a number of things - you should explain yourself better. And if it's graphics, then something like HLSL is a good way of specifying the problem (regardless of whether or not they've been doing it on the CPU all this time). You could do the same with an engineering or physics problem and use a shader language to solve it, but it's clunky, because you have to cleverly disguise your engineering problem as a graphics problem. That's why using CUDA for this sort of thing is a huge step up. Now, I never said there are no desktop apps for it. Even if this computer animation problem is better represented with linear algebra rather than graphical operations (i.e. making it more suited to CUDA than HLSL), that still doesn't change my point: I said there is no killer app that would justify its explosion onto the desktop. You're welcome to try to find one.

The reason for this isn't as simple as requiring "power", as you put it in the last paragraph. It runs deeper than that. Some algorithms are just parallelisable for fundamental reasons; others aren't.
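To give a rough idea of what I mean by expressing the problem directly (this is just a made-up, untested sketch, not code from my own work): in CUDA the maths stays as an ordinary array operation, one thread per element, instead of being dressed up as textures and pixel shaders.

Code:
// Untested sketch: y = a*x + y over n elements, one CUDA thread per element.
#include <cuda_runtime.h>
#include <vector>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n)
        y[i] = a * x[i] + y[i];                     // the maths, written as maths
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);    // launch roughly a million threads
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}

An inherently serial algorithm, where step i depends on step i-1, gains nothing from this, which is the whole point about parallelisability.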

Oh, and I don't know where you get the notion that NVIDIA said it won't support CUDA/PhysX. Mistrusting their intentions is one thing, and perhaps justified, given that NVIDIA tends to come off as the evil giant in the NVIDIA-ATI face-off. But I have not seen them say it outright, so there is no contradiction; if they've said it, I'd like to see it. In everything I have read, and according to what NVIDIA employees themselves have said on the CUDA forums, the tech is there for AMD to support but they refuse to pick it up.
Lots of articles on that too: http://www.extremetech.com/article2/0,2845,2324555,00.asp
 
Last edited:
Permabanned
Joined
31 May 2007
Posts
10,721
Location
Liverpool
And I keep saying that even if there had been competition, it would not have stopped people from adopting CUDA. The fact that OpenCL is open would not have been a big factor in selection for research environments, where people don't care much about that sort of thing. This isn't that hard to understand. You keep trying to suggest that CUDA is only popular because it existed in a competitionless vacuum for a while, and I'm telling you that's not necessarily true.
At some point, CUDA was new, immature and flaky. That's my point, because at the moment that's the truth of OpenCL. The continued support for CUDA comes down to exactly that. "If there had been competition" is irrelevant, because there wasn't any competition; it was the only solution for people who wanted to run applications on the GPU. Once OpenCL is more mature and stable, there's less reason for CUDA's continued usage, simply because OpenCL caters to a much wider user base. It's simple.



When you just say computer animation it can be a number of things - you should explain yourself better. And if it's graphics, then something like HLSL is a good way of specifying the problem. You could do the same with an engineering or physics problem and use a shader language to solve it, but it's clunky, because you have to cleverly represent your engineering problem as a graphics problem. That's why using CUDA for this sort of thing is a huge step up. Now, I never said there are no desktop apps for it. Even if this computer animation problem is better represented with linear algebra rather than graphical operations (i.e. making it more suited to CUDA than HLSL), that still doesn't change my point: I said there is no killer app that would justify its explosion onto the desktop. You're welcome to try to find one.
When I say computer animation, it's pretty clear what I mean, especially since I've already been talking about rendering and Vray as well. On top of that, "computer animation" is fairly specific: you wouldn't call games "computer animation", whereas you would call movies made by Pixar "computer animation".

The reason for this isn't as simple as requiring "power", as you put it in the last paragraph. It runs deeper than that. Some algorithms are just parallelisable for fundamental reasons; others aren't.
I'm well aware of that, but my point still stands: there are a lot of applications that simply don't need the extra power of parallelisation.

Oh, and I don't know where you get the notion that NVIDIA said it won't support CUDA/PhysX. Mistrusting their intentions is one thing, and perhaps justified, given that NVIDIA tends to come off as the evil giant in the NVIDIA-ATI face-off. But I have not seen them say it outright, so there is no contradiction; if they've said it, I'd like to see it. In everything I have read, and according to what NVIDIA employees themselves have said on the CUDA forums, the tech is there for AMD to support but they refuse to pick it up.
Lots of articles on that too: http://www.extremetech.com/article2/0,2845,2324555,00.asp
Where did I say nVidia won't support CUDA or PhysX? Do you mean AMD? If you do, then as I've said a few times now (but you've ignored), the fact that nVidia went to the effort of putting something into their drivers that disables CUDA and PhysX when an AMD GPU is detected speaks volumes about their intentions, doesn't it? Why you place so much importance on nVidia's word when they're known to be liars is beyond me, really.
 
Last edited:
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
At some point, CUDA was new, immature and flaky. That's my point, because at the moment that's the truth of OpenCL. The continued support for CUDA comes down to exactly that. "If there had been competition" is irrelevant, because there wasn't any competition; it was the only solution for people who wanted to run applications on the GPU. Once OpenCL is more mature and stable, there's less reason for CUDA's continued usage, simply because OpenCL caters to a much wider user base. It's simple.

Perhaps. Perhaps CUDA was adopted because it was the only game in town. I just don't think that's why it was adopted though. My experience in academia/research, I feel, supports that.

Perhaps if OpenCL becomes awesome and widely accepted it may become more popular than CUDA. Perhaps. I wouldn't be too quick to jump on that bandwagon in agreement.

These things are usually far more complicated and the results can be surprising, a la Betamax vs VHS. (That one was intensely studied at the Santa Fe Institute as an example of deterministic chaos in a nonlinear system; you can read a highly accessible account of it in Waldrop's book "Complexity".)

The superior product doesn't necessarily always win, and there are complex reasons for that. Because nonlinear dynamics happens to be my main area, I tend to be conservative about making hand-wavy predictions of how a nonlinear system (like this whole CUDA/OpenCL business) will evolve, because it is easy to be terribly, hopelessly, incredibly wrong: such systems are inherently unpredictable. But that's me. I have my reservations.

If you're convinced CUDA would not have succeeded had it competed against an equally fleshed-out OpenCL, or that it will eventually lose out to OpenCL, then go right ahead. I'm going to hold on to my reservations about that, though. I'll believe it when I see it.
When I say computer animation, it's pretty clear what I mean, especially since I've already been talking about rendering and Vray as well. On top of that, "computer animation" is fairly specific: you wouldn't call games "computer animation", whereas you would call movies made by Pixar "computer animation".

Well, this is fundamentally dealing with graphics - 3D graphics, real-time or not - so I'd expect it to be less clunky to solve with something like a shader language than, say, trying to disguise the three-body problem from physics in such a way as to let the GPU solve it. I could be wrong; I don't know much about animation. Perhaps these guys at Pixar and the like use, say, ray tracing, and that isn't easy to implement in HLSL as it's not a native feature of rasterised graphics.
 
Last edited:
Permabanned
Joined
31 May 2007
Posts
10,721
Location
Liverpool
Perhaps. Perhaps CUDA was adopted because it was the only game in town. I just don't think that's why it was adopted though. My experience in academia/research, I feel, supports that.
Come on, it's common sense. CUDA offered the power they needed and there was nothing else to use, therefore CUDA got adopted. There was nothing else to choose from, so they either had to use CUDA or go without the performance. It's like saying wheeled motor vehicles are only popular because of how good they are, when in reality they're the only viable means of personal transport at high speed: you either use them or go without, since there's no alternative.

Perhaps if OpenCL becomes awesome and widely accepted it may become more popular than CUDA. Perhaps. I wouldn't be too quick to jump on that bandwagon in agreement.
Really, come on, I'm not even talking about which one is "best", I'm talking about simple user base. Far more people have OpenCL-capable hardware than CUDA-only hardware. On top of that, it's in everyone's best interests for it to be an open, non-proprietary standard.

These things are usually far more complicated and the results can be surprising, a la Betamax vs VHS. (That one was intensely studied at the Santa Fe Institute as an example of deterministic chaos in a nonlinear system; you can read a highly accessible account of it in Waldrop's book "Complexity".)
That is what I'd call a completely different and unrelated situation. It's not two different technologies fighting it out over which is best, as such; nVidia hardware can run CUDA as well as OpenCL applications.

The superior product doesn't necessarily always win, and there are complex reasons for that. Because nonlinear dynamics happens to be my main area, I tend to be conservative about making hand-wavy predictions of how a nonlinear system (like this whole CUDA/OpenCL business) will evolve, because it is easy to be terribly, hopelessly, incredibly wrong: such systems are inherently unpredictable. But that's me. I have my reservations.
I don't think it's unpredictable when you consider that, for the most part, they're largely the same thing, except that OpenCL can run on all hardware brands and CUDA can't. The end result will be no different; it's just a question of which one is currently being used, which is CUDA, and therefore there's currently less you can do with OpenCL.

If you're convinced CUDA would not have succeeded had it competed against an equally fleshed-out OpenCL, or that it will eventually lose out to OpenCL, then go right ahead. I'm going to hold on to my reservations about that, though. I'll believe it when I see it.
Tell me *why* exactly CUDA would have become popular if OpenCL had been in the same position? Same results, but one works on all hardware and the other is restricted to nVidia. No one would choose the closed standard if the open one were just as good.


Well, this is fundamentally dealing with graphics - 3D graphics, real-time or not - so I'd expect it to be less clunky to solve with something like a shader language than, say, trying to disguise the three-body problem from physics in such a way as to let the GPU solve it. I could be wrong; I don't know much about animation. Perhaps these guys at Pixar and the like use, say, ray tracing -- which isn't easy to implement in HLSL as it's not a native feature of rasterised graphics.
When you talk about CUDA-based applications, you're talking about GPGPU computing; for the most part it has nothing to do with running games in the traditional sense. Raytracing itself is a very parallel calculation, which is why it's perfectly suited to running on a GPU. There are already examples, as I've said, of raytracing running significantly faster on GPUs than CPUs - Vray, for example, which is a raytracing application.
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
"Tell my *why* exactly CUDA would have became popular if OpenCL was in the same position as it? Same results, but one can work on all hardware, one's restricted to nVidia. No one would choose the closed standard if the open one is just as good. "

Because it's not quite as simple as that. I already explained why a few posts ago. Get into it: get the CUDA compiler and use it, work through the programming manual, then give OpenCL a shot. At the end of the day:
1) CUDA and OpenCL are fairly similar. On NVIDIA hardware, OpenCL acts as a middleman to the CUDA architecture, and many of us would rather not use a middleman. Porting CUDA to OpenCL is also a fairly trivial task (see the rough sketch below).
2) For specific problems you'll use a lot of hardware-specific optimisations, so you'd be doing some porting anyway if you ran an advanced OpenCL algorithm on NVIDIA hardware and then moved it to ATI-OpenCL.
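To illustrate point 1 - and this is just a made-up, untested sketch, not something from a real project - the kernel body barely changes between the two; it's mostly the indexing built-ins and the host-side boilerplate that get renamed.

Code:
// Untested sketch: the same trivial kernel in CUDA, with its rough OpenCL
// equivalent shown in comments underneath.
__global__ void scale(int n, float a, float *x)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // CUDA's built-in indexing
    if (i < n)
        x[i] = a * x[i];
}

// Rough OpenCL equivalent (same body, different spellings):
// __kernel void scale(int n, float a, __global float *x)
// {
//     int i = get_global_id(0);                    // OpenCL's built-in indexing
//     if (i < n)
//         x[i] = a * x[i];
// }
//
// The real porting cost sits in the host code (contexts, command queues,
// building the program at runtime), not in the kernel maths itself.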

In academic/industrial research, as I've said, openness and running on more users' systems are just not the reasons people pick tools. It's not a factor. It is a non-factor.

However, perhaps you do have a point about what happens when GPU computing enters the mainstream. Perhaps then CUDA will be confined to academia.

But if it's all the same to you, I'm going to remain unconvinced until more evidence surfaces.

Edit: Oh, and I see my post count is nearing 100 now. Free shipping, here I come!

And of course there is no question that things like raytracing run better on the GPU. I never made claims to the contrary, nor did I disagree with that.

Now, if it's all the same to you, I'm going to stick to my belief that CUDA will continue to have a place, especially in academia.

It might just be that I'm a paid NVIDIA shill sent in to stir things up here. I was actually accused of that by someone a week or two ago. :)
 
Last edited:
Associate
Joined
24 Feb 2010
Posts
213
[Image: 6990vs5906b2q.jpg]

[Image: T1G3l4Xo0XXXcs5J75_055407.jpg_310x310.jpg]


http://item.taobao.com/item.htm?id=9585904293
 
Last edited:
Associate
Joined
24 Feb 2010
Posts
213
Nice, hope they get launched today!! Want to see how much they're going to cost, as I may use the EVGA step-up scheme to get one, since it's my birthday tomorrow and you have to treat yourself, don't you?

Haha, it's my birthday today - just got myself a pair of Sennheiser HD650s :cool: Make sure you post back with impressions if you get a 590 ;)
 
Permabanned
Joined
17 Nov 2010
Posts
34
xsistor, you've been around long enough to know that, had CUDA been an AMD baby, kylew would be all over it like Charlie Demerjian waiting for his next AMD pay cheque.
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
xsistor, you've been around long enough to know that, had CUDA been an AMD baby, kylew would be all over it like Charlie Demerjian waiting for his next AMD pay cheque.

Lol, Demerjian. On Google his stuff is classed as satire; the dude does talk out of his ass a lot.

@Smak.. What the....!!!
Is that real?

At 607 MHz. Seems unbelievable. Then the MSI one will just mop the floor with everything.
 
Last edited:
Caporegime
Joined
18 Oct 2002
Posts
32,618
Our lab does a lot of stuff with CUDA: reverse-engineering genetic regulatory networks, genetic programming, evolving circuit boards, FFTs of large data sets. And this is true for the university as a whole; we recently removed a large computer cluster (200 cores) and replaced it with a Tesla-based cluster.
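For the FFT work in particular, the library support is a big part of the draw. A minimal cuFFT call is only a handful of lines - rough, untested sketch below, error checking omitted:

Code:
// Rough sketch: forward 1D complex-to-complex FFT of N points on the GPU via cuFFT.
// Build with something like: nvcc fft_sketch.cu -lcufft
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int N = 1 << 20;
    cufftComplex *d_data;
    cudaMalloc(&d_data, N * sizeof(cufftComplex));
    // ... copy the input signal into d_data with cudaMemcpy ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);                // plan one 1D C2C transform
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);  // in-place forward FFT
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}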

CUDA is now the ubiquitous GPGPU interface. There are many reasons for this, and being first to the game was only a minor one: the stable platform, the support, the widespread uptake, the scientific publications, the dedicated hardware, the stable drivers. Every year representatives from Nvidia have come on campus and given a one- or two-day seminar on CUDA and Tesla. Where is AMD? Nowhere.
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
Yeah, totally agree, D.P. In research, whether industry or academia, those things matter far more than some "open tech" ideal.

If anything, it would be very helpful in moving the industry as a whole forward if AMD just supported CUDA as well, given the enormous amount of existing research built on it.
 