
AMD Announces Open Physics Initiative

Soldato
Joined
16 Jun 2009
Posts
7,664
Location
Cambridge
Valve is the only developer I have seen use never-before-tried physics to affect the gameplay.

Bricks falling down in Batman did nothing to Batman himself; what is the point in using physics to accurately portray an object that does not affect how you actually play the game?

The current problem is that the game needs to play the same with or without accelerated physics. This inherently limits the amount of physics the devs will implement that actually does affect gameplay, as they don't want to target a small percentage of consumers.
I would expect that over the next few years, with more multicore CPUs deployed and the adoption of physics libraries that are not tied to a specific vendor, developers will have more confidence in actually using it.
I'm still waiting for walls in FPS games which can be shot down brick by brick, followed by a building collapse that accurately represents how the walls were demolished. I was not personally impressed with the physics in Crysis, for instance, where things just didn't appear to have any weight (you brushed past a piece of corrugated iron on the floor and it just went into spasms - just not realistic at all).
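The "no weight" complaint above comes down to how an engine responds to impulses. A minimal sketch (illustrative only, not any engine's actual code - the function names and numbers are mine): the same impulse applied to a light and a heavy body changes velocity by J/m, so a heavy sheet of metal should barely move rather than "go into spasms".

```python
def apply_impulse(mass, velocity, impulse):
    """Return the new velocity after an instantaneous impulse J (kg*m/s):
    delta-v = J / m, so heavier objects respond less to the same push."""
    return velocity + impulse / mass

def step(position, velocity, dt, gravity=-9.81):
    """One semi-implicit Euler step: integrate velocity first, then position."""
    velocity = velocity + gravity * dt
    position = position + velocity * dt
    return position, velocity

# The same 10 kg*m/s impulse (e.g. brushing past an object):
light = apply_impulse(mass=2.0, velocity=0.0, impulse=10.0)   # flies off at 5 m/s
heavy = apply_impulse(mass=50.0, velocity=0.0, impulse=10.0)  # barely moves, 0.2 m/s
```

When a game tunes masses for visual spectacle rather than realism, every object reacts like the light case - which is exactly the weightlessness described in the post.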
 
Last edited:

AMG

Soldato
Joined
18 Aug 2008
Posts
4,700
Location
lincs, spalding
I do too

it shouldn't be vendor specific - I think that an ATI gamer should get the same experience as one with nVIDIA would, and the other way around.

there is one game I can think of where the weight is shown off nicely and that is Red Faction 3; okay, still a little flawed, but by god it's a lot better than Crysis for that :eek:
 
Soldato
Joined
22 Aug 2008
Posts
8,338
If system-sellers were forced to ensure all PCs labelled as "gaming" machines had a GPU-physics-capable chip in them, it could free up devs to explore their imagination to a fuller extent.
 
Soldato
Joined
26 Apr 2004
Posts
9,355
Location
Milton Keynes
Let's see if this takes off then; I've been supportive of a non-proprietary physics solution since PhysX was bought out by Nvidia. Anything which aims to open that up to a wider market is a good step in my mind, although of course we will have to see if Bullet takes off.
 
Soldato
OP
Joined
7 May 2006
Posts
12,192
Location
London, Ealing
In the wake of several controversies regarding Nvidia's closed proprietary physics engine, PhysX, AMD have officially announced an initiative to expand the use of game physics using the open source Bullet Physics engine.

Nvidia have used PhysX as a marketing tool since 2008 by making it proprietary: basically, if you want PhysX, buy an Nvidia GPU. As a result, PhysX never gained widespread acceptance among game developers, appearing in only a handful of games. Nvidia's recent decision to disable PhysX when an Nvidia GPU is used alongside an ATI Radeon GPU has further angered developers, and gamers who went out and bought a mainstream Nvidia GPU just for PhysX. Developers have talked about the need for an open source physics engine that can be used on any GPU, which would lead to widespread adoption of game physics.

This is what AMD seeks to do by leveraging industry standards such as OpenCL, DirectX and Bullet Physics - bringing real-time physics not just to AMD and Nvidia hardware, but to other platforms such as game consoles.

Bullet Physics is currently the third most popular physics library after PhysX and Havok. It remains to be seen how Bullet compares to PhysX and Havok in terms of features, though Bullet is the preferred library in the CGI industry for films. However, such a broad open source initiative might just finally bring high-level physics and simulation to the mainstream regardless of what hardware you own, as it would encourage developers: they could be certain that every gamer gets to experience the physics effects.
 
Permabanned
Joined
31 May 2007
Posts
10,721
Location
Liverpool
This is looking interesting.

Who makes it doesn't bother me, what it works on does and who has control over it.

At least this is looking to be an opensource thing.
 
Soldato
OP
Joined
7 May 2006
Posts
12,192
Location
London, Ealing
Product: AMD ATI Radeon HD 5870
Company: AMD
Author: James 'caveman-jim' Prior
Editor: Charles 'Lupine' Oliver
Date: September 23rd, 2009
DirectCompute - DaHoff on the Job

DirectCompute

DirectCompute is the new buzzword. This part of DirectX 11 allows developers to create whatever additional hardware-accelerated resources they need - AI, physics, whatever. Codemasters' D.I.R.T. 2 preview at the AMD VISION launch showed to great effect how DirectCompute physics can improve game realism and immersion. With realistic changes in surface handling, driving the Colin McRae rally car through water generates bow waves and leaves wakes, which affect the handling of the following cars differently - less water means less drag. Simple and lifelike, it increases the playability of the game on DirectX 11 hardware.
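The "less water means less drag" effect described above can be sketched in a few lines. This is a rough model of the idea, not Codemasters' code - the function name and constants are mine, and a real implementation would run per wheel on the GPU:

```python
def water_drag(depth_m, speed_ms, k=120.0):
    """Illustrative drag force: proportional to the remaining water depth
    in the rut and to speed squared (standard quadratic drag form)."""
    return k * depth_m * speed_ms ** 2

# Lead car hits a full rut; the following car drives through the
# partially displaced puddle and sees less resistance.
full_rut = water_drag(depth_m=0.10, speed_ms=20.0)
displaced = water_drag(depth_m=0.04, speed_ms=20.0)
assert displaced < full_rut   # less water means less drag
```

The interesting part is not the formula but the persistence: the lead car's pass reduces `depth_m` for everyone behind it, which is what turns an eye-candy splash into something that affects handling.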

AMD has a secret weapon in the DirectX fight - DaHoff! David Hoff is Director of AMD's Advanced Technology Initiatives team, inside the office of the Chief Technology Officer - one Eric Demers. Dave also used to work somewhere else ... I wonder if you can guess where?

Let's hear a little from the man himself:

Dave Hoff:

"When I saw AMD's public commitment at about this time last year to OpenCL and DX11 compute shader (now called Direct Compute), and having seen AMD's success with the Radeon HD 4800 series, I really wanted to join [AMD].

Things are a bit different for me here at AMD, working for SirEric in the office of CTO. I'm actually encouraged to charge after these kinds of new initiatives (my role is director of advanced technology initiatives).

One of the first things I did was meet with Havok, introduce them to the amazing engineering team I have here and explain that we could implement some of their code in OpenCL thereby enabling them to achieve acceleration on not just ours, but also Nvidia's GPUs. So we ventured into a quick little project to gauge the technical feasibility as well as if it was a good climate and team dynamics for our organizations to collaborate.

While we learned the answer to both, I can only report on the technical feasibility since we demonstrated Havok Cloth at GDC in March running in OpenCL on our Radeon HD 4890. In terms of productization, we're waiting for our OpenCL tools to complete conformance acceptance (they've been submitted to Khronos) and will likely need to get through some solid beta usage and up to a production state before an OpenCL-based Havok solution would be ready.

Then it's really up to Havok if they want to bring this to market. I'd like to see them do this particularly with their cloth product since game developers can incorporate cloth late in their development cycle and our OpenCL implementation is generally transparent to the Havok API.

And while there were some amazing software developers who jumped in early and used the initial proprietary GPGPU programming models provided by both graphics companies, the adoption rate is really going to take off now that there are these new standards. As you heard last week at our launch event from Cyberlink, for example, they will obviously now consolidate and only go forward with programming in one API (in their case it seems to make sense to use Direct Compute).

I can't imagine any commercial software company who has tried a GPGPU programming model previously from either graphics company to not switch to OpenCL or Direct Compute. It's very easy to move from CUDA to either of these.

As you heard me describe [at the AMD VISION event], in the meantime, we've been particularly excited about what Pixelux can do. Their physics effects are amazingly realistic compared to anyone else. And their tools are great.

Their commitment to integrating with the free, open source Bullet Physics engine and doing OpenCL acceleration fits great with our commitment to OpenCL work on Bullet. Both Bullet Physics and Pixelux's DMM engine are already available and used in games and films, so developers can start right now and pick up the GPU acceleration as we roll that out.

On the other hand, as I think you've seen from the PhysX side of things, while they seem to talk about support for openness when they're backed into a corner, apparently in a recent driver update they've actually disabled PhysX running on their GPU if an ATI card is used for rendering in order to pressure users to use an all Nvidia configuration.

The contrast should be fairly stark here: we're intentionally enabling physics to run on all platforms - this is all about developer adoption. Of course we're confident enough in our ability to bring compelling new GPUs to market that we don't need to try to lock anyone in. As I mentioned last week, if the competition altered their drivers to not work with our Radeon HD 4800 series cards, I can't imagine them embracing our huge new leap with the HD 5800 series.

While it would be easy to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open.

As you may figure from my CUDA role, I was the guy responsible to get developer adoption. In addition to being a nut about SDK quality and following developers closely on the forums to initiate feature requests or critical fixes, I initiated the first ever consumer video transcode app with a partner using CUDA and delivered this to reviewers as part of the GTX280 launch, and I enabled developers to easily use notebook computers with CUDA-capable GPUs.

Of course, (in their brilliance - and why I left) folks over there abhorred this work I was doing to generate adoption since it didn't appear obvious enough that it would directly lead to Tesla sales ... (not only non-open, but even practically proprietary among their brands). At least they've eventually seen the light and seem to mention video transcoding now in about every breath ...

GPU F@H was also a project I initiated and ran at Nvidia. I also started Nv's Folding@home team (team W.A....), initially for my test suite of machines. I'm still sometimes surprised at the enthusiasm around this.

The engineer I was able to borrow to do the CUDA implementation at Nv is amazing. He did an entirely different implementation from the previous one. It had some good new algorithmic tricks and was one of the best utilizations of the G80 architecture's shared memory. If anything, it would likely do even better if they had more than the 16KB shared memory size on Nv GPUs.

That's where it will get fun going forward. For DX11 DirectCompute support (specifically CS_5), all devices going forward will double the G80 shared memory to 32KB. Also, Stanford finally has the new algorithm publicly available in a new molecular simulation package. So all of ATI's new devices will basically be better at this, since we added that shared memory for DX11. A good reason for anyone buying a new card to get a DX11 card.

Going forward, I'd expect the new algorithm to get ported over to OpenCL (which can take advantage of the 32KB local memories). I'd guess the porting will wait a little while longer until the OpenCL SDKs get a little more mature and optimized. We've just gotten our OpenCL implementation through official conformance verification.

So with the new HD5800 series and a decent optimizing OpenCL implementation, I expect some amazing PPD - new performance champs that will span our price line of GPUs.

I'm also excited to see how ultimately the OpenCL Folding implementation runs on CPUs. We've put a lot of work into our multi-core CPU implementation of the OpenCL compiler and run-time. As we get the OpenCL port of Folding done, as you mention, it will get Linux, Mac OS X and other OSes, but also other platforms that support OpenCL. Perhaps our CPU implementation will be an improvement as well."

Wow. Now there's a fellah who likes his job. At this point I'd like to extend my sincere thanks to Dave Hoff, Eric Demers and Dave Baumann, plus the many others I corresponded with, for their help. You guys rock!

Overall, I liked the comment from Mike Gamble, Crytek Licensing Manager regarding DirectX 11:

Mike Gamble:

"Free for use; no resources needed to make it work; increased fidelity is a win/win"

With CryEngine 3 capable of running in DirectX 9, 10 or 11 modes, Codemasters' EGO engine fully overhauled to use DirectX 11, and Unreal Engine 4 designed for DirectX 11, there are going to be a lot of titles supporting it.
http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
 
Associate
Joined
16 Nov 2006
Posts
753
Direct compute test created by Nvidia.


When things become an open standard, ATI generally wins ;)



Nvidia's cards weren't optimised for, but then neither were ATI's, since it's Nvidia's test.
 
Permabanned
Joined
31 May 2007
Posts
10,721
Location
Liverpool
It looks like the DX11 compute doesn't work with more than one nVidia GPU at a time as the GTX285 gets more than the 295.

The performance difference between the 285 and 5870 looks about right though.

I'd make a guess that when optimised the 295 would get around 60-65 FPS on that.
 
Soldato
OP
Joined
7 May 2006
Posts
12,192
Location
London, Ealing
OpenCL is in Khronos, and is an open standard available on multiple platforms. It was developed by Apple, AMD, Intel and NV, as well as a multitude of other hardware vendors, such as ARM, Broadcom, IBM and others. As well, SW ISVs such as Electronic Arts, Blizzard and others have all contributed to making it a viable, open standard that can run on CPUs and GPUs optimally (see http://www.khronos.org/opencl/).

And running optimally on CPUs is important -- there are many cases where running on the CPU is faster than on the GPU, since transferring data to/from the GPU actually has a cost; a light job is generally best served on the CPU, especially with modern multicore CPUs. It's important to offer good performance on all platform devices.

AMD will also support Microsoft's DirectCompute as an industry standard (as opposed to an open standard). Again, DirectCompute is intended to run on all hardware platforms.

Talking to many ISVs, they have used proprietary standards such as AMD's Brook+ or NV's CUDA. But most of them see the need to switch to IHV-agnostic code bases that have guaranteed forward compatibility and support. It's really a business decision, at heart -- do you want portable code supported by many, or just something that runs on one vendor's hardware? One can buy "brand" loyalty for some time, but not forever.

The above AMD FAQ is dated and refers back to our Brook+ code base, which will continue to be supported for a year, but has been open sourced for any developer to use, in particular for some that continue to have code written in it.

Building on top of open standards makes sense as well -- so we announced that Sony's Bullet, an open source physics API, will be supported by AMD, and an OpenCL backend that runs on all HW platforms will be developed. There's actually a CUDA backend already. Pixelux will also be available on OpenCL, to extend Bullet, and will run on everybody's HW.

These are all good developments, that allow companies to develop on all hardware, and get the best performance out of the available platforms out there. It's most helpful to PC game developers, that have quite a job to develop for a segmented PC market.
http://www.rage3d.com/board/showpost.php?p=1336033477&postcount=21
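The "light job is best served on the CPU" point in the quote above has a simple break-even structure, sketched here with entirely hypothetical numbers (the 2 ms transfer cost and 10x speedup are assumptions for illustration, not measured figures):

```python
def cpu_time(work_ms):
    """Time to run the job on the CPU: no transfer overhead."""
    return work_ms

def gpu_time(work_ms, speedup=10.0, transfer_ms=2.0):
    """Time to offload: a fixed PCIe transfer cost, then faster compute."""
    return transfer_ms + work_ms / speedup

light_job = 1.0     # ms of CPU work
heavy_job = 100.0

assert cpu_time(light_job) < gpu_time(light_job)   # CPU wins: transfer dominates
assert gpu_time(heavy_job) < cpu_time(heavy_job)   # GPU wins: speedup dominates
```

This is why an OpenCL runtime with a good CPU backend matters: the same code can land on whichever device actually wins for that workload.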
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
It looks like the DX11 compute doesn't work with more than one nVidia GPU at a time as the GTX285 gets more than the 295.

The performance difference between the 285 and 5870 looks about right though.

I'd make a guess that when optimised the 295 would get around 60-65 FPS on that.

For DirectCompute/GPGPU-type processing you can scale across multiple cores... _if_ the support is there... with 99+% efficiency, so it should weigh in around 75-78fps - unlike SLI in general, where you get ~70% gains on average. If we use the Crossfire scaling as an example (and I suspect it's CPU-limited from hitting the max) you'd still manage 71-72fps with the 295.

The joke is it's nVidia's test and they don't have multi-GPU support up and running :S
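The scaling arithmetic above, worked through (the single-GPU baseline is an assumption read off the chart, not an official figure):

```python
def dual_gpu_fps(single_gpu_fps, scaling_efficiency):
    """FPS with a second GPU adding efficiency * baseline on top."""
    return single_gpu_fps * (1 + scaling_efficiency)

gtx285 = 38.0                              # assumed single-GPU result
compute_case = dual_gpu_fps(gtx285, 0.99)  # ~76 fps: near-ideal GPGPU scaling
typical_sli = dual_gpu_fps(gtx285, 0.70)   # ~65 fps: average SLI-style gain
```

The gap between the two results is the whole argument: compute workloads partition cleanly across GPUs, so a working multi-GPU path should scale far better than alternate-frame rendering does.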
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
91,053
And despite what drunkenmaster says... physics can and will drive innovation in games when a mature open hardware platform is available... though I agree innovation in games isn't intrinsic to physics... and while there is much more you can do on the CPU than is traditionally used in games, once you've been immersed in an environment that extensively and intrinsically (to reuse that word) implements physics in every aspect (which would require hardware acceleration), you would never want to go back.

I was a bit disappointed in the Batman implementation in the end though... the smoke really added to the immersion for me, and I loved the tearable cloth spider webs, etc. that added to the atmosphere in a way I've not seen in games in a long time... but...

After the first room there was very little that used proper physics in the environment. Most walls were unaffected by damage, compared to the way they could be torn up early on, and a lot of effects after the initial couple of rooms were nerfed as well. I went up to some sparking wires and watched the sparks bouncing off the floor, which was cool, but then when I walked in among the sparks they just clipped through me :( Then I walked under some dripping water and it went through me too :( And then I noticed most of the water effects were hard-implemented in the map editor and weren't using any kind of physics... disappointing... later on I found some shatterable glass, but despite the glass pane breaking up there were no shard effects, etc. Again disappointing...
 
Last edited:
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
^^^ The point you make about Batman ties in nicely with the whole issue of proprietary formats. Why would a developer spend masses of time integrating complex features which can only be accessed by a small proportion of the end users? This is probably why the physics effects are limited to just a few rooms, or just a few objects.

A universal physics format (like the one suggested in the OP) would unlock the door to innovation, so to speak. It would allow developers to add really cool effects, safe in the knowledge that everyone will be able to appreciate them.
 
Soldato
Joined
17 Nov 2005
Posts
3,583
^^^ The point you make about Batman ties in nicely with the whole issue of proprietary formats. Why would a developer spend masses of time integrating complex features which can only be accessed by a small proportion of the end users? This is probably why the physics effects are limited to just a few rooms, or just a few objects.

A universal physics format (like the one suggested in the OP) would unlock the door to innovation, so to speak. It would allow developers to add really cool effects, safe in the knowledge that everyone will be able to appreciate them.


Agreed - if everyone can use the effects, even if they scale with CPU and GPU, then it is worth them doing it, and it would push up sales of hardware if the physics effects were worth it.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
Even with a well-optimised CPU physics engine you can't really plan to throw around more than 200 primitive physics objects (RBs) on the average gamer's hardware, and that's not even taking into account soft body/cloth effects, fluids, etc... if you implemented ejecting brass (a fairly simple effect that can actually increase the immersion in a firefight quite a bit) you'd only need 3-4 enemies firing at ~15 rps, and even with a time to live of only a couple of seconds you've already hit the limit of your physics payload... move that over to the GPU or other hardware acceleration and you can pretty much up that at least 4-5x minimum - probably 10x+.

Though City of Heroes handled that one quite nicely - partly by recycling the ejected brass pool - but it didn't leave a huge amount of space for other physics effects.
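The ejecting-brass arithmetic above, worked through (the ~200-object budget and firing rates are the post's own figures; the function and variable names are mine):

```python
def live_casings(shooters, rounds_per_second, time_to_live_s):
    """Steady-state number of brass casings alive at once:
    spawn rate (shooters * rps) times how long each one persists."""
    return shooters * rounds_per_second * time_to_live_s

cpu_budget = 200                            # rough CPU rigid-body budget
brass = live_casings(shooters=4, rounds_per_second=15, time_to_live_s=2)
# 120 casings from brass alone -- well over half the budget consumed
# before any debris, cloth or fluid effects are even added.
gpu_budget = cpu_budget * 5                 # the post's "4-5x minimum" case
```

This also shows why the City of Heroes pooling trick works: capping and recycling the casing pool bounds `brass` at a fixed size, at the cost of the headroom left for everything else.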
 
Last edited: