
AMD VEGA confirmed for 2017 H1

Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
Nasha, Broadwell-E was not bug-free for a good while after release either; why should Intel be given a pass for this but AMD not be allowed the same grace?

Too many people here with short memories and selective hearing

Exactly.


EDIT: I don't know the details, but I read that Hyper-Threading was problematic on release, as I think was DDR4 with Haswell-E. Regardless, the point still stands.
 
Soldato
Joined
13 Jul 2004
Posts
20,079
Location
Stanley Hotel, Colorado
I don't think AMD's aim is to drive technology at the ultra-expensive high end. Their focus is bringing the existing ultra high end down to consumer prices, which raises the bar significantly because it becomes available to the average Joe. If only a small subset of people have access to the top technology, devs won't bother to improve their games as much as they should, as most people won't be able to use it.

I agree. With games they want the most customers possible, and not just in the West now but China etc., especially in the DLC era. Not many people buy a graphics card over £1,000 when they could pay five times less and be mostly fine at a reasonable resolution. Obviously there's the elite high end of the market demanding more, but that's not what Vega appears to be to me; as everyone has said, AMD have a big gap in their range vs Nvidia, and that's where it's going to be placed to compete.
Zen put AMD back in a picture they'd been absent from for a while, and I imagine their first move on the graphics side will land similarly.

Intel does 22-core CPUs and all sorts. I looked at a lot of the range a while back: server chips, really impressive, usually built for very specific design needs such as limited space.
I expect Zen to develop and address that part of the market too, but it wasn't the first release of that technology. To the mainstream public, eight-core chips are very expensive components but also justifiable, so I'd guess the same happens with graphics.
 
Soldato
Joined
7 May 2006
Posts
12,192
Location
London, Ealing
LOL
 
Soldato
Joined
28 Oct 2011
Posts
8,405
I get what you mean, but it costs resources to do that, and AMD think they can make more money by targeting a different consumer group. Plus, 5% faster than a 1080 for, say, 20-30% less money would be :eek:!!

I don't think AMD's aim is to drive technology at the ultra-expensive high end. Their focus is bringing the existing ultra high end down to consumer prices, which raises the bar significantly because it becomes available to the average Joe. If only a small subset of people have access to the top technology, devs won't bother to improve their games as much as they should, as most people won't be able to use it.


I think this is exactly what AMD have been doing for a while. There are two different markets, as you suggest: the people who always want the 'fastest' or 'best' and will pay the premium but get less bang for buck, and a larger group who'll probably buy more AMD cards as they're generally better value for money. Then within those groups you have G-Syncers and FreeSyncers, who, unless they've lost their marbles, are going to buy GPUs made for their monitors. I don't think it's a bad situation at all, and I think if AMD's strategy was to fight NV in a never-ending war for 'the crown' it wouldn't make any sense.

If I had an uber system I'd buy NV, but like most I don't. If I were to build again from scratch I'd go Ryzen/FreeSync/AMD GPU, because it would be significantly cheaper and in the real world I wouldn't notice the difference even if there was any.
 
Soldato
Joined
18 May 2010
Posts
22,376
Location
London
I think this is exactly what AMD have been doing for a while. There are two different markets, as you suggest: the people who always want the 'fastest' or 'best' and will pay the premium but get less bang for buck, and a larger group who'll probably buy more AMD cards as they're generally better value for money. Then within those groups you have G-Syncers and FreeSyncers, who, unless they've lost their marbles, are going to buy GPUs made for their monitors. I don't think it's a bad situation at all, and I think if AMD's strategy was to fight NV in a never-ending war for 'the crown' it wouldn't make any sense.

If I had an uber system I'd buy NV, but like most I don't. If I were to build again from scratch I'd go Ryzen/FreeSync/AMD GPU, because it would be significantly cheaper and in the real world I wouldn't notice the difference even if there was any.

That's pretty much what I've done. I've upgraded from a 3570k to a Ryzen 1700. Moving to a true eight-core CPU (with support for 16 threads to boot) makes more sense than a 7700k to me. Plus it was cheaper. Now I'm just waiting on Vega and will pick up a FreeSync monitor. My existing monitor is from 2008 and is very bad for lag and tearing.

The Ryzen, even at stock, is pushing my 970 harder than the 3570k did, and as a consequence I've had to limit frames to 60 in BF1 just to counter the increase in monitor tearing and jank. :o

Waiting in earnest for Vega! I'm also not happy about Nvidia's DX12 performance, so I would happily lose GameWorks for better DX12 and Vulkan support.
 

TNA

TNA

Caporegime
Joined
13 Mar 2008
Posts
27,584
Location
Greater London
That's pretty much what I've done. I've upgraded from a 3570k to a Ryzen 1700. Moving to a true eight-core CPU (with support for 16 threads to boot) makes more sense than a 7700k to me. Plus it was cheaper. Now I'm just waiting on Vega and will pick up a FreeSync monitor. My existing monitor is from 2008 and is very bad for lag and tearing.

The Ryzen, even at stock, is pushing my 970 harder than the 3570k did, and as a consequence I've had to limit frames to 60 in BF1 just to counter the increase in monitor tearing and jank. :o

Waiting in earnest for Vega! I'm also not happy about Nvidia's DX12 performance, so I would happily lose GameWorks for better DX12 and Vulkan support.
Yeah. IMO you are making the most sensible decision for your needs. I will eventually be going to a full AMD system also, but it might take a while, as I am waiting on 7nm Ryzen and a FreeSync 2 monitor before going Vega.
 
Soldato
Joined
28 Oct 2011
Posts
8,405
That's pretty much what I've done. I've upgraded from a 3570k to a Ryzen 1700. Moving to a true eight-core CPU (with support for 16 threads to boot) makes more sense than a 7700k to me. Plus it was cheaper. Now I'm just waiting on Vega and will pick up a FreeSync monitor. My existing monitor is from 2008 and is very bad for lag and tearing.

The Ryzen, even at stock, is pushing my 970 harder than the 3570k did, and as a consequence I've had to limit frames to 60 in BF1 just to counter the increase in monitor tearing and jank. :o

Waiting in earnest for Vega! I'm also not happy about Nvidia's DX12 performance, so I would happily lose GameWorks for better DX12 and Vulkan support.


Very sensible upgrade. I'm going to wait for the dust to settle with Vega and Ryzen, and for the dark nights too, as I don't game that much in summer. I'm going to need mobo/CPU/GPU/FreeSync monitor/RAM (this build was 2012 vintage with one GPU upgrade), but at least my case and drives are up to scratch!

:)
 
Associate
Joined
30 May 2016
Posts
620
As much as I don't want to spoil the mood, I will say this again (like I have several times before): Vega is targeting deep learning, not gaming. It will be a considerable improvement over Polaris, but I doubt it will blow away Pascal.

AMD lacked the resources to come up with two lines of GPUs (gaming and professional), and therefore the Vega gaming cards are essentially cut-down versions of the deep-learning MI25, with reduced HBM (due to its cost) and tiled rasterization.

All the pieces are falling into place: they finally have a competitive CPU coming up in Naples, a GPU architecture suited (HBCC, 2x packed math) to computation on massive data, and accompanying software (ROCm) that not only competes favourably against CUDA but also contains an automated migration tool that works well.

AMD gives the example of porting Caffe from CUDA with 99.6% automation - only 200 lines of code out of the 55,000 total had to be modified by hand, and that took just a week.
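To give a flavour of what that migration tool (HIP's hipify) actually does, here's a toy sketch I've put together myself - not anything from the real Caffe port - showing that it's mostly mechanical renames, with the kernel code itself left untouched:

#include <hip/hip_runtime.h>  // was: #include <cuda_runtime.h>
#include <cstdio>

// Kernel code is identical before and after porting.
__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1024;
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float* d;
    hipMalloc(&d, n * sizeof(float));                           // was: cudaMalloc
    hipMemcpy(d, h, n * sizeof(float), hipMemcpyHostToDevice);  // was: cudaMemcpy
    // was: scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0, d, 2.0f, n);
    hipMemcpy(h, d, n * sizeof(float), hipMemcpyDeviceToHost);  // was: cudaMemcpy
    hipFree(d);                                                 // was: cudaFree

    printf("h[0] = %f\n", h[0]);  // prints 2.0 if the kernel ran
    return 0;
}

The same source then builds for both AMD (via HCC) and Nvidia (via nvcc), which is the whole point of HIP.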

AMD is preparing a massive launch of Naples, the MI25 and ROCm. This will coincide with frameworks/libraries (TensorFlow/Caffe/Torch) being released for their platform. The cost and performance will be disruptive. They're launching a major attack against CUDA in an all-out war that hopes to steal the market from NVidia's grasp. This article describes their strategy.

In terms of gaming though, I expect Vega will be competitive, but it won't be blowing away the Titan Xp/1080ti/etc...
 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
As much as I don't want to spoil the mood, I will say this again (like I have several times before): Vega is targeting deep learning, not gaming. It will be a considerable improvement over Polaris, but I doubt it will blow away Pascal.

AMD lacked the resources to come up with two lines of GPUs (gaming and professional), and therefore the Vega gaming cards are essentially cut-down versions of the deep-learning MI25, with reduced HBM (due to its cost) and tiled rasterization.

All the pieces are falling into place: they finally have a competitive CPU coming up in Naples, a GPU architecture suited (HBCC, 2x packed math) to computation on massive data, and accompanying software (ROCm) that not only competes favourably against CUDA but also contains an automated migration tool that works well.

AMD gives the example of porting Caffe from CUDA with 99.6% automation - only 200 lines of code out of the 55,000 total had to be modified by hand, and that took just a week.

AMD is preparing a massive launch of Naples, the MI25 and ROCm. This will coincide with frameworks/libraries (TensorFlow/Caffe/Torch) being released for their platform. The cost and performance will be disruptive. They're launching a major attack against CUDA in an all-out war that hopes to steal the market from NVidia's grasp. This article describes their strategy.

In terms of gaming though, I expect Vega will be competitive, but it won't be blowing away the Titan Xp/1080ti/etc...


Personally, the fact that they made high-end gamers go without during the last generation, and are now prioritizing refreshes of the same cards before getting Vega out, is starting to worry me. As does the lack of any leaked performance data. What we've seen up to now hasn't been impressive; 4K Doom at 60+ isn't anything to get excited about.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,168
They're launching a major attack against CUDA in an all-out war that hopes to steal the market from NVidia's grasp

They are competing against an established standard that, for the most part, has a history of support from nVidia, better documentation than competing standards (which is crucial for software like this), and better documentation than the industry norm - whereas AMD have a name that is known for dropping support, highly varied standards of documentation, and tech in this area that goes into obsolescence and is never heard of again after six months. Best of luck to AMD, but all-out war on that front is just going to ruin them - I can't emphasise that enough. If they bring an all-out attack to nVidia on this front they will lose, and if they bet the boat on it they'll take everything down with them. With a solid hardware/software offering, and by working over time on growing a reputation for support and a cohesive ecosystem, they could make a decent stand in the market.
 
Associate
Joined
8 May 2014
Posts
2,288
Location
france
Nvidia just released the Ti and a new Titan; they cannot release another high-end card for the next six months, or they will seriously **** off customers.
If Vega was better than the Ti, AMD would have shared more info about it, just to stop people from jumping to the competition.
Being faster than the 1080 is a given, because if not, that would be the mistake that kills Vega from the get-go.
Realistic Vega performance is about 15-20% lower than the Ti, which is actually in line with the 4096 SPs announced compared to Polaris, assuming performance scales linearly with the compute units.
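Quick back-of-the-envelope to show where that ~15% comes from. The shader counts are public, but the 2.1x multiplier for a 1080 Ti over an RX 480 is my own rough reading of 4K review averages, so treat this as a sketch rather than a prediction:

#include <cstdio>

int main() {
    // Known figures: RX 480 (Polaris 10) has 2304 SPs; Vega is announced with 4096.
    const double polaris_sps = 2304.0;
    const double vega_sps    = 4096.0;
    // My assumption: a 1080 Ti is very roughly 2.1x an RX 480 at 4K.
    const double ti_vs_480   = 2.1;

    // The post's premise: performance scales linearly with compute units
    // (same clocks and same per-SP efficiency as Polaris).
    const double vega_vs_480 = vega_sps / polaris_sps;   // ~1.78x
    const double vega_vs_ti  = vega_vs_480 / ti_vs_480;  // ~0.85

    printf("Vega vs RX 480:  %.2fx\n", vega_vs_480);
    printf("Vega vs 1080 Ti: %.0f%% (~%.0f%% lower)\n",
           vega_vs_ti * 100.0, (1.0 - vega_vs_ti) * 100.0);
    return 0;
}

Higher clocks than Polaris would obviously close that gap, but nobody outside AMD knows the final clocks yet.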
 
Associate
Joined
30 May 2016
Posts
620
They are competing against an established standard that, for the most part, has a history of support from nVidia, better documentation than competing standards (which is crucial for software like this), and better documentation than the industry norm - whereas AMD have a name that is known for dropping support, highly varied standards of documentation, and tech in this area that goes into obsolescence and is never heard of again after six months. Best of luck to AMD, but all-out war on that front is just going to ruin them - I can't emphasise that enough. If they bring an all-out attack to nVidia on this front they will lose, and if they bet the boat on it they'll take everything down with them. With a solid hardware/software offering, and by working over time on growing a reputation for support and a cohesive ecosystem, they could make a decent stand in the market.

That's what everyone says, but by that logic they might as well give up on DX12/Vulkan due to the established DX11, give up on CPUs as the 7700k is faster in gaming - give up on everything.

Their strategy is just about the best that can be done: open software, a porting tool that works, integration of all popular libraries...

Most importantly, everyone's underestimating them. I don't expect Vega to have twice the FPS of a 1080 Ti, but I will not be surprised if the MI25 is twice as fast as its similarly priced competition.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,168
That's what everyone says, but by that logic they might as well give up on DX12/Vulkan due to the established DX11, give up on CPUs as the 7700k is faster in gaming - give up on everything.

Those aren't really comparable. DX12/Vulkan aren't at war with DX11, and likewise there is a difference between having an offering that is equivalent to the 7700K and building into the market, versus going to all-out war with Intel, all guns blazing, trying to take down their CPUs - which, again, they would lose heavily.
 
Soldato
Joined
13 Jul 2004
Posts
20,079
Location
Stanley Hotel, Colorado
Since its inception, Google has used every type of AI or machine learning technology imaginable. In spite of this, their average gain in improvement per year was only 0.4%. In Google’s first implementation, the improvement due to DL was 7 percentage points better.

Wow, this is a massive market then. I've been seriously wondering when AI was ever going to be a thing; for decades computers have been dull and repetitive.


Being faster than the 1080 is a given, because if not, that would be the mistake that kills Vega from the get-go.
I would guess so, yeah.

Early this year AMD decided to get even “closer to the metal” by announcing the “Lightning Compiler Initiative.” The HCC compiler now supports direct generation of the Radeon GPU instruction set (known as GCN ISA) instead of HSAIL.

As we shall see later, directly targeting native GPU instructions is critical for higher performance. All the libraries under ROCm support the GCN ISA.

The current state of Deep Learning frameworks is similar to the state before the creation of a common code generation backend like the LLVM. In the past, every programming language had its own way of generating machine code. With the development of LLVM, many languages now share the same backend code. The frontend code only needs to translate source code to an intermediate representation (IR). Deep Learning frameworks will eventually need a similar IR for Deep Learning solutions. The IR for Deep Learning is the computational graph.
AMD has developed a runtime framework that takes heterogeneous CPU-GPU systems into account, called the Asynchronous Task and Memory Interface (ATMI). The ATMI runtime is driven by a declarative description of high-level tasks and handles their scheduling and memory placement in an optimal manner.
AMD’s GPU hardware and drivers have also been designed to support GPU virtualization (see: MxGPU). This permits GPU hardware to be shared by multiple users. I will discuss operational aspects of AMD’s offerings in a future article.
The product is called Radeon Instinct and it consists of several GPU cards: the MI6, MI8, and MI25. The number roughly corresponds to the number of operations the card can crank out. An MI6 can perform roughly 6 trillion floating-point operations per second (aka teraflops).

The Radeon Instinct MI6, with a planned 16GB of GDDR5 memory, is a low-cost inference and training solution. The MI8, with 4GB of HBM, is designed primarily for inference-based workloads. The MI25 is designed for large training workloads and will be based on the soon-to-be-released Vega architecture.
Shuttling data back and forth between GPU and CPU is one of the bottlenecks in training deep learning systems. Vega’s unique architecture, capable of addressing 512TB of memory, gives it a distinct advantage.

There’s also a lot more to say about GPU and CPU integration. I’ll briefly mention some points. On the server-side, AMD has partnered with Supermicro and Inventec to come up with some impressive hardware. At the top of the line, the Inventec K888 (dubbed “Falconwitch”) is a 400-teraflop 4U monster. By comparison, the Nvidia flagship DGX-1 3U server can muster a mere 170 teraflops.
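Those model numbers roughly check out against the shader counts too: FP32 teraflops are just shaders x 2 ops (one fused multiply-add counts as two) x clock. Quick sanity check - the clocks here are my own ballpark guesses, not confirmed figures:

#include <cstdio>

// FP32 TFLOPS = shaders * 2 (a fused multiply-add counts as 2 ops) * clock in GHz / 1000.
static double tflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz / 1000.0;
}

int main() {
    // Shader counts are public; the clocks are my guesses.
    printf("MI6  (Polaris, 2304 SP at ~1.24 GHz): ~%.1f TFLOPS FP32\n", tflops(2304, 1.24));
    printf("MI8  (Fiji,    4096 SP at ~1.00 GHz): ~%.1f TFLOPS FP32\n", tflops(4096, 1.00));
    const double mi25 = tflops(4096, 1.50);  // assuming a ~1.5 GHz Vega clock
    printf("MI25 (Vega,    4096 SP at ~1.50 GHz): ~%.1f TFLOPS FP32, ~%.1f FP16 (2x packed math)\n",
           mi25, 2.0 * mi25);
    return 0;
}

So the "25" in MI25 is presumably the half-precision number, which fits the deep learning angle.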
 