Linux driver fuels AMD Navi 12 rumors about Nvidia RTX 2080 Super performance

Linux driver files provide a wealth of information about unreleased graphics cards, if you know where to look. And we… don’t. But thankfully there are people out there who do, and one 3DCenter forum member has uncovered what looks to be the memory interface details behind the upcoming AMD Navi GPUs… the Navi 12 and Navi 14. That could tell us a lot about where the two new graphics chips are going to line up in the Radeon GPU stack.

And this has, in turn, led to a lot of speculation over the weekend about what this means for the graphics cards that will be built around these two slices of 7nm silicon, and some interesting conclusions have been made. Some sensible, and some rather wishful…

The driver-diving has uncovered references showing that the Navi 10 and 12 GPUs have a different number of sdp_interfaces to the Navi 14 GPU. They indicate parity between Navi 10 and 12, with the Navi 14 chip having fewer. Nightmare… those sdp_interfaces, eh?

So, what is an sdp_interface? The 3DCenter forumite at the centre of this story, Berniyh, first posted the discovery from inside the Mesa 3D library and followed up with their take on what the listing could actually mean. From a Hot Chips chat about Raven Ridge last year, the sdp_interface is understood to be a scalable data port, with the number of interfaces seemingly having a direct bearing on the width of the memory bus used on the chip.

With both the Navi 10 and 12 GPUs having 16 sdp_interfaces that would then seem to indicate a 256-bit memory bus, and as the Navi 14 entry lists just 8 sdp_interfaces that would suggest a 128-bit memory bus for the lower spec GPU.
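That mapping works out if each scalable data port corresponds to a 16-bit slice of the memory bus. To be clear, that per-interface width is an assumption inferred from the leaked figures rather than anything AMD has confirmed, but it's the ratio that makes the numbers line up:

```python
# Back-of-the-envelope check on the leaked figures. The 16-bits-per-interface
# ratio is an assumption (256-bit bus / 16 sdp_interfaces), not a confirmed
# AMD spec.
BITS_PER_SDP_INTERFACE = 16

def bus_width(sdp_interfaces: int) -> int:
    """Implied memory bus width in bits for a given sdp_interface count."""
    return sdp_interfaces * BITS_PER_SDP_INTERFACE

for gpu, interfaces in {"Navi 10": 16, "Navi 12": 16, "Navi 14": 8}.items():
    print(f"{gpu}: {interfaces} sdp_interfaces -> {bus_width(interfaces)}-bit bus")
```

Run that and Navi 10 and 12 both come out at 256-bit, with Navi 14 at 128-bit, matching the speculation above.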

Potential AMD Navi 14 GPU layout

The Navi 10 GPU design has four 64-bit memory controllers to make up the 256-bit aggregate memory bus, and it makes sense then for the Navi 12 GPU to match that layout; based on that you’d then expect the Navi 14 to sport just a pair of 64-bit memory controllers.

We’ve heard that the Navi 14 chips will have 1,536 RDNA cores inside them, and it would make sense then that those would be spread out across a single Shader Engine, with the two memory controllers either side of a 12 dual compute unit (DCU) cluster, offering the 24 discrete compute units (CU) needed to make up the touted total core count.

But the Navi 12 GPU rocking the same 256-bit memory bus as Navi 10 has fuelled speculation that this really is going to be the promised ‘Big Navi’ chip designed to take on the might of the best Nvidia RTX cards. 3DCenter has led this by speculating that we’d be looking at a 350-400mm² chip with up to 64 compute units inside it for a grand total of 4,096 RDNA cores.
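The core-count arithmetic here is simple enough to sanity-check: RDNA's published layout puts 64 stream processors in each compute unit, with CUs paired up into dual compute units. The chip totals below are the rumoured figures from the leaks, not confirmed specs:

```python
# Sanity check on the rumoured core counts. The 64-cores-per-CU and
# two-CUs-per-DCU figures are AMD's published RDNA layout; the CU counts
# fed in are the rumoured numbers, not confirmed specs.
CORES_PER_CU = 64
CUS_PER_DCU = 2

def cores(cu_count: int) -> int:
    """Total RDNA stream processors for a given compute unit count."""
    return cu_count * CORES_PER_CU

def dcus(cu_count: int) -> int:
    """Dual compute units needed to house a given compute unit count."""
    return cu_count // CUS_PER_DCU

print(f"Navi 14 rumour: {dcus(24)} DCUs, {cores(24)} cores")       # 12 DCUs, 1,536 cores
print(f"Big Navi 12 rumour: {dcus(64)} DCUs, {cores(64)} cores")   # 32 DCUs, 4,096 cores
```

So the touted 1,536-core Navi 14 and 4,096-core ‘Big Navi’ figures are internally consistent with the 24 CU and 64 CU layouts being discussed.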

Potential high-end Navi 12 GPU

That would need AMD to squeeze 16 compute units inside each of the four distinct compute engines that make up a Navi GPU design using four 64-bit GDDR6 memory controllers. That would lead to a lot of buffers, L1 cache, and rasterizers being shared between the DCUs, which doesn’t sound too efficient to us. But there has also been speculation that the 16 sdp_interface figure could instead signify a 2,048-bit HBM2 bus, in which case all bets are off about how the GPU is going to be laid out…

I’m already on record about how unconvinced I am about Navi 12 being the high-end ‘Nvidia Killer’ GPU that many Radeon fans have been hoping for, and honestly none of this has really changed my position. It is absolutely possible for Navi to “work with HBM effectively,” as AMD’s David Wang told me a couple of years back, but strapping the expensive high bandwidth memory and high bandwidth cache controller to a consumer graphics card still seems like overkill when you look at how much extra performance that will net you for the extra price.

It’s something which limited Vega in gaming terms, and I don’t see why AMD would do the same to Navi – what is very specifically a gaming GPU architecture – when GDDR6 offers a huge amount of memory bandwidth to play with.

Mainstream Navi 12 speculation

And Navi 12 simply having a 256-bit bus, matching that of the RX 570, RX 580, and RX 590 – albeit with much higher bandwidth – doesn’t seem beyond the realms of possibility for a mid-range $200-$250-odd RX 5500 GPU with 32 CUs and a maximum of 2,048 RDNA cores.

Then there’s the naming scheme. WCCFTech has published one of its exclusive articles over the weekend promising that there are going to be 5300M and 5500M GPUs popping up soon, and with HP announcing an RX 5300XT discrete GPU, and the RX 5500 already appearing in GFXBench listings, the lineup seems clear. My money’s on the Navi 14 GPU dropping into the lower-end RX 5300-series and Navi 12 hitting the RX 5500-series coming in October.

AMD Radeon graphics card

Traditional AMD GPU naming isn’t necessarily based upon the size and scale of the chip, but historically on the order in which it gets released. Normally the Vega 10 or Navi 10 would be the larger GPU in the lineup, with the 12 and 14 options being the smaller iterations, but that is generally only because they follow after the flagship chips. So that doesn’t rule out AMD’s Navi 12 being a high-performance slice of graphics silicon, but it would be odd for the company to go with the RX 5700-series first and follow up with a Navi 12-powered RX 5900 so soon.

But it’s all still speculation at the moment, and we won’t know the final specs of the next-gen Navi graphics processors until AMD spills the beans about what its post-RX 5700-series plans are. Are we looking at the end of Nvidia’s high-end dominance with a big Navi 12 GPU launching pre-Christmas, or is Navi 12 simply the next price-performance hero from the red team designed to win in the mainstream? I know what my money’s on, but I’ve been wrong before…
