Nvidia is reportedly preparing to refresh its Turing range of graphics cards this year with a higher-spec version of the GDDR6 memory it started out with. Originally the RTX 20-series GPUs were paired with Micron’s GDDR6 memory, with a peak speed of 14Gbps. There are some cards from Nvidia’s partners using GDDR6 sourced from Samsung’s fabs, but those are still sporting 14Gbps video memory.
The fresh rumour is that Nvidia will ship RTX 20-series cards with the highest-spec version of GDDR6 available, which would be running at 16Gbps and offer much higher memory bandwidth than the current graphics cards on offer from the green team. Micron has been able to overclock its memory to 20Gbps, but that might be asking a bit much right now.
With AMD’s Navi GPUs set to launch later this year you can bet your last red cent that Nvidia will want an answer of its own ready come the release of the next-gen Radeon cards, and a refreshed line of Turing GPUs makes sense from both a competitive marketing and a timing standpoint. Navi is expected before the end of September, which would make it around a year since the RTX 2080 launched alongside the RTX 2080 Ti.
The rumour has come from RedGamingTech’s latest video, where it claims a source has given it the scoop. In the video it says Nvidia is “planning to launch a series of Turing GPUs with 16Gbps memory, which is a significant uptick in clockspeed from the 14Gbps we currently have.”
RGT also makes the point that this is consistent with the Pascal refresh, which saw Nvidia boosting the GDDR5/X clockspeeds of the GTX 1060 and the GTX 1080. Those updated versions barely saw the light of day, though, and didn’t offer much of a performance boost for the old 10-series cards.
There are no other specifications given for the refreshed Turing cards, but it’s not impossible that Nvidia might offer some other minor tweaks to the GPUs. That said, there’s not really a huge amount of wiggle room in the current silicon stack, or even in the nomenclature Nvidia has in place.
We already have an RTX 2080 Ti, but there is the potential for a consumer-focused card to be launched that sports the full TU102 GPU with 4,608 CUDA cores inside it, like the Titan RTX. But if this is going to be Nvidia’s response to Navi then it doesn’t make sense to make any more higher-spec cards which will inevitably cost far more than any Navi GPU set to hit the market this year.
The TU104 used in the RTX 2080 does have some room to grow, with the full GPU offering 3,072 CUDA cores as opposed to the 2080’s 2,944, but what the hell do you call a card hovering between the RTX 2080 and the RTX 2080 Ti?!
It’s also possible we could see an RTX 2070 Ti using a cut-down TU104 GPU, as we’ve seen Nvidia mix its silicon between card ranges in the past… there have been seven different GPUs under the GTX 1060 name after all.
But the only GPU which could really benefit from a slight tweak would be the RTX 2060, which uses a cut-down version of the TU106 GPU also used by the RTX 2070. An RTX 2060 Ti with four more SMs enabled could deliver 2,176 CUDA cores, and with the 16Gbps GDDR6 would then have 384GB/s of memory bandwidth.
Put that out at the original RTX 2060’s $350 price tag and you could be looking at a competitor for AMD’s Navi cards right there, though if the spurious rumours of a Radeon RX 3080 offering RTX 2070 performance for $250 are true even that might not be enough for Nvidia to hold off the burgeoning Radeon competition.
Though I really can’t see AMD looking to undercut Nvidia by that sort of margin – putting out a $250 card which performs as well as a $500 card might please the community, but not the investors and shareholders. It’s just not a particularly smart business model.
It only makes sense if the profit margins on the new GPUs are already pretty high. And if the 7nm process allows AMD to make its GPUs that cheaply then why would it have not released the Radeon VII with a lower price tag than the RTX 2080 it was struggling to keep up with?
But what of the GTX 16-series? Well, only one of those cards actually uses GDDR6, the GTX 1660 Ti. It’s using 12Gbps memory at the moment, with 288GB/s of memory bandwidth, and it would probably be relatively straightforward to just drop the existing 14Gbps memory onto the standard GTX 1660 Ti boards to give them a bump up to 336GB/s.
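All of those bandwidth figures fall out of the same simple sum: per-pin data rate multiplied by bus width, divided by eight to convert bits into bytes. A quick sketch of the maths in Python, assuming the 192-bit memory bus that both the RTX 2060 and GTX 1660 Ti ship with:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte
def gddr6_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1660 Ti as it ships today: 12Gbps GDDR6 on a 192-bit bus
print(gddr6_bandwidth(12, 192))  # 288.0 GB/s
# GTX 1660 Ti bumped to the existing 14Gbps memory
print(gddr6_bandwidth(14, 192))  # 336.0 GB/s
# Hypothetical RTX 2060 Ti: 16Gbps GDDR6 on the same 192-bit bus
print(gddr6_bandwidth(16, 192))  # 384.0 GB/s
```

It also shows why a memory-speed refresh is such an easy win for Nvidia: the bandwidth scales linearly with the data rate, with no changes to the GPU or the board’s bus width required.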
The GPU market is going to be a fascinating place come the tail end of this year. With Navi hitting the market and it being a year since the Turing cards first launched, there is sure to be some serious head-to-head silicon duels ahead of us. But, as always, you can take this with a heavy dose of salt right now, because we are still grinding away at the rumour mill.