As 2020 draws to a close, the matter of “which new $500+ GPU is better” has become moot for most prospective PC gamers. Nvidia had a go at it starting in September with the impressive RTX 3000 series, offering on-paper value that blew its RTX 2000 series straight into obsolescence. But so far, its three RTX 3000 models have suffered from a mix of low supply and savvy scalpers scooping up such scant inventory.
This changes the tenor of any conversation about AMD’s new RDNA 2 line of GPUs. In a more stable supply-and-demand universe, I’d carefully weigh the pros and cons of this year’s $500+ GPUs from either manufacturer, since each GPU has its own clear victories. That alone is good news for AMD’s two new cards going on sale this week; it has been years since the “Red Team” has been this competitive with Nvidia.
With no clear indication that AMD will handle Radeon RX 6800 ($579) and Radeon RX 6800 XT ($649) supplies any better than Nvidia has handled its RTX 3000 series, the verdict is a bit wacky. Your best option for the rest of 2020, honestly, is whatever you can actually purchase at a reasonable retail price. These cards are duking it out closely enough that either side offers future-proofed gaming performance at high-but-fair prices. If you’re incredibly eager to upgrade to this tier and see anything in stock this year, outside of the worst resellers, close your eyes and buy.
Should both sides be sold out for the rest of the year, of course, then picking through the differences may adjust how you set up on-sale notifications for GPUs in the next few months. In that case, AMD may charm you. Across the board, Red Team’s cards have speed to spare, with the $649 RX 6800 XT in particular contending well against, if not outright surpassing, the $699 RTX 3080. And either new AMD card might be your no-question champ if you run VRAM-intensive workloads and game at 1440p, and if you are unmoved by ray tracing.
I’m sorry, how much L3 cache?!
| | AMD Radeon RX 6800 XT | AMD Radeon RX 6800 | Nvidia GeForce RTX 3080 FE | Nvidia GeForce RTX 3070 FE | Nvidia GeForce RTX 2080 Ti FE |
|---|---|---|---|---|---|
| Tensor cores | n/a | n/a | 272 (3rd-gen) | 184 (3rd-gen) | 544 |
| RT cores | n/a | n/a | 68 (2nd-gen) | 46 (2nd-gen) | 68 |
| Memory bus width | 256-bit | 256-bit | 320-bit | 256-bit | 352-bit |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 10GB GDDR6X | 8GB GDDR6 | 11GB GDDR6 |
| MSRP at launch | $649 | $579 | $699 | $499 | $1,199 |
The breakdown between Nvidia’s RTX 3000 series and AMD’s RDNA 2 series sees each side emphasize certain specs. VRAM is the most obvious differentiator, with AMD getting a 16GB pool of GDDR6 VRAM into users’ hands at every tier of this year’s line. Nvidia made its own funky wager: less VRAM of the same GDDR6 spec (8GB) in its $499 RTX 3070, and a little more (10GB) in its $699 RTX 3080, though the latter is bumped to a blistering GDDR6X configuration.
Otherwise, the spec showdown between AMD and Nvidia comes down to compute units and clock speeds. AMD’s RX 6000-series cards exceed 2.1GHz, well ahead of the 1.7GHz range of the RTX 3000 series, while their stream processor counts pale next to Nvidia’s CUDA core counts. That’s not an apples-to-apples comparison, I admit, but both of those values feed into each card’s theoretical teraflop figure.
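For reference, those theoretical teraflop figures are conventionally derived as shader count × 2 (a fused multiply-add counts as two floating-point operations) × boost clock. A quick sketch using the publicly listed shader counts and boost clocks for the two flagships (figures here are illustrative, not our own measurements):

```python
# Conventional peak-FP32 estimate: each shader retires one fused multiply-add
# (2 floating-point ops) per clock, so TFLOPS = shaders * 2 * clock_GHz / 1000.
def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000

# Publicly listed specs: 4608 stream processors at ~2.25GHz boost for the
# RX 6800 XT vs. 8704 CUDA cores at ~1.71GHz boost for the RTX 3080.
print(f"RX 6800 XT: ~{peak_tflops(4608, 2.25):.1f} TFLOPS")  # ~20.7
print(f"RTX 3080:   ~{peak_tflops(8704, 1.71):.1f} TFLOPS")  # ~29.8
```

Note how the paper math flatters Nvidia here; as the benchmarks elsewhere in this review show, peak teraflops don’t translate directly into frame rates.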
Like the RTX 2000 series before it, Nvidia’s RTX 3000 cards include two types of proprietary cores: one pool dedicated to ray tracing, and the other dedicated to mathematical computation and the juggling of Nvidia’s own machine-learning models. AMD says that it has added its own “ray accelerators” to the RX 6000 series’ boards (one per compute unit). In terms of ray tracing-specific computation, AMD describes them as able to calculate “up to four ray/box intersections or one ray/triangle intersection every clock.” This is similar to how Nvidia’s RT cores work, though the RTX 3000 series upgrades those cores to additionally handle the interpolation of triangle positions over time (used to efficiently render ray-traced motion blur) and to double the prior generation’s triangle-intersection rate.
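To make that jargon concrete, a “ray/box intersection” is the slab test that ray-tracing hardware runs against the bounding boxes of a scene’s acceleration structure to decide which branches a ray must visit, saving the costlier ray/triangle test for the leaves. A minimal software sketch of the same test (the function and its names are mine, purely illustrative):

```python
import math

def ray_intersects_box(origin, inv_dir, box_min, box_max):
    """Slab-method ray/AABB test; inv_dir holds 1/direction per axis."""
    t_near, t_far = 0.0, math.inf
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))  # latest entry across all slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across all slabs
    return t_near <= t_far  # hit: the ray is inside all three slabs at once
```

Each ray accelerator or RT core performs a handful of these fixed-function tests per clock, which is why these per-clock intersection rates are the spec both vendors quote.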
AMD currently has no answer to Nvidia’s tensor cores and their effectiveness at intelligently upscaling a moving, lower-resolution image. Instead, AMD reps hinted loudly at something called FidelityFX Super Resolution coming to its latest GPUs at some undetermined point in the future. For now, nothing in the company’s press materials clarifies exactly how this system will work, how it might differ from the existing FidelityFX sharpening system, or what in the RDNA 2 architecture Super Resolution will leverage.
Weirdly, AMD had little specific to say about one of the more incredible spec figures on its RX 6000-series GPUs, one unmatched by Nvidia: a whopping 128MB of L3 cache (surpassing the already staggering 64MB of L3 cache found on various AMD Ryzen CPUs). Exactly how current or future games will leverage that high-bandwidth chunk of cache, particularly alongside techniques like tile-based or deferred rendering in a game’s juggling of GPU and CPU resources, remains unclear.