One of our biggest concerns for this benchmark was how we were going to configure each CPU. Normally we’d just test both using low-latency, dual-rank DDR4-3200 CL14 memory for an apples-to-apples comparison, but we weren’t convinced that’d make the most sense for these two CPUs, given AMD and Intel are pitching them as the ultimate gaming processors.

As a gamer, you would only buy the Core i9-12900K because you want the best of the best, at least from Intel. Whereas you’d buy the Ryzen 7 5800X3D because you want the best gaming CPU the AM4 platform has to offer. With each CPU claiming to be the best for gamers, we thought it’d make the most sense to test them with the best possible memory configuration.

After having tested all four configurations for our day-one 5800X3D review a week ago, we found the DDR4-3800 vs DDR5-6400 comparison the most interesting, so that’s what we’ve gone with. This means all the testing you’re about to see was gathered using the Core i9-12900K on the MSI Z690 Unify motherboard with G.Skill’s Trident Z5 DDR5-6400 CL32 memory, while the Ryzen 7 5800X3D was tested using DDR4-3800 CL16 memory on the MSI X570S Carbon Max WiFi.

DDR4-3800 is the fastest memory we can use with the Ryzen 7 while running at a 1:1 ratio with the FCLK. The memory used to test the Ryzen 7 processor costs $265, while the DDR5 memory used with the Core i9 currently costs $480. One other test system note to be aware of: Resizable BAR is enabled, and all testing was done with the GeForce RTX 3090 Ti.
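For readers wondering what that 1:1 ratio means in practice, the arithmetic is straightforward: DDR4 is double data rate, so DDR4-3800 runs a 1900 MHz memory clock, and keeping the Infinity Fabric clock (FCLK) coupled means running it at 1900 MHz as well. Here’s a minimal Python sketch of that relationship; the helper name is ours, purely for illustration:

```python
# Minimal sketch of the Zen 3 coupled (1:1) memory/fabric relationship.
# The helper name is ours for illustration, not from any vendor tool.

def fclk_for_1to1(ddr_rating_mts: int) -> int:
    """DDR is double data rate: the memory clock in MHz is half the
    MT/s rating, and a 1:1 setup runs the FCLK at that same clock."""
    return ddr_rating_mts // 2

print(fclk_for_1to1(3800))  # 1900 MHz FCLK for DDR4-3800
print(fclk_for_1to1(3200))  # 1600 MHz FCLK for DDR4-3200
```

Push the memory much past DDR4-3800 and the fabric typically can’t keep up, forcing a 2:1 divider that hurts latency, which is why DDR4-3800 is the practical ceiling here.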

Benchmarks

Starting with Valorant, tested using medium quality settings given this is a competitive title, the 5800X3D pushes the RTX 3090 Ti to 668 fps on average at 1080p, making it 15% faster than the 12900K with its DDR5-6400 memory, or 32% faster when comparing the 1% low data, which is a significant performance improvement. That said, the Core i9 processor did allow for well over 300 fps at all times in our testing.
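Since we quote 1% low data throughout this article, here’s a quick note on how that metric is typically derived: it reflects the slowest 1% of frames in a run rather than the overall average. The Python sketch below shows one common way to compute it from captured frame times; it illustrates the concept and isn’t our exact capture pipeline:

```python
# One common way to derive average fps and 1% lows from frame times.
# Illustrative only; actual capture/analysis tooling may differ.

def fps_metrics(frame_times_ms: list[float]) -> tuple[float, float]:
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    # Average the slowest 1% of frames, then convert to fps.
    slowest = sorted(frame_times_ms, reverse=True)
    worst = slowest[: max(1, len(slowest) // 100)]
    low_1pct_fps = 1000 / (sum(worst) / len(worst))
    return avg_fps, low_1pct_fps

avg, low = fps_metrics([1.4, 1.5, 1.5, 1.6, 3.1])  # frame times in ms
print(f"{avg:.0f} fps average, {low:.0f} fps 1% low")  # 549 / 323
```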

Similar margins are also seen at 1440p and 4K because we’re heavily CPU limited in this title when using the RTX 3090 Ti with medium quality settings, and that will likely be the case with most higher-end GPUs.

Fortnite also favors the 5800X3D at 1080p and 1440p, though more so for the average frame rate than for the 1% lows. Here the Ryzen 7 processor was up to 11% faster, and it’s not until we jump up to 4K that the margin is neutralized, with both CPUs limited to the same level of performance.

Next we have Call of Duty Warzone, and this title appears to play better with the higher-clocked Core i9-12900K, which pushed the average frame rate at 1080p 17% higher with a 21% boost to 1% lows, and these margins were roughly the same at 1440p. Even at 4K, the Core i9 processor enjoyed a reasonably significant performance advantage, and while it is difficult to get accurate numbers with these multiplayer titles, this data is based on a 3-run average.

Moving on to Assetto Corsa Competizione, this one heavily favors the 5800X3D, boosting performance at 1080p and 1440p over the 12900K by around 24%, though only up to 13% for the 1% lows. Then by the time we reach 4K the game becomes entirely GPU limited using the RTX 3090 Ti with the medium quality settings.

Another title we constantly see requested for CPU benchmarking is Cities Skylines, and while we have included it in the past, it’s always proven to be a terrible title for benchmarking due to its single-core nature, which creates a heavy bottleneck. But by popular request we thought we’d give it another shot, using a large city from a save game we downloaded.

Looking at CPU utilization, the game only heavily loads a single core, with an additional 2-3 cores utilized to the tune of 20-30% on the 12900K. As a result the game is heavily CPU limited, and the massive L3 cache of the 5800X3D doesn’t help it pull ahead of the 12900K. It could certainly help relative to the 5800X, but we don’t have that data just yet. When it comes to Cities Skylines performance, these two CPUs are a close match and both failed to hit 60 fps, though you don’t need high frame rates to enjoy this game.

StarCraft II also only utilizes a single core, and while frame rates are much higher than in Cities Skylines, the game is heavily CPU limited. The 12900K was 5% faster, but both processors pushed over 200 fps in our mid-game replay benchmark.

Apex Legends runs slightly better on the 5800X3D, offering a small performance advantage over the 12900K, even at 4K. We’re looking at a 4% increase at 1080p, 6% at 1440p and 5% at 4K. Certainly not a difference you’ll notice, but the AMD CPU is technically faster here.

Unlike Apex Legends, Dying Light 2 isn’t a game where you need hundreds of frames per second, so either CPU works well despite the fact that the 5800X3D was a little faster.

Rainbow Six Siege is a competitive game, so for this test we opted for slightly dialed-down quality settings, using the very high quality preset as opposed to ultra. We’re only GPU limited at 4K, and even there frame rates exceeded 200 fps. At 1080p the 5800X3D was 3% faster, but then 3% slower at 1440p. You could claim margin of error, but after numerous 3-run averages the AMD CPU was consistently a little faster at the lower resolution and a little slower at the higher resolution. Interestingly, these same performance trends were also seen in Rainbow Six Extraction using the Vulkan API.

We also tested Battlefield 2042, but we’re showing the Battlefield V results because a lot more people are actually playing that game. Also, 2042 sucks, and we were booted from our test server several times while gathering this data. For Battlefield V fans, either CPU will serve you well, as performance was basically identical.

F1 2021 was tested using the high quality preset and we’re looking at well over 300 fps at 1080p. The 5800X3D was technically faster, but we’re talking about a small 3% boost to the average frame rate and a 7% increase in 1% lows. At 1440p the 12900K jumped ahead by a mere 3%, and at the GPU limited 4K resolution performance was identical.

In Halo the 12900K was up to 6% faster, but performance overall was much the same. At 1440p we’re looking at over 150 fps on average with just a few frames separating each CPU. Even at 4K, frame rates remained above 100 fps.

It’s a similar story in Red Dead Redemption 2 using the high quality settings. Performance is identical with just 1-2 fps in it at 1080p and nothing at 1440p and 4K.

The Outer Worlds is an Unreal Engine 4 game, and that engine typically favors Intel and Nvidia hardware, but here the 5800X3D does very well. Impressively, at 1080p the Ryzen CPU was up to 17% faster when looking at the 1% lows, and even the average frame rate was still 14% higher. At 1440p, the 5800X3D was comfortably out in front in the 1% lows, where it was 16% faster. Oddly, the 4K data still favored the Ryzen processor, where it offered up to 10% more performance, a great result for the cache-heavy 5800X3D.

In Death Stranding, the 12900K limited performance at 1080p to 212 fps, making the 5800X3D 13% faster, and this margin was only slightly reduced to 10% at 1440p. Then at 4K we’re completely GPU bound, with both CPUs wrapping things up at 147 fps.

Performance Summary

We’ve looked at a little over a dozen of the games tested, but with 40 in total there’s a lot more data to go over, so let’s take a look at the breakdown graphs covering the 1080p, 1440p and 4K resolutions. Starting with the 1080p results, the 5800X3D was just 1% faster on average, but that’s an impressive result given the Ryzen processor is using DDR4-3800 CL16 memory, while the 12900K was paired with much more expensive DDR5-6400 CL32 memory. The 5800X3D enjoyed big wins in ACC, Valorant, The Outer Worlds and Death Stranding, while it was 14% slower in Hitman 3 and Warzone.

It’s worth noting that of the 40 games tested, 60% of them saw a margin of 5% or less in either direction, meaning performance was very similar for the majority of the games tested, hence the 1% margin overall. Moreover, there were just 11 games where the margin was 10% or greater, and remember we’re using an RTX 3090 Ti, often with dialed-down quality settings, so overall these two flagship gaming CPUs are typically evenly matched.
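As an aside on how 40 individual results condense into a single “X% faster on average” figure: the usual approach for benchmark summaries is a geometric mean of per-game performance ratios, since it treats a 10% win and a 10% loss symmetrically. Here’s a hedged sketch of that calculation; we’re not claiming this is our exact method, and the sample numbers are made up:

```python
# Hedged sketch: condensing per-game fps results into one average
# margin via a geometric mean of ratios. Sample data is illustrative.
import math

def average_margin(fps_a: list[float], fps_b: list[float]) -> float:
    """Return CPU A's average advantage over CPU B as a percentage."""
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    geomean = math.prod(ratios) ** (1 / len(ratios))
    return (geomean - 1) * 100

cpu_a = [668.0, 312.0, 145.0]  # e.g. 5800X3D results (made up)
cpu_b = [581.0, 309.0, 149.0]  # e.g. 12900K results (made up)
print(f"{average_margin(cpu_a, cpu_b):+.1f}%")  # ~ +4%
```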

Increasing the resolution to 1440p doesn’t change the margin, with the 5800X3D ~1% faster overall. However, we’re now looking at just half a dozen instances where the margin extended to double digits, with 29 games where the margin was 5% or less. ACC was an outlier again, favoring the 5800X3D by a 26% margin, while CoD Warzone was the worst result for AMD.

At 4K, Valorant and The Outer Worlds were the best titles for AMD, while CSGO was the only game where the 5800X3D suffered a double-digit loss. Overall, the 5800X3D was just 1% faster, which means performance was about the same overall. We saw a margin of 3% or less in 26 of the 40 games tested, meaning 65% of the time we were looking at near identical performance at 4K, where we’re generally GPU bound.

What We Learned

As we discovered in our launch review, the Ryzen 7 5800X3D is indeed a great CPU for gaming, offering big gains over the original Zen 3 models to match Intel’s latest and greatest. In our day-one review we only featured 8 games (with various memory configurations), and from that sample the 12900K paired with DDR5-6400 memory was 2.5% faster. Today we have 40 games and the margin has narrowed to a single percent delta in AMD’s favor, which for all practical purposes means these flagship parts deliver comparable gaming performance.

Based on that, we expect the 5800X3D to be ~7-8% faster than the 12900K if both were using the same DDR4 memory. That’s not a massive margin, but it would make the Ryzen 7 part the superior performer overall in that matchup. It also means the Ryzen could be around 10% faster than the Core i7-12700K (arguably Intel’s best value gaming CPU), so perhaps that’s a comparison we should be running soon. On that note, we’re also keen to add the Ryzen 7 5800X to this 40 game benchmark comparison, as it should be interesting to see when and where that big L3 cache comes in handy. There are certainly a few interesting spin-off comparisons we can make, so do let us know what you’d like to see in the comments section.

As for the Ryzen 7 5800X3D vs. Core i9-12900K battle, our day-one review summed it up best, and although we’ve tested five times more games here, the conclusion remains unchanged: for those who are seeking maximum gaming performance but don’t want to go stupid with pricing, and who care a bit about power efficiency, the 5800X3D seems like the obvious option. The advantage of the 12900K is that the LGA 1700 platform will support at least one more CPU generation, whereas this is the end of the road for the AM4 platform. You could also argue that the 12900K can be overclocked, but honestly you’re not getting much more out of this chip, and it’s already a handful to cool at stock.

Both of these CPUs are targeting gamers who are seeking the best of the best (the Core i9 will be the superior CPU for productivity). We’ve mentioned before how the 12th-gen Core i7 range makes more sense for gamers, but make no mistake, the extra 20% L3 cache offered by the Core i9 does make a difference when it comes to ultimate gaming performance.

And if you’re after the very best gaming performance, you’re unlikely to settle for DDR4 memory with the Core i9-12900K; you might with the 12700K, but not the i9. Likewise, you’re not going to use DDR4-3200 or slower with the 5800X3D; DDR4-3800 seems more fitting, which is why we tested these configurations. The Core i7-12700KF is more affordable than the 5800X3D, so in terms of value that part should match up well if using DDR4-3800 memory. That’s probably how we’ll run that comparison, as we’re not sure pairing it with $480 DDR5 memory would be the right move. It would also provide apples-to-apples data should we test the 5800X. To wrap this one up, the 5800X3D is very impressive, but so is Intel’s Alder Lake series, so you have plenty of great options on both sides, which is great news for consumers.