It isn't the first time we've heard about the Radeon RX 5300, but it certainly comes as a surprise that it took AMD so long to finally launch it.
The Radeon RX 5300 is positioned as an entry-level graphics card that promises solid gaming performance at 1080p, but it's doubtful we'll see it on our list of Best GPUs. The graphics card might be an OEM exclusive: the only place we've ever seen the Radeon RX 5300 is inside pre-built machines. As you would expect from a Navi-based offering, the Radeon RX 5300 supports the PCIe 4.0 interface, not that it will benefit from increased throughput since the graphics card is confined to an x8 connection.
At its heart, you'll find AMD's Navi 14 die, the same 7nm TSMC-produced silicon that supposedly lives inside the Radeon RX 5300 XT. Specification-wise, we're looking at 1,408 Stream Processors (SPs) with a game clock and boost clock up to 1,448 MHz and 1,645 MHz, respectively. On the memory side, the Radeon RX 5300 has to work with 3GB of 14 Gbps GDDR6 memory communicating across a 96-bit memory bus. As a result, the memory bandwidth on the Radeon RX 5300 maxes out at 168 GBps.
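As a sanity check on that figure, peak GDDR6 bandwidth is just the bus width in bytes multiplied by the per-pin data rate. A minimal sketch of the arithmetic, using the specifications above:

```python
def gddr6_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin rate."""
    return (bus_width_bits / 8) * data_rate_gbps

# Radeon RX 5300: 96-bit bus at 14 Gbps per pin
print(gddr6_bandwidth_gbps(96, 14.0))   # 168.0 GB/s
# Radeon RX 5500 XT: 128-bit bus at 14 Gbps per pin
print(gddr6_bandwidth_gbps(128, 14.0))  # 224.0 GB/s
```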
If the Radeon RX 5300's specifications look familiar to you, you're not crazy. Essentially, the Radeon RX 5300 is a cut-down version of the Radeon RX 5500 XT. Both models share the same shader count; the main differences are the lower clock speeds, less memory, and a narrower memory interface on the Radeon RX 5300.
AMD Radeon RX 5300 Specifications
| Specification | AMD Radeon RX 5500 XT | AMD Radeon RX 5300 | Nvidia GeForce GTX 1650 |
|---|---|---|---|
| Architecture (GPU) | RDNA (Navi 14) | RDNA (Navi 14) | Turing (TU117) |
| ALUs | 1,408 | 1,408 | 896 |
| Texture Units | 88 | 88 | 56 |
| Base Clock Rate | 1,607 MHz | ? | 1,485 MHz |
| Nvidia Boost / AMD Game Rate | 1,717 MHz | 1,448 MHz | 1,665 MHz |
| AMD Boost Rate | 1,845 MHz | 1,645 MHz | N/A |
| Memory Capacity | 4GB GDDR6 | 3GB GDDR6 | 4GB GDDR5 |
| Memory Speed | 14 Gbps | 14 Gbps | 8 Gbps |
| Memory Bus | 128-bit | 96-bit | 128-bit |
| Memory Bandwidth | 224 GBps | 168 GBps | 128 GBps |
| ROPs | 32 | 32 | 32 |
| L2 Cache | 2MB | 1.5MB | 1MB |
| TDP | 130W | 100W | 75W |
| Transistor Count | 6.4 billion | 6.4 billion | 4.7 billion |
| Die Size | 158 mm² | 158 mm² | 200 mm² |
The Radeon RX 5300 has a low power requirement. AMD rates the graphics card with a 100W TBP (total board power), suggesting that it can get away with just a single 6-pin PCIe power connector. The graphics card will be copacetic even inside systems that only have a 350W power supply.
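For context on why a single 6-pin suffices, here's the power-budget arithmetic, assuming the standard PCIe limits of 75W from the x16 slot and 75W from a 6-pin connector:

```python
SLOT_POWER_W = 75      # PCIe x16 slot limit per the spec
SIX_PIN_POWER_W = 75   # standard 6-pin PCIe connector limit

available = SLOT_POWER_W + SIX_PIN_POWER_W
tbp = 100              # AMD's rated total board power for the RX 5300

print(f"Available: {available}W, TBP: {tbp}W, headroom: {available - tbp}W")
# Available: 150W, TBP: 100W, headroom: 50W
```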
To paint a broader picture, AMD provided some performance charts on the Radeon RX 5300's product page, pitting the Navi-based graphics card against Nvidia's GeForce GTX 1650. However, AMD didn't specify exactly which variant of the GeForce GTX 1650 it used in the comparison. In case you don't recall, Nvidia has put out four different versions of the GeForce GTX 1650.
At any rate, AMD claims that the Radeon RX 5300 performs anywhere from 18.6% to 56.8% faster than the GeForce GTX 1650, depending on the game. The list of results includes popular titles, such as Battlefield 5, Monster Hunter World, Call of Duty: Modern Warfare and PlayerUnknown's Battlegrounds. AMD used a mix of high and ultra settings for the tests. The chipmaker didn't explicitly state the resolution, but since the Radeon RX 5300 is aimed at 1080p gaming, we expect that's the resolution used.
The GeForce GTX 1650 launched at $149. Assuming that the Radeon RX 5300 isn't OEM-exclusive, the graphics card should compete in the sub-$150 price range.
Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.
clutchc:
Why didn't they go all the way cutting the GPU down and get the TDP to <75W so it wouldn't need a 6-pin at all?
DonGato:
Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM.
So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.
Place is becoming a joke, hence vacant comment sections.
InvalidError:
DonGato said: "So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16."
It cannot make use of the "extra" bandwidth since 4.0 x8 and 3.0 x16 are the SAME bandwidth; there is no extra there over older GPUs that had 3.0 x16.
Cost-wise, 3.0 x16 has been present on GPUs down to $50, so cost isn't really much of a factor. Looking at 4GB vs. 8GB RX 5500 results, the more likely reason for AMD limiting entry-level GPUs to 4.0 x8 is to protect the marketability of the 8GB models, since the 4GB models would likely match 8GB performance on 4.0 x16 instead of being 50% as good on 3.0 x8 and ~80% as good on 4.0 x8.
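The link-rate arithmetic behind that point is easy to verify. A minimal sketch, assuming the standard per-generation transfer rates and 128b/130b encoding for PCIe 3.0 and 4.0:

```python
# Per-lane transfer rate (GT/s) and encoding efficiency per PCIe generation
PCIE_GEN = {
    3: (8.0, 128 / 130),   # PCIe 3.0: 8 GT/s, 128b/130b encoding
    4: (16.0, 128 / 130),  # PCIe 4.0: 16 GT/s, 128b/130b encoding
}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable one-way bandwidth in GB/s for a PCIe link."""
    rate_gt, efficiency = PCIE_GEN[gen]
    return rate_gt * efficiency * lanes / 8  # divide by 8 bits per byte

print(f"PCIe 3.0 x16: {pcie_bandwidth_gbps(3, 16):.2f} GB/s")  # ~15.75 GB/s
print(f"PCIe 4.0 x8:  {pcie_bandwidth_gbps(4, 8):.2f} GB/s")   # ~15.75 GB/s
print(f"PCIe 3.0 x8:  {pcie_bandwidth_gbps(3, 8):.2f} GB/s")   # ~7.88 GB/s
```

The catch, as the RX 5500 numbers later in this thread show, is that the same x8 card drops to 3.0 x8 on a PCIe 3.0 board, which is half the bandwidth.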
digitalgriffin:
DonGato said: "Technical competency of Tom's keeps falling. The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM. So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16. Place is becoming a joke, hence vacant comment sections."
System memory is orders of magnitude slower than VRAM, whether over PCIe 3.0 x16 or PCIe 4.0 x8.
Anyway, this card is a joke at that price. It's slower than an RX 570, has less memory, and costs more. It needs a $120 price point at most.
InvalidError:
digitalgriffin said: "System memory is orders of magnitude slower than VRAM, whether over PCIe 3.0 x16 or PCIe 4.0 x8."
System memory is ~50GB/s vs. 100-1,000 GB/s for VRAM, nowhere near orders of magnitude; it's merely double at the low end and barely over an order of magnitude at the very top.
System memory does not need to be anywhere near as fast as VRAM to be useful; it only needs to be fast enough to handle less frequently used assets. Using system memory as a sort of victim cache for VRAM is actual orders of magnitude faster than waiting for software to reload assets from storage.
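To put rough numbers on that comparison, here is the time to re-fetch 1GB of assets from different sources; the throughput figures below are illustrative assumptions, not measurements:

```python
# Illustrative peak throughputs in GB/s (assumed round numbers, not measured)
SOURCES = {
    "VRAM (GDDR6, RX 5300)": 168.0,
    "System RAM over PCIe 4.0 x8": 15.75,
    "NVMe SSD": 3.5,
    "SATA SSD": 0.55,
}

ASSET_GB = 1.0
for name, gbps in SOURCES.items():
    print(f"{name}: {ASSET_GB / gbps * 1000:.1f} ms per GB")
# VRAM: ~6 ms; PCIe 4.0 x8: ~63 ms; NVMe: ~286 ms; SATA: ~1818 ms
```

Even before counting the software overhead of a reload, spilling to system RAM over the PCIe link beats re-reading from even a fast SSD by a wide margin, which is the point being made here.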
hannibal:
This may be the low-end GPU option once the RX 570 is no longer available... Very likely after the next Black Friday.
DonGato:
InvalidError said: "It cannot make use of the 'extra' bandwidth since 4.0 x8 and 3.0 x16 are the SAME bandwidth; there is no extra there over older GPUs that had 3.0 x16..."
C'mon, mod, read what the article said and what I said together.
"...not that it will benefit from increased throughput since the graphics card is confined to an x8 connection."
So YES, it benefits from PCIe 4.0 because it is the same bandwidth as PCIe 3.0 x16, which I also said.
And you have no way of knowing what it actually costs AMD, the leader in PCIe 4.0, to put in an x8 vs. an older x16 connection, marketing purposes aside. However, I'd love to see an internal price sheet.
InvalidError:
DonGato said: "And you have no way of knowing what it actually costs AMD, the leader in PCIe 4.0, to put in an x8 vs. an older x16 connection."
I know AMD can put a 4.0 x16 controller in a $100 CPU complete with CCD, IOD, a 15-layer CPU substrate, IHS, and stock HSF. The PCIe 4.0 x16 interface cannot be a substantial chunk of that.
digitalgriffin:
InvalidError said: "System memory is ~50GB/s vs. 100-1,000 GB/s for VRAM, nowhere near orders of magnitude... Using system memory as a sort of victim cache for VRAM is actual orders of magnitude faster than waiting for software to reload assets from storage."
I might be willing to argue this. It's not like you are going straight from the GPU to memory. You are going GPU -> CPU memory controller -> memory -> CPU memory controller -> GPU. Even with a heterogeneous memory architecture, that's a lot of overhead.
And 3GB was showing its age on my 7970 about three or four years back. There's the display buffer, draw buffer, draw call buffer, and z-buffer. A lot of the memory that's required gets eaten up quickly. Then you need room for shader programs to be stored.
So your 50GB/s theoretical throughput has a lot of latency associated with it, which ultimately affects speed.
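As a rough illustration of how those per-frame buffers add up at 1080p (the buffer choices and per-pixel byte sizes below are illustrative assumptions; real engines vary widely):

```python
WIDTH, HEIGHT = 1920, 1080

# Illustrative per-pixel sizes in bytes; actual formats vary by engine
BUFFERS = {
    "color buffer (RGBA8, double-buffered)": 4 * 2,
    "depth/stencil buffer (D24S8)": 4,
    "HDR render target (RGBA16F)": 8,
}

total_bytes = sum(WIDTH * HEIGHT * bpp for bpp in BUFFERS.values())
print(f"Render targets alone: {total_bytes / 2**20:.1f} MiB")  # ~39.6 MiB
```

The fixed render targets come to tens of megabytes; add mipmapped textures, geometry, and shader storage on top and a 3GB card fills up quickly.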
InvalidError:
digitalgriffin said: "So your 50GB/s theoretical throughput has a lot of latency associated with it, which ultimately affects speed."
Latency does not matter much on GPUs, since the thread wave scheduler usually knows what data a thread wave will work on and will launch those waves once the data has been prefetched as usual, regardless of where that data comes from. As long as there isn't so much stuff needing system memory that the GPU runs out of work it can do from VRAM faster than it can complete the parts that require system memory, there should be little to no negative impact from using system RAM.
If you look at the RX 5500's benchmarks, in VRAM-intensive places where the 8GB models can do 60-70 fps, the 4GB models on 3.0 x8 crash to 20-30 fps and bounce back to 50-60 fps on 4.0 x8. It clearly does not mind the higher latency of using system RAM over 4.0 x8 and greatly benefits from getting an extra ~16GB/s out of it. Had the 4GB RX 5500 had a full 4.0 x16, it would likely come out practically as good as the 8GB version for $40 less, at least as far as the current crop of VRAM-intensive games is concerned.