DDR3-1333 Speed and Latency Shootout
Super Talent PC3-10600 CL8
With a model name that actually means something and a brand known for extreme overclocking capability, Super Talent's W1333UX2G8, a 2x 1 GB dual-channel kit, is an easy choice even before we consider whether or not it's the "best" one.
This is the same brand that rushed DDR3-1600 modules capable of overclocking beyond a 2 GHz data rate to market before most competitors had even produced DDR3-1333. On the other hand, middling rated timings of 8-8-8-18 at an extra-high 1.80 volts instill far less enthusiasm for these "mid-speed" parts. Only testing will prove whether they can live up to Super Talent's great overclocking reputation.
Super Talent's SPD table doesn't include any DDR3-1333 (667 MHz clock speed) value, and the modules are instead electronically labeled as DDR3-1066 parts. This means that most systems will automatically configure DDR3-1066 values.
Super Talent is the only brand in this comparison to provide Intel XMP SPD extensions, which work much like the EPP (Enhanced Performance Profiles) familiar to DDR2 enthusiasts, allowing select motherboards to automatically configure higher-than-standard voltage for an overclocked setting. In this case, Super Talent allows its DDR3-1333 to be automatically overclocked to DDR3-1600 at an incredibly high 2.00 volts.
-
dv8silencer: I have a question. On your page 3, where you discuss the memory myth, you do some calculations:
"Because cycle time is the inverse of clock speed (1/2 of DDR data rates), the DDR-333 reference clock cycled every six nanoseconds, DDR2-667 every three nanoseconds and DDR3-1333 every 1.5 nanoseconds. Latency is measured in clock cycles, and two 6ns cycles occur in the same time as four 3ns cycles or eight 1.5ns cycles. If you still have your doubts, do the math!"
Based on the cycle-based latencies of DDR-333 (CAS 2), DDR2-667 (CAS 4), and DDR3-1333 (CAS 8), and their frequencies, you come to the conclusion that each of the memory types will retrieve data in the same amount of time. The higher CAS latencies are offset by the higher frequencies of the newer technologies, so that even though DDR2 and DDR3 take more cycles, they also go through more cycles per unit time than DDR. How is it, then, that DDR2 and DDR3 technologies are "better" and provide more bandwidth if they deliver data in the same amount of time? I do not know much about the technical details of how RAM works, and I have always had this question in mind.
Thanks
-
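To make the quoted arithmetic concrete, here is a minimal Python sketch (an illustration built from the figures quoted above, not code from the article) that converts each rated CAS latency from clock cycles into nanoseconds.

# Convert rated CAS latency from clock cycles to nanoseconds.
# Data rate is in MT/s; the actual clock runs at half that (double data rate).
modules = {
    "DDR-333 CL2":   (333, 2),
    "DDR2-667 CL4":  (667, 4),
    "DDR3-1333 CL8": (1333, 8),
}

for name, (data_rate, cas_cycles) in modules.items():
    clock_mhz = data_rate / 2            # e.g. 1333 MT/s -> ~667 MHz clock
    cycle_ns = 1000.0 / clock_mhz        # e.g. ~1.5 ns per clock at DDR3-1333
    print(f"{name}: {cas_cycles * cycle_ns:.1f} ns")   # all three land at ~12 ns

All three generations come out to roughly 12 ns of true CAS latency, which is exactly the point the quoted passage makes.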
Latency = How fast you can get to the "goodies"
Bandwidth = Rate at which you can get the "goodies"
-
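To put numbers on that distinction, a similar sketch (again only an illustration, using the standard peak-bandwidth figure of eight bytes per transfer on a 64-bit channel) shows why the newer generations still win: true latency stayed at roughly 12 ns while peak bandwidth doubled with each generation.

# Peak theoretical bandwidth of a single 64-bit (8-byte-wide) memory channel.
for name, data_rate in [("DDR-333", 333), ("DDR2-667", 667), ("DDR3-1333", 1333)]:
    peak_mb_s = data_rate * 8            # MT/s x 8 bytes per transfer
    print(f"{name}: ~{peak_mb_s} MB/s peak")   # roughly 2.7, 5.3, and 10.7 GB/s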
So, I have OCZ memory I can run stable at
7-7-6-24 2T at 1333 MHz or
9-9-9-24 2T at 1600 MHz.
This is with the FSB at 1600 MHz, unlinked. Is there a method to calculate the best setting without running hours of benchmarks?
-
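As a rough first-order check (only a sketch based on the two settings quoted above, not a substitute for benchmarking the actual workload), you can compare them by converting CAS latency into nanoseconds and computing peak bandwidth: the DDR3-1333 setting has slightly lower true latency, while the DDR3-1600 setting has higher peak throughput.

# Compare the two stable settings: true CAS latency (ns) vs. peak bandwidth.
settings = [("7-7-6-24 2T @ DDR3-1333", 1333, 7),
            ("9-9-9-24 2T @ DDR3-1600", 1600, 9)]

for name, data_rate, cl in settings:
    clock_mhz = data_rate / 2
    cas_ns = cl * 1000.0 / clock_mhz     # ~10.5 ns at DDR3-1333 CL7, ~11.2 ns at DDR3-1600 CL9
    peak_mb_s = data_rate * 8            # per 64-bit channel
    print(f"{name}: CAS ~{cas_ns:.1f} ns, peak ~{peak_mb_s} MB/s")

Which of the two is actually faster then depends on whether the workload is bound by latency or by bandwidth, which is why reviews still fall back on benchmarks.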
Sorry dude, but you are underestimating the ReaperX modules. However much I want to see what temperatures the other modules reached at a voltage of ~2.1 V, that does not mean the Platinum series isn't fast. But I saw a ReaperX reach 940 MHz easily at 1.9 V (EVP), which is nearly DDR3-1900, and that is something. And in terms of stability and temperature over hours of operation, the ReaperX beats them all.
-
All SDRAM (including the DDR variants) works more or less the same way: the chips are divided into banks, banks are divided into rows, and rows contain the data (as columns).
First you issue a command to open a row (this is your latency); then, within that row, you can access any data you want at a rate of one datum per cycle, with latency depending on pipelining.
So, for instance, if you want to read 1 datum at address 0 it will take your CAS latency + 1 cycle.
If you want to read 8 datums at address 0 it will take your CAS latency + 8 cycles.
Since CPUs like to fill their cache lines with the data that will probably be accessed next, they always read more than you asked for anyway, so the extra throughput provided by a higher clock speed helps.
But if the CPU stalls waiting for RAM, it is the latency that matters.
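Here is a minimal Python sketch of that simplified model (CAS latency plus one cycle per datum from an already-open row; it ignores row-activation timings such as tRCD and tRP, and the fact that DDR actually delivers two datums per clock, so the numbers are only illustrative).

# Simplified read-time model from the description above:
# total cycles ~= CAS latency + 1 cycle per datum read from an already-open row.
def read_time_ns(cas_cycles, n_datums, data_rate):
    clock_mhz = data_rate / 2            # DDR3-1333 -> ~667 MHz clock
    cycle_ns = 1000.0 / clock_mhz
    return (cas_cycles + n_datums) * cycle_ns

# DDR3-1333 CL8: a single datum versus an 8-datum burst.
print(f"{read_time_ns(8, 1, 1333):.1f} ns")   # ~13.5 ns
print(f"{read_time_ns(8, 8, 1333):.1f} ns")   # ~24.0 ns, the fixed latency still dominates

Reading eight times as much data takes well under twice as long, which is the throughput side of the story; a CPU stalled waiting for the first datum only sees the latency side.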