
Talk:Multi-channel memory architecture


I've also mixed and matched sizes within 3 channels on both a Nehalem i7 and an old junk xeon of some sort, I'm ashamed to admit. The first case works because when 8 slots exist the processor divides them into 2 physical banks which can be selected via DDR{0/1/2/3}_BA pins and logical bank groups per socket addressed via a similar method. The second case works because Intel flat out says they support it and you don't need to dig anything else up, and they've always supported it. The whole "use the exact same matching set of memory everywhere" has to do primarily with the usual BS spread by overclockers and the barrage of idiots who decided to start a "tech blog" for quick ad revenue and free parts but don't bother to do any research or actually try anything weird out... you know, the things that would actually interest people instead of 20 pages of artificial benchmarks. My mismatched size memory is also mismatched in timings and base JEDEC speed, and the timings improved when more was installed via regular old POST mem training. It somehow settled on some weird hybrid that's faster than either type of stick and not directly supported by either. Even more weirdly it
case of multi-chip processors with memory controllers built in, which are fairly new (the Q6600 was an MCM but still had a Northbridge), there is usually a memory controller per CCD and various striping configurations are possible. Going from 8-way striping on a TR Pro, for example, to 4x2-way striping doesn't affect overall memory bandwidth as negatively as dropping from 8 to 2 banks does in real-world comparisons vs. Ryzens, since each CCD is able to fetch from its local memory simultaneously, and the whole mess is really dependent on workload, NUMA-awareness of programs if more than one NUMA node per processor is being used, etc. Additional banks OTOH usually just add to the capacity, and on most systems past dual-bank 4-channel Broadwell they force a drop in overall memory speed to the next lower JEDEC level if the second bank is populated (which might not actually affect bandwidth, since CL usually becomes faster along with the clock becoming slower, and the two are heavily related).
It was now running at above its rated XMP speed and below its baseline CL. Presumably there was a flaw in the motherboard's termination or voltage control logic somewhere and populating the second bank of quad channel "fixed" it. I think on old boards with memory controllers on the northbridge the 2nd bank could be running at a different speed than the first. Usually ECC RDIMMs are matched because the motherboards they're used in don't contain the ability to upclock them past their base speed; server boards are meant to be stable, not get 2fps more out of something (and their memory bandwidth is already high enough that it isn't the limiting factor, anyway). If full system memory wasn't purchased at the time the server was built it'll usually be replaced with more identical memory because it's what was on the vendor qualified hardware list, not because other memory wouldn't work. --
SIMMs?) It does have something to do with interleaving, because on a 32-bit bus, bits 0-7 are stored on the first chip, 8-15 on the next chip, 16-23 on the next, and 24-31 on the next, and 32-39 are back on the first. So the memory is interleaved across 4 chips, with a tiny stripe size. I think this may be how a dual-channel memory controller works too. Regardless, both the old and new way (if they're different) see the same speed increase because each bus cycle transfers more data. I think what we'll see is another eventual consolidation of two chips into one if dual-channel out-paces cranking up the clock rate. So no, I don't think this is anything new, but just because it's an old trick doesn't mean it's ineffective. I wish I could match 8 or 16 chips to get 8x or 16x the bus speed, like Alphas mostly did. --
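To make that byte-lane mapping concrete, here's a minimal sketch in C (my own illustration, not from any datasheet), assuming a 32-bit bus built from four 8-bit chips:

    #include <stdio.h>

    /* Byte-lane interleave as described above: on a 32-bit bus made of
       8-bit chips, byte N lives on chip N % 4, at cell N / 4 inside it. */
    int main(void) {
        for (unsigned addr = 0; addr < 12; addr++) {
            unsigned chip   = addr % 4;
            unsigned offset = addr / 4;
            printf("byte %2u -> chip %u, offset %u\n", addr, chip, offset);
        }
        return 0;
    }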
environments succeed or fail to take advantage of this feature. The wide disparity of benchmark results (as little as about 3% improvement to as much as about 80% improvement, based on the approximate but not clearly stated raw numbers of directly comparable components, and ignoring the sometimes misleading use of chart numbering) makes clear that the identification of these specific environments is vital to understanding whether dual-channel is of any use to a consumer for their particular needs. (That's why I came to this article in the first place.) Can't we find some more objective and more thorough sources for this 4-year-old technology? ~
or 4x2/3 interleave depending on how latency-sensitive the application is. Mr. Bilbo predicted something here, since they're now accessing DDR5 as though it were 2x32-bit-wide data lines instead of a single 64-bit-wide one. Not really true dual channel since they made it narrower at the same time, but it's there... then of course you have the 12-channel thing on Epyc 9004 and the newest Xeons, interleaving across a ton of modules for system memory bandwidths higher than all but the priciest consumer GPUs... performance desktops were moving in the right direction and made it up to quad-channel memory before taking a nose-dive post-Broadwell-E.
making the point that this article is extremely oversimplified at best and more like what I'd call completely incorrect in almost every aspect. In reality, you can't have memory pairs in channels 0/1 or in 2/3 that won't run at the same voltages because they share the setting. That's it. Most UEFI doesn't expose it but at least the 2011v3 chips (even non server) support lockstep modes for memory with various pairing / mirroring / interleave methods. It's a complicated topic. I'm afraid I can't condense anything down enough to help aside from the reference link and the others like it you can google from intel and AMD.
implies multiple memory buses, although older asynchronous DRAM systems could take advantage of limited interleaving on the same bus by sending commands to the next module before reading results back from the previous one. Dual-channel OTOH is a marketing buzzword which means the system can read from two modules faster than from a single one. Note the use of the word "channel" rather than an actual meaningful word. For example, dual-channel would describe a 2-way interleaved system as well as an early dual CPU Opteron which had two completely separate memory buses.
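For what it's worth, the interleaving described above boils down to a modulo on the block index; a toy sketch in C (block size invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* 2-way interleave with an assumed 64-byte block: consecutive blocks
       alternate between modules, so sequential accesses can overlap. */
    #define BLOCK 64u
    #define MODS   2u

    int main(void) {
        for (uint64_t addr = 0; addr < 4 * BLOCK; addr += BLOCK) {
            unsigned module = (unsigned)((addr / BLOCK) % MODS);
            printf("block at %3llu -> module %u\n",
                   (unsigned long long)addr, module);
        }
        return 0;
    }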
0" and "DIMM 1" of each channel (A and B). In practice, these would normally be labeled as bank 0/1, 2/3, 4/5, and 6/7 or simply as DIMM-0 through DIMM-3. The rest of the paragraph is also needlessly ambiguous -- what it's trying to say is that you can mix modules of different size in a channel, as long as the configuration is mirrored in the other channel. Which is somewhat redundant with stating the need for matching pairs of DIMMs. I'll try to clean it up later if no one gets around to it.
Please just do this, as it is highly logical so needs no mass discussion. Use "Multi-channel architecture" and do sections for "Dual-channel architecture", "Triple-channel architecture", and "Quad-channel architecture". But PLEASE make sure you sort all the current link-tos to link to the new appropriate section on the new page, e.g. "Multi-channel architecture#Dual-channel architecture". --
transmitted 4 bytes at a time at 2 ns cycles (a bus speed of 500 MHz). Memory is not interleaved in this case. Interleaving was useful in the days of DRAM, which required refreshing. It was interleaved because one bank would be accessed while the other refreshed. SDRAM does not need refreshing and this kind of interleaving would offer no gain for SDRAM.
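Spelling out the arithmetic in that example (just the poster's own numbers, nothing more):

    #include <stdio.h>

    /* 16 bytes fetched internally per 8 ns versus 4 bytes pushed out
       every 2 ns (500 MHz bus): both work out to the same 2 GB/s. */
    int main(void) {
        double internal = 16.0 / 8e-9;   /* wide, slow fetch */
        double external =  4.0 / 2e-9;   /* narrow, fast bus */
        printf("internal: %.1f GB/s\n", internal / 1e9);
        printf("external: %.1f GB/s\n", external / 1e9);
        return 0;
    }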
busses of memory? For a multi-processing application (separate virtual address space for each process), it could be possible for the operating system to allocate paged memory from separate busses for 2 to 4 processes (for double to quad channel memory), but I don't know if there are any operating systems that do this.
What is the Bottleneck diagram supposed to tell me? That my USB port is faster than my memory? That peripheral data has to go through memory before it goes to a CPU? I don't think it's the right way of visualizing the Bottleneck principle. It does not visualize how fast memory is in comparison to most
CPUs have one set of data lines to each channel if you check the pinouts. The only reason to have this configuration at all, since it eats up die space, is to take advantage of multiple sticks of memory to increase speed, and the easiest way to do that is hardware level striping across them. In the
The sole source for this article is currently a whitepaper from two technology companies that stand to benefit from promoting new memory technology. The whitepaper provides an elementary explanation of how dual-channel architecture works, but fails to discuss the question of what kinds of application
Now we're back to MCM mega-chips like TR Pro with 4 memory controllers with 2 DDR4 channels each or Epyc 9004 with 4x3 DDR5 channels which achieve the highest memory bandwidth (but take an intra chip latency hit) by interleaving memory across all 4 controllers. Or they can be run in 2/3x4 interleave
SIMMs in pairs has nothing to do with interleaving. It has to do with data width. For a 32-bit CPU, memory data is accessed 32 bits at a time. But since SIMMs were only 8 bits wide, you always needed to add SIMMs in 4s. The 386SX was an exception: it was a 32-bit CPU with 16-bit lines. Since it needed
No offense, but I think the graphic is misleading. It represents peripherals (AGP, IDE, and USB) to be as fast as the CPU, which is very wrong. In fact, they are even slower than the memory controller. It would be more correct to represent this as an inverted pyramid. I question the applicability
The requirements info in this article are completely full of it, or the computer I'm posting from can't exist, since it's an i7-6950x (they're cheap right now) with 4x8GB DDR4 in quad channel mode in one bank and 4x32GB modules in the other bank that are working fine as 160GB of quad channel total.
I'm confused about how the word "bank" is used and it is particularly confusing since the SDRAM chips on the DIMMs themselves each contain 8 banks of DRAM, addressed by the BA ("bank address") pins. My understanding: each memory location is uniquely addressed by (i) channel number, (ii) rank number,
This is highly misleading. First of all, the word "bank" is being used ambiguously to mean a pair of corresponding DIMMs from each channel, which is not technically correct since the term "bank" refers to a single side of a module (paired up for DIMMs). The whitepaper itself refers to these as "DIMM
I think it's worthwhile to mention that technologies like XDR and FB-DIMM were created with the idea that the high pin count of DDR was a bad thing, and those technologies instead seek to have wide internal busses which serialize data into thin external busses by having more on-chip circuitry. Possibly
As I understand it, the interleaving is done on a much smaller scale. For example, the first 64 bits are stored on one chip, the next 64 bits on the other chip, the third on the first chip, etc. Memory is usually transferred in bigger bursts (are they page-sized bursts?) to the CPU's cache, so one transfer of memory utilizes both chips. This has nothing to do with how many applications are running (or highways :P). So ya, it's kind of like RAID 0 with hard disks, except the stripe size is 64 bits instead of several KB. Hypertransport, on the other hand, allows each CPU to handle 1/n of the memory independently. For this, the memory is divided up into n large chunks (n = # of CPUs). For this, how memory is allocated to applications has a big impact. Hypertransport with dual channels (2 channels per CPU) has the potential to be four times faster than a single-CPU system with the single-channel memory architecture. By the way, do not copy this to the article without checking it, because I'm not sure that the details are correct. In fact there's probably at least one mistake. --
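A sketch of the stripe mapping guessed at above (assuming the 64-bit stripe; real controllers may differ):

    #include <stdint.h>
    #include <stdio.h>

    /* If channels alternate every 64 bits (8 bytes), address bit 3
       picks the channel, and a 64-byte burst to cache naturally pulls
       four 8-byte chunks from each channel in parallel. */
    int main(void) {
        for (uint64_t addr = 0; addr < 64; addr += 8) {
            unsigned channel = (unsigned)((addr >> 3) & 1);
            printf("bytes %2llu..%2llu -> channel %u\n",
                   (unsigned long long)addr,
                   (unsigned long long)(addr + 7), channel);
        }
        return 0;
    }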
So, perhaps it should be added by someone who knows the answer. I'd like to know if RAM, in a dual-channel configuration, continues to run at the same clock speed or not. For example, PC-3200 RAM runs at 200 MHz. Would it continue to run at 200 MHz in a dual-channel configuration? It seems like
speed but contains XMP information saying it should run much faster. RAM training at boot determines what speed the mess of memory as a whole will actually run at. I've had the speed go up and CL go down on older memory (i.e. it got faster) after the second bank was populated with faster memory.
The highway analogy is not complete, because the toll booth (memory controller) would be only 32 lanes, so all those 128 lanes of cars would still need to merge into only 32 lanes. Imagine the backup. Remember the CPU can only accept 32 bits of data at once. Imagine 64 car lanes going 50 mph before the
Wouldn't dual channel architecture effectively only have any benefit if the two channels are used separately? Meaning that a single application could never profit from more than 1 channel? Or that if two applications intensively use memory that is allocated on one bank, they effectively do not have any profit from dual channel architecture?
Actually, Pentiums did need 72-pin SIMMs in pairs to match their bus, and RAM in the same class as PC-100 provided a 64-bit bus (which would have required 4 matching 72-pin SIMMs if they hadn't changed things around again). This is what 8Mx64 means on a 64MB PC-100 DIMM. (And wasn't it 30 pins, not 32 pins, on the older
What's a highway? To use the RAID 0 analogy: in RAID 0, the data is written to the two disks in parallel, so disk 1 contains blocks 1,3,5 while disk 2 contains 0,2,4, for example. That way, RAID 0 can double the throughput because no file above a certain size can be located on one disc alone. Is dual channel working like this, with regard to the fragmentation of data?
It could be envisioned in the same way RAID 0 works, when compared to JBOD. With RAID 0 (ganged mode), it's up to the additional logic layer to provide better (ideally even) usage of all available hardware units (HDDs, memory banks); with JBOD (unganged mode) it relies on the statistical usage patterns to ensure even usage of all available hardware units. I'll try to find a reference, and add this into the article. —
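A rough sketch of the two modes under that analogy (the stripe size and the page-grained placement are my own assumptions, not AMD's documented behaviour):

    #include <stdint.h>
    #include <stdio.h>

    /* Ganged: one logical bus, hardware stripes every access across
       both channels. Unganged: channels are independent, and evenness
       depends on where the OS happens to place pages. */
    enum mode { GANGED, UNGANGED };

    static unsigned channel_of(uint64_t addr, enum mode m) {
        return m == GANGED ? (unsigned)((addr >> 6) & 1)   /* 64 B stripe  */
                           : (unsigned)((addr >> 12) & 1); /* toy per-page */
    }

    int main(void) {
        printf("ganged,   addr 0x0040 -> channel %u\n", channel_of(0x40, GANGED));
        printf("unganged, addr 0x1000 -> channel %u\n", channel_of(0x1000, UNGANGED));
        return 0;
    }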
This section mentions that unganged multi-channel memory could be used to speed up multi-threading, but since a multi-threading program shares a common virtual address space, how would a multi-threading application allocate memory so that the physical pages of memory ended up on separate unganged
Memory interleaving is a technical term that means the memory controller stores one block of data in the first module, then the next block in the next module, and so on, until it gets back to the first module. It only makes sense to do if you can access multiple modules in parallel, which usually
Can anyone explain the difference between these 2? They both increase memory bandwidth. I understand that interleaving helps accessing contiguous memory locations due to locality by overlapping CAS. Dual channel seems to imply no interleaving, but parallel access to both memory modules. This would
If the motherboard has two pairs of differently coloured DIMM sockets (the colours indicate which bank they belong to, bank 0 or bank 1), then one can place a matched pair of memory modules in bank 0, but a different-capacity pair of modules in bank 1, as long as they are of the same speed. Using
Dual Channel could be implemented as interleaved memory, but it probably isn't. I've never seen any non-consumer paper on this, so I can't say how it works. But one way dual channel could work is akin to the 386SX, where more data is read at once than can be transmitted (16 bytes is read in 8 ns, and
That Kingston whitepaper is very overhyped. The idea of using more than one bank of memory has been around for at least a decade. Memory has been too slow for processors for even longer than that. My Indigo2, which was built in 1996, had 4 memory banks (up to 3 SIMMs per bank). My HP J210 also had 4
sporadic system crashes I'd been having, although this may have been something that had settled into an empty slot and was just barely shorting a couple of pins. Who knows. Anyway this is wikipedia so the arbitrary crap I did to my computers over the years is "original research", but I'm just
That Intel Whitepaper is Hogwash. Comparing 1X256MB single-channel with 2X256MB single-channel is dumb: of course the system with more memory will perform better because there will be fewer page faults. The comparison should have been between a 1X512MB system and a 2X256MB system. Maybe Tom's Hardware or Sisoft have benchmark results.
benefits against the advertising claims and theory, which although true, are meaningless. In other words, your gain of ~50% memory bandwidth results in a gain of only +5% frames per second when running an actual game. That is the point of the Tom's article. Nobody disputed the benchmark scores
I can't believe you people take Tom's Hardware's garbage as a source for performance examples. To put it simply: I run Everest 5.5 with DDR2-1066 in single channel and I get 5970 MB/sec read; I put it in dual channel (G.Skill 2x2GB PI Edition) and I get 7870 MB/sec. There's a difference. Please take a look
I don't know if there is RAS latency in computer memory. Memory speed is given according to RAS. That is, a 10 ns memory has a RAS of 10 ns, because RAS is the clock strobe. If CPU and RAM ran at the same speed then waiting would only occur on CAS. There'll be no waiting for RAS in this
All applications benefit from dual-channel. Imagine heavy traffic on two highways, one with 64 lanes and one with 128 lanes (Dual-channel), the traffic on the highway with 128 lanes has less problems to go through than on the smaller highway.
This article is littered with misabbreviations. I once corrected them; however, they were reverted, and the guilty party insisted that "GiB" was the correct abbreviation for gigabytes. Was there some inverse revolution where letters were
There is no memory supporting only single-channel or only dual-channel. All memory sticks may work in dual-channel if you have a pair of two. Best is identical modules, or at least two similar ones (speed grade, memory density).
The maximum memory transfer rate is halved. Usually system performance does not decrease at the same rate, but it depends on the board/CPU how much it decreases (a P4 is severely affected, but a Core/Athlon64 not as hard).
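For concreteness, the peak rate being halved is just width times clock (standard DDR arithmetic, with DDR-400 taken as the example):

    #include <stdio.h>

    /* Peak rate = clock x 2 (double data rate) x 8 bytes per channel.
       DDR-400: 200 MHz x 2 x 8 B = 3.2 GB/s per channel. */
    int main(void) {
        double per_channel = 200e6 * 2 * 8;
        printf("single channel: %.1f GB/s\n", per_channel / 1e9);
        printf("dual channel:   %.1f GB/s\n", 2 * per_channel / 1e9);
        return 0;
    }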
booth, 32 car lanes going 200 mph after the booth, and 128 lanes going 50 mph before the booth and 32 lanes going 400 mph after the booth (dual channel). And yes, the data is striped as in RAID 0. --NYC 1:06p, 20 Dec 2006 (EST)
You're measuring the transfer rate using a benchmarking tool designed specifically to maximize transfer rate, i.e. the same assumption that manufacturers use in order to inflate the appearance of performance gain. A
Most of the time a computer will be reading far more than 4K of data from RAM into cache at once; there's prefetch involved for most reads. On Threadripper PRO and Epyc the interleaving can be controlled in UEFI settings: ... This determines the starting address of the interleave (bit 8, 9, 10 or 11). The options are 256 Bytes, 512 Bytes, 1 KB, 2 KB and Auto.
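Those size options correspond to which address bit flips the channel; a sketch of the bit math (my reading of the options, not vendor documentation):

    #include <stdint.h>
    #include <stdio.h>

    /* 256 B granularity -> bit 8 selects the channel, 512 B -> bit 9,
       1 KB -> bit 10, 2 KB -> bit 11 (two-channel case shown). */
    static unsigned channel_of(uint64_t addr, unsigned start_bit) {
        return (unsigned)((addr >> start_bit) & 1);
    }

    int main(void) {
        printf("256 B interleave, addr 0x100 -> channel %u\n", channel_of(0x100, 8));
        printf("2 KB interleave,  addr 0x800 -> channel %u\n", channel_of(0x800, 11));
        return 0;
    }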
Some retail boxed RAM, especially ones from Kingston, sometimes have a "Not Dual-Channel Compatible" warning label on them. I remember occasionally seeing this when I worked for <popular electronics store>.
somewhere serious like xbitlabs, or do your own measurements with your own computers. I hope it changes before I change the article, removing Tom's Hardware's garbage from a serious website like Knowledge.
Keep in mind that these usually still work but, as mentioned in this Wiki article, some motherboards may have issues with them. My P4 board with an SIS chipset had no problem dual channel-ing them.
Would it be wrong to think of dual channel memory as an analogous setup to two hard drives configured to RAID 0? You double the speed by splitting the bandwidth costs over two different mediums?
Why should the speed be cut in half? In dual-channel, both PC-3200 sticks are still operating at 200 MHz, as long as they are compatible with each other and the memory controller likes them, too. --
Aside from the terrible grammar and misspelled words, I felt that the lack of evidence and weasel words necessitated the removal until someone can write it up better and provide actual evidence
(i.e. software that is not a benchmarking tool) generally cannot be optimized in such a way, because it is more constrained by the CPU, I/O, or any number of other bottlenecks. You miss the
You should identify that the reason they need to match is because they will be run in sync, and most BIOSes will run them both at the speed of DIMM 0, rather than the fastest compatible speed
banks, not matching banks. Thus, in a 3-bank configuration where 2 banks are blue and one is black, dual channel setup would require 1 module in a blue slot and 1 module in the black.
16 bits, SIMMs had to be added in 2s. This was the days of 32-pin SIMMs. SIMMs of 72 pins and greater offer at least 32-bit-wide data, so today we no longer need to add SIMMs in pairs.
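The rule in that paragraph is just bus width divided by module width:

    #include <stdio.h>

    /* Modules needed per bank = CPU data-bus width / SIMM data width. */
    int main(void) {
        printf("32-bit bus, 8-bit SIMMs:  %d modules\n", 32 / 8);
        printf("16-bit bus (386SX):       %d modules\n", 16 / 8);
        printf("32-bit bus, 72-pin SIMMs: %d module\n",  32 / 32);
        return 0;
    }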
It would be useful to have an updated version with the salient bottleneck issue. A possible separate pair of images showing single channel/dual channel would also help.
But still there were numerous reports of users who've felt a performance boost from dual-channel. Some users reported a boost of circa 70% in comparison to single-channel.
is inaccurate. Even a CPU bus clocked at the same speed as the RAM would end up waiting, because SDRAM takes multiple cycles to read (because of RAS and CAS latency). --
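A worked example of that waiting (timings picked for illustration only):

    #include <stdio.h>

    /* Opening a row (tRCD) plus the CAS latency (CL) must elapse before
       the first data word appears, however fast the CPU bus is. */
    int main(void) {
        double clock_mhz = 200.0;   /* e.g. DDR-400's 200 MHz command clock */
        int trcd = 3, cl = 3;       /* illustrative latency cycles          */
        printf("first word after %d cycles = %.0f ns\n",
               trcd + cl, (trcd + cl) * 1000.0 / clock_mhz);
        return 0;
    }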
Internal structure, speed rating and capacity should match; no need to have identical pairs, although there are usually fewer problems with two identical sticks. --
and went through checking for supporting families. Everything should be updated until the E7 series is launched this fall. The AMD list could use updating too. --
as they now seem to call it, is not something that was invented because DDR was too slow. Remember how annoying it was when your Pentium required SIMMs in pairs?
Edit: I doubt it would make sense for it to be 4K-page sized, because the very common case of linear reads would gain next to nothing, so I guess the smaller the better?
In unganged mode, a core is assigned to use only one of the memory buses. A thread will only run on one core, so it doesn't have to worry about multiple buses.
Hi, this doesn't seem to be given, AFAICS. Is it per 64 bits (the width of a DRAM bus), or a cache line perhaps (typically 512 bits), or per 4K page, or something else? TIA
Thank you for the response, I now understand the concept of dual-channel operation. However it should be added to the main article for other people's benefit.
Good work. The graphic could use some work, maybe to show dual vs single: basically, a wider pipe to memory in the dual channel scenario. I like it though.
Your memory would only down-clock if you mixed it with a slower chip. For example, if you put a PC-2100 chip in there, all your RAM would run at 133 MHz.
As per the entry, "Each memory module in each slot should be identical to the one in its matching slot." Why is that? What if they aren't identical? --
I think there should be something in the article about the first dual channel chipsets. If I remember correctly, the first Dual Channel chipset was the Intel 850 (11/2000) and the first DDR-SDRAM dual channel chipset was the nVidia nForce (06/2001). --
this scheme, a pair of 1 GiB memory modules in bank 0 and a pair of matched 512 MiB modules in bank 1 would be acceptable for dual-channel operation.
I have edited the list of quad-channel supporting CPUs to include more 2nd and 3rd gen. chips and 4th gen. Haswell chips. I pulled everything from
I don't know if this is universal or not, but the sentence above is misleading. Colored banks can (always?) belong to the same channel -- to get dual-channel working correctly, paired modules should be installed to opposite banks, not matching banks.
there should be mention of the 480 pins that dual channel DDR-II requires? Unfortunately my wordcraft skills aren't particularly high today. --
Note that only sequential access is sped up by interleaving or dual channel. Random access gains little or no improvement. -NYC Dec 20, 2006 EST
Do dual channel motherboards accept memory which doesn't support dual channel? i.e. is it backward compatible? Would be handy to know...
I started the article, as it was requested. I'm not extremely familiar with the intricate architecture, so more detail would be good.
My vote would be to delete the whole section; the requirements there barely exist and are too complicated to really cover here. --
explain why it generally doesn't improve performance. It wouldn't speed up contiguous accesses up to the virtual memory page size.
supposed to just make stuff up and use the power of positive thinking to make it so?? :P Ya, someone should look into this. --
Since there haven't been any No's so far and since this seems like a good thing to do, I am going to go ahead and do it.
Hello. Updated to the best of my ability; added the graphic. Will add a fuller explanation of Intel vs. AMD later. --
The article is in need of improvement. Any experts on the subject are welcome to add info to this article. For others, the German Knowledge article on Dual Channel memory could be used for reference for improving this article.
None of this article is particularly true on consumer boards any more, where the RAM sold is often fairly terrible in
The SDRAMs' address and control pins are shared between channel 0 and channel 1. A diagram would help here.
Hello! That sounds like a good suggestion; any chances, please, for providing a few references for that? —
If you only use one stick of DDR2, is the speed effectively halved because it can't use dual channel?
of the term "bottleneck", actually, as a bottleneck is a slow point between two fast points. --
1221:. The technology behind this is similar, and most of the content will overlap in each article. 1191: 1159: 1084: 978: 490: 263: 591: 1093: 208: 1563:
https://www.intel.com/content/www/us/en/support/articles/000005657/boards-and-kits.html#dual
1348:
peripherals. Please help me out, but isn't the bottleneck in numbercrunching usually CPU-
Now that everyone is installing 4 slots of memory, is there any quad channel mobo out?
Please validate this information. I do not have the technical knowledge to do it myself.
1209: 748: 70: 1538: 1503: 1391: 1155: 1080: 920: 872: 759: 573: 51: 1412: 1226: 1072: 996: 675: 382: 365: 161: 1472: 1099: 834: 883:
At least on my Gigabyte mobo, it is recommended to install them in the SAME colored slots.
1282:
649: 606: 510: 473: 392: 1735: 1205: 1075:
concerning the continued deprecation of IEC prefixes. Please comment at the MOSNUM talk page.
to abbreviations, or does this other user not know what he's talking about?
352:
The requirements "data" given in the article do NOT match what Intel says:
I've read before that the speed drops in half (100 MHz in this example).
banks, and would do 16 way interleaving if it had 16 identical SIMMS.
495:
Some real life data (benchmarks etc.) would be interesting to see. --
259: 782:
Yup, the boy don't know what's the difference between MB's and GB's
1035: 619: 415: 310: 255: 242: 221: 867:-channel working correctly, paired modules should be installed to 348:
1495: 616:
any CPU with a bus speed that is greater than the memory speed,
(iii) bank address, (iv) row-address and (v) column-address:
"Channel 0" refers to all memory connected to controller data bits 63:0.
"Channel 1" refers to all memory connected to controller data bits 127:64.
"Rank 0" refers to all memory connected to controller chip select CS0_N.
"Rank 1" refers to all memory connected to controller chip select CS1_N.
"Bank 7" refers to memory addressed when BA=111.
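Put as code, that decomposition might look like the following (field positions and widths are invented for illustration; real controllers differ):

    #include <stdint.h>
    #include <stdio.h>

    /* One illustrative split of a physical address into the five parts
       listed above: channel, rank, bank, row and column. */
    struct dram_coord { unsigned channel, rank, bank, row, column; };

    static struct dram_coord decode(uint64_t a) {
        struct dram_coord c;
        c.channel = (unsigned)((a >> 6)  & 0x1);    /* 64 B interleave  */
        c.column  = (unsigned)((a >> 7)  & 0x3FF);  /* 10 column bits   */
        c.bank    = (unsigned)((a >> 17) & 0x7);    /* 8 banks, BA pins */
        c.rank    = (unsigned)((a >> 20) & 0x1);    /* CS0_N / CS1_N    */
        c.row     = (unsigned)((a >> 21) & 0xFFFF); /* 16 row bits      */
        return c;
    }

    int main(void) {
        struct dram_coord c = decode(0x12345678ull);
        printf("ch %u rank %u bank %u row %u col %u\n",
               c.channel, c.rank, c.bank, c.row, c.column);
        return 0;
    }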
I came here for clock speed information and couldn't find it
correct. In fact there's probably at least one mistake. --
about the measures in performance gain with dual channel
"In order to achieve this, two or more memory modules must be installed into matching banks."
http://www.mersenneforum.org/showthread.php?t=20575
The technology behind this is similar, and most of the content will overlap in each article.
I'd really appreciate some info on this, please!
Should MOSNUM continue to deprecate IEC prefixes?
My 486 required 30-pin SIMMs in quads. :) --
I've removed this line from actual results:
So who is right -- the author or Intel?
