DIMM Memory Slots and Configurations

  • The PCIEX8 slot shares bandwidth with the PCIEX16 slot; when the PCIEX8 slot is populated, the PCIEX16 slot operates at up to x8 mode. There is also 1 x PCI Express x16 slot running at x4 (PCIEX4), which shares bandwidth with the M2P connector; the PCIEX4 slot operates at up to x2 mode when a PCIe SSD is installed in the M2P connector.
  • DDR4 is the most recent DDR generation, with peak module bandwidths of up to roughly 32 GB/s depending on speed grade. Generally, it is the fastest type of DDR memory currently available, with the best performance and the largest module capacities; the bandwidth arithmetic is sketched below.
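
To make the bandwidth figure concrete, here is a minimal sketch of the standard peak-bandwidth arithmetic for a DDR module, assuming the standard 64-bit (8-byte) data bus; the speed grades shown are illustrative examples.

```python
# Peak theoretical bandwidth of a DDR module:
#   bandwidth (GB/s) = transfer rate (MT/s) * bus width (bytes)
# A standard DIMM has a 64-bit data bus, i.e. 8 bytes per transfer.

BUS_WIDTH_BYTES = 8  # 64-bit data bus

def peak_bandwidth_gbs(transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return transfer_rate_mts * BUS_WIDTH_BYTES / 1000

for grade, rate in [("DDR4-2133", 2133), ("DDR4-3200", 3200), ("DDR4-4000", 4000)]:
    print(f"{grade}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR4-2133: 17.1 GB/s
# DDR4-3200: 25.6 GB/s
# DDR4-4000: 32.0 GB/s  <- the 32 GB/s figure quoted above
```
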
Slots

When referring to memory, a memory bank (or bank) is the smallest amount of memory that the processor can address at one time. The sections below cover the common configurations for the major memory module types. DIMM or RIMM memory modules can be installed individually, one module per slot.

There are some differences between UDIMMs and RDIMMs that are important in choosing the best options for memory performance.

RDIMMs have a register on board the DIMM (hence the name "registered" DIMM). The register/PLL buffers only the address, control, and clock lines; none of the data goes through the register/PLL on an RDIMM. (PLL stands for Phase-Locked Loop. On prior generations such as DDR2, the register, which buffers the address and control lines, and the PLL, which generates extra copies of the clock, were separate parts; for DDR3 they are combined in a single part.)


There is about a one-clock-cycle delay through the register, which means that with only one DIMM per channel, UDIMMs will have slightly lower latency and slightly better bandwidth. But when you go to two DIMMs per memory channel, the high electrical loading on the address and control lines forces the memory controller to use something called "2T" or "2N" timing for UDIMMs.

Consequently, every command that normally takes a single clock cycle is stretched to two clock cycles to allow for settling time. Therefore, for two or more DIMMs per channel, RDIMMs will have lower latency and better bandwidth than UDIMMs.
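
To see what the 2T penalty means in practice, here is a rough back-of-the-envelope sketch, assuming a DDR3-1333 command clock of 666.67 MHz; the numbers are illustrative, not vendor measurements.

```python
# Back-of-the-envelope comparison of command-bus timing (assumed
# values, not vendor measurements). DDR3-1333 runs a 666.67 MHz
# command clock, so one cycle is ~1.5 ns.
CLOCK_MHZ = 666.67
CYCLE_NS = 1000 / CLOCK_MHZ  # ~1.5 ns

def command_rate(cycles_per_command: int) -> float:
    """Commands per microsecond the command bus can carry."""
    return CLOCK_MHZ / cycles_per_command

# UDIMM at 1 DIMM per channel: 1T timing, no register delay.
print(f"UDIMM 1 DPC:   {command_rate(1):.0f} cmds/us, +0.0 ns latency")
# UDIMM at 2 DIMMs per channel: loading forces 2T, halving command throughput.
print(f"UDIMM 2 DPC:   {command_rate(2):.0f} cmds/us, +0.0 ns latency")
# RDIMM at any DIMM count: stays 1T, but the register adds ~1 cycle of delay.
print(f"RDIMM any DPC: {command_rate(1):.0f} cmds/us, +{CYCLE_NS:.1f} ns latency")
# Output: 667 vs 333 vs 667 cmds/us -- at 2 DPC the RDIMM keeps twice the
# command throughput in exchange for a fixed ~1.5 ns register delay.
```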

Based on guidance from Intel and internal testing, RDIMMs have better bandwidth when using more than one DIMM per memory channel (recall that Nehalem has up to 3 memory channels per socket). But, based on results from Intel, for a single DIMM per channel, UDIMMs produce approximately 0.5% better memory bandwidth than RDIMMs for the same processor frequency and memory frequency (and rank). For two DIMMs per channel, RDIMMs are about 8.7% faster than UDIMMs.

For the same capacity, RDIMMs will require about 0.5 to 1.0 W more power per DIMM due to the register/PLL. The reduction in memory controller power needed to drive the DIMMs on the channel is small in comparison to the RDIMM register/PLL power adder.
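
As a rough sense of scale, the sketch below totals that power adder for a fully populated two-socket Nehalem system, assuming a midpoint value of 0.75 W per DIMM (an assumption within the 0.5 to 1.0 W range above).

```python
# Rough system-level total of the register/PLL power adder. The
# 0.75 W figure is an assumed midpoint of the 0.5-1.0 W range above.
REGISTER_PLL_W = 0.75
sockets, channels, dimms_per_channel = 2, 3, 3
rdimm_count = sockets * channels * dimms_per_channel  # 18 DIMMs
print(f"{rdimm_count} RDIMMs -> ~{rdimm_count * REGISTER_PLL_W:.1f} W extra")
# 18 RDIMMs -> ~13.5 W extra across the whole system
```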


RDIMMs also provide an extra measure of RAS. They provide address/control parity detection at the Register/PLL such that if an address or control signal has an issue, the RDIMM will detect it and send a parity error signal back to the memory controller. It does not prevent data corruption on a write, but the system will know that it has occurred, whereas on UDIMMs, the same address/control issue would not be caught (at least not when the corruption occurs).
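
The underlying idea is ordinary parity checking. The sketch below is a simplified model of it, not the actual register logic:

```python
# Illustrative model of address/control parity checking, not the
# actual register logic. The memory controller sends a parity bit
# alongside the address/control signals; the RDIMM's register
# recomputes parity and flags a mismatch back to the controller.

def even_parity(bits: list[int]) -> int:
    """Parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

address_and_control = [1, 0, 1, 1, 0, 0, 1, 0]  # bits driven by the controller
sent_parity = even_parity(address_and_control)

# Suppose one line is corrupted in flight (e.g. by noise on a loaded bus):
received = address_and_control.copy()
received[3] ^= 1  # single-bit flip

if even_parity(received) != sent_parity:
    print("RDIMM register: parity error -> signal the memory controller")
# The write may still land at the wrong address, but the system knows
# something went wrong -- on a UDIMM the corruption would be silent.
```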

Another difference is that server UDIMMs support only x8 wide DRAMs, whereas RDIMMs can use x8 or x4 wide DRAMs. Using x4 DRAMs allows the system to correct all possible DRAM device errors (SDDC, or “Chip Kill”), which is not possible with x8 DRAMs unless channels are run in Lockstep mode (huge loss in bandwidth and capacity on Nehalem). So if SDDC is important, x4 RDIMMs are the way to go.
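
The x4 advantage comes down to simple arithmetic on the 72-bit ECC word (64 data bits plus 8 ECC bits). The sketch below is a simplified view of symbol-based correction; real SDDC implementations vary.

```python
# Simplified view of why x4 DRAMs enable SDDC ("Chip Kill").
# An ECC DIMM has a 72-bit data path, sliced across DRAM devices of a
# given width. ECC of this class can correct roughly one failed
# device-sized slice (symbol) per word.

ECC_WORD_BITS = 72  # 64 data bits + 8 ECC bits

for dram_width in (4, 8):
    devices = ECC_WORD_BITS // dram_width
    print(f"x{dram_width} DRAMs: {devices} devices per 72-bit word; "
          f"a dead device corrupts {dram_width} adjacent bits")
# x4 DRAMs: 18 devices per word; a 4-bit slice fits inside a correctable
#   symbol, so a whole-chip failure is recoverable in independent mode.
# x8 DRAMs: 9 devices per word; an 8-bit slice spans more than one
#   symbol, so full SDDC needs two channels ganged in lockstep mode.
```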

In addition, please note that UDIMMs are limited to two DIMMs per channel, so RDIMMs must be used if more than two DIMMs per channel are needed (some of Dell's servers have three-DIMMs-per-channel capability).
In summary, the comparison between UDIMMs and RDIMMs is:

  • Typically UDIMMs are a bit cheaper than RDIMMs.
  • For one DIMM per memory channel, UDIMMs have slightly better memory bandwidth than RDIMMs (0.5%).
  • For two DIMMs per memory channel, RDIMMs have better memory bandwidth than UDIMMs (8.7%).
  • For the same capacity, RDIMMs require about 0.5 to 1.0 W more power per DIMM than UDIMMs.
  • RDIMMs provide an extra measure of RAS: address/control signal parity detection.
  • RDIMMs can use x4 DRAMs, so SDDC can correct all DRAM device errors even in independent channel mode.
  • UDIMMs are currently limited to 1 GB and 2 GB DIMM sizes from Dell.
  • UDIMMs are limited to two DIMMs per memory channel.


DIMM Count and Memory Configurations

Recall that you are allowed up to 3 DIMMs per memory channel (i.e., 3 banks), with 3 channels per socket, for a total of 9 DIMMs per socket. With Nehalem, the actual memory speed depends upon the speed of the DIMMs themselves, the number of DIMMs in each channel, and the CPU itself. Here are some simple rules for determining DIMM speed.


  • If you put only one DIMM in each memory channel, you can run the DIMMs at 1333 MHz (maximum speed). This assumes that the processor supports 1333 MHz (currently, the 2.66 GHz, 2.80 GHz, and 2.93 GHz processors support 1333 MHz memory) and that the memory is capable of 1333 MHz.
  • As soon as you put one more DIMM in any memory channel (two DIMMs in that channel) on any socket, the speed of all the memory drops to 1066 MHz (the memory runs at the fastest common speed for all DIMMs).
  • As soon as you put more than two DIMMs in any one memory channel, the speed of all the memory drops to 800 MHz. These rules are sketched as a lookup below.
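
These rules reduce to a simple lookup keyed by the most heavily populated channel; the sketch below models them, with the CPU and DIMM speed caps passed in as parameters (names and defaults are illustrative).

```python
# Memory speed rules from the list above (Nehalem generation).
# The whole system runs at the fastest speed every channel can support,
# so the most heavily populated channel sets the pace.

SPEED_BY_DPC = {1: 1333, 2: 1066, 3: 800}  # MHz by DIMMs per channel

def memory_speed(dimms_per_channel: list[int],
                 cpu_max_mhz: int = 1333,
                 dimm_max_mhz: int = 1333) -> int:
    """Effective memory speed given the population of each channel."""
    worst_channel = max(d for d in dimms_per_channel if d > 0)
    return min(SPEED_BY_DPC[worst_channel], cpu_max_mhz, dimm_max_mhz)

# One DIMM in each of three channels: full speed.
print(memory_speed([1, 1, 1]))                    # 1333
# Adding a second DIMM to any one channel drops everything.
print(memory_speed([2, 1, 1]))                    # 1066
# Three DIMMs in any one channel drops everything to 800 MHz.
print(memory_speed([3, 2, 1]))                    # 800
# A processor capped at 1066 MHz memory limits even a 1-DPC config.
print(memory_speed([1, 1, 1], cpu_max_mhz=1066))  # 1066
```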



So as you add more DIMMs to any memory channel, the memory speed drops. This is due to the electrical loading of the DRAMs reducing timing margin, not to power constraints.
Also note that if you don't completely fill all memory channels, memory bandwidth performance is reduced. Think of these as "unbalanced" configurations from a memory perspective.



Troubleshooting a Multi-DIMM Failure State

A multi-DIMM failure state is when a single DIMM failure causes other DIMMs in the same channel (or a second channel) on a memory riser card to become disabled or appear as if they have failed.

When a DIMM failure occurs, check the Oracle ILOM system event log (SEL) to:

  • Identify the first DIMM that failed.

  • Note any additional DIMM failures occurring closely after the initial DIMM failure.

  • Identify the memory riser card that contains the failed DIMM.

  • Note the channel(s) in which any additional DIMM failures have occurred.

If another DIMM (or DIMMs) has failed after the initial occurrence of a single DIMM failure, and if the DIMM failure is on the same memory riser card, then the server might be in a multi-DIMM failure state.
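
The checks above amount to scanning the SEL for the first DIMM failure and then for follow-on failures on the same riser card. The sketch below models that logic over a hypothetical, simplified event format; the field names and the 60-second "closely after" window are illustrative assumptions, not Oracle ILOM's actual schema or a documented threshold.

```python
# Sketch of the multi-DIMM-failure check described above, run over a
# hypothetical simplified event list. Field names and the 60-second
# window are illustrative assumptions, not Oracle ILOM's actual SEL
# schema or a documented threshold.

from dataclasses import dataclass

@dataclass
class DimmFailure:
    timestamp: int   # e.g. seconds since boot
    riser: str       # memory riser card, e.g. "P0/MR0"
    slot: str        # DIMM slot, e.g. "D6"

def check_multi_dimm_failure(events: list[DimmFailure],
                             window_s: int = 60) -> None:
    if not events:
        return
    events = sorted(events, key=lambda e: e.timestamp)
    first = events[0]  # the DIMM that failed first
    followers = [e for e in events[1:]
                 if e.riser == first.riser
                 and e.timestamp - first.timestamp <= window_s]
    if followers:
        slots = ", ".join(e.slot for e in followers)
        print(f"Possible multi-DIMM failure state: replace {first.slot} "
              f"on {first.riser} first; {slots} may only appear failed.")
    else:
        print(f"Single failure: replace {first.slot} on {first.riser}.")

# The scenario described below: D6 fails, D9 follows one second later.
check_multi_dimm_failure([
    DimmFailure(3236, "P0/MR0", "D6"),
    DimmFailure(3237, "P0/MR0", "D9"),
])
```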

For example, consider the following scenario, as recorded in the system event log.


In this scenario, the DIMM in D6 shows as failed at 00:53:56, and that failure is subsequently followed by a reported failure of the DIMM in D9, which failed at 00:53:57. Both DIMMs are on the same memory riser card (P0/MR0). Each DIMM is on a separate channel, but both are linked to the same memory buffer ASIC. Additionally, all of the DIMMs in both channels might have been disabled by the system. This scenario could be an instance of a multi-DIMM failure state.

How to Troubleshoot a Multi-DIMM Failure State

To troubleshoot this issue, replace only the DIMM that logged the initial failure and return the server to operation to see if the multi-DIMM failure state persists. If a multi-DIMM failure state had occurred, replacing only the DIMM that failed initially rectifies the fault state of both the initial DIMM and the subsequent DIMMs. If the failures persist, the issue could be with the DIMMs or with the memory riser card.